Our services

Data Engineering and Data Architecture

Data silos keep you from getting value from your data

Data integration is the process of combining data from different sources and formats for unified analysis.

As organisations collect ever more data, they typically end up with multiple data sources used in isolation for a variety of reasons. But siloed data creates challenges - data quality issues, information taken out of context, duplicated efforts that waste valuable time and resources - making it difficult for a business to maximise the value of its data.

Cloud-based data architectures make it possible to ingest new data sources in minutes and to scale storage and processing just as quickly. This flexibility increases the potential value of your data - but for many organisations, rapid accumulation has simply meant more data, without systematic organisation and sharing across the business.

Stock photo (Canva): Grain silos can be compared to data silos, where data sits like grain in its own closed system

You need experienced data engineers and data architects

Breaking down data silos creates the opportunity to leverage data across the organisation and to agree on central definitions of terms that would otherwise cause confusion.

When done well, data integration enables you to meet business requirements, increase data quality and deliver more valuable data to your business users, who in turn can shift their focus from collecting and compiling data to using it for learning and improvement.

Sometimes it is enough to ingest raw data; other times we need to agree on how the data should fit together as a whole.

This calls for experienced data engineers and data architects who ensure that data flows are built as efficiently as possible and that data is prepared for analytics and reporting.

Newer techniques and tools for data engineering and data modelling can simplify and dramatically accelerate the process - including steps such as data cleansing, data transformation and integration to make data ready for analysis.

Our data engineering and data architecture services

We deliver:

1. Solution design and data architecture

We can help you understand the needs and requirements related to a data product, and help ensure you get the best possible solution pattern and design for data flows and data transformation. Solving the wrong problem, or choosing the wrong solution, can limit value for users and create significant maintenance costs over time.

2. Data integration

We will help you choose the right tools and techniques for integrating your various data sources, and determine where integration should happen - for example in a data stream (publish/subscribe), a data lake, or a staging layer in a data warehouse or data lakehouse. We will also help you prioritise which data should be integrated - and which should not - to control the costs of integrating, transforming and storing your data.
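To make the staging-layer option concrete, here is a minimal, library-agnostic Python sketch of the landing pattern: raw events from a publish/subscribe stream are stored unchanged, partitioned by source and load date, so downstream transformation can be replayed at any time. The paths, source name and event shape are illustrative assumptions, not a specific client setup.

    import json
    from datetime import datetime, timezone
    from pathlib import Path

    # Root of the staging area; in practice an ADLS / S3 / GCS location
    STAGING_ROOT = Path("datalake/staging")

    def land_event(source: str, event: dict) -> Path:
        """Append one raw event to today's partition for the given source."""
        load_date = datetime.now(timezone.utc).strftime("%Y-%m-%d")
        partition = STAGING_ROOT / source / f"load_date={load_date}"
        partition.mkdir(parents=True, exist_ok=True)
        target = partition / "events.jsonl"
        with target.open("a", encoding="utf-8") as f:
            f.write(json.dumps(event) + "\n")
        return target

    # Example: landing a message consumed from a publish/subscribe topic
    land_event("crm", {"customer_id": 42, "status": "active"})

Landing raw data unchanged like this also keeps storage and transformation costs visible per source, which supports the prioritisation described above.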

3. Data pipelines / data flows

We design and implement data flows using modern tools to automate workflows and testing, standardise and accelerate data transformation, remove bottlenecks in data engineering, and involve people in different data roles in developing a data flow. This makes your data more useful for decision support and analytics.
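As an illustration, here is a minimal sketch of such a flow in Apache Airflow (one of the orchestration tools listed further down). The DAG, task and function names are illustrative assumptions, and the extract/transform/load bodies are left as stubs.

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract_orders():
        """Pull raw order data from a (hypothetical) source system."""

    def transform_orders():
        """Cleanse and standardise the extracted data."""

    def load_orders():
        """Write the transformed data to the analytics layer."""

    # Airflow 2.x-style DAG: scheduled daily, no backfill of old runs
    with DAG(
        dag_id="orders_daily",
        start_date=datetime(2024, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        extract = PythonOperator(task_id="extract", python_callable=extract_orders)
        transform = PythonOperator(task_id="transform", python_callable=transform_orders)
        load = PythonOperator(task_id="load", python_callable=load_orders)

        # Explicit dependencies make bottlenecks visible and the flow testable
        extract >> transform >> load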

4. Data transformation and data modelling

Much of your data needs to be cleansed, combined with other data, and enhanced with derived business logic to create a reliable business-ready layer in your data lakehouse or data warehouse. We help you transform raw data into actionable information by using proven principles, technologies and techniques to create robust analytical solutions ready for use.
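A minimal pandas sketch of that raw-to-business-ready step - cleansing, combining two sources and deriving a business rule. The column names and the high-value threshold are illustrative assumptions.

    import pandas as pd

    raw_customers = pd.DataFrame(
        {"customer_id": [1, 2, 2], "name": [" Ada ", "Linus", "Linus"]}
    )
    raw_orders = pd.DataFrame(
        {"customer_id": [1, 1, 2], "amount": [120.0, 80.0, 310.0]}
    )

    # Cleanse: trim stray whitespace and drop duplicate customer rows
    customers = raw_customers.assign(
        name=raw_customers["name"].str.strip()
    ).drop_duplicates(subset="customer_id")

    # Combine: total spend per customer from the orders source
    spend = raw_orders.groupby("customer_id", as_index=False)["amount"].sum()
    customer_360 = customers.merge(spend, on="customer_id", how="left")

    # Derive business logic: flag high-value customers (assumed threshold)
    customer_360["is_high_value"] = customer_360["amount"] > 200

    print(customer_360)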

5. ELT and ETL frameworks

We have created reusable frameworks for ELT and ETL patterns that let you ingest data quickly, with consistent naming conventions, auditable processes, and provenance that is easy to trace back to the source systems.
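As a flavour of what such a framework enforces, here is a simplified Python sketch - not our actual framework - of a generic ingest step that applies a naming convention and stamps every batch with audit columns:

    import uuid
    from datetime import datetime, timezone

    import pandas as pd

    def ingest(df: pd.DataFrame, source_system: str, entity: str) -> pd.DataFrame:
        """Standardise column names and add audit metadata to a raw extract."""
        out = df.copy()
        # Naming convention: lower_snake_case for every column
        out.columns = [c.strip().lower().replace(" ", "_") for c in out.columns]
        # Audit columns: where the data came from and when this batch landed
        out["_source_system"] = source_system
        out["_source_entity"] = entity
        out["_batch_id"] = str(uuid.uuid4())
        out["_loaded_at"] = datetime.now(timezone.utc).isoformat()
        return out

    raw = pd.DataFrame({"Customer ID": [1, 2], "Full Name": ["Ada", "Linus"]})
    staged = ingest(raw, source_system="crm", entity="customers")
    print(list(staged.columns))

With every table sharing the same audit columns, provenance questions ("where did this row come from, and when?") can be answered directly from the data.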

6. Data cleansing and data quality / data observability

We will help you establish standards and thresholds for data quality, determine your best approach to data cleansing (sometimes simple manual rules make more sense than expensive technology), optimise your existing data cleansing tools, and build support in your organisation for initiatives that promote continuous data quality monitoring (data observability).
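To illustrate the "simple rules" end of that spectrum, here is a minimal Python sketch of threshold-based quality checks; the two rules and the 1% null threshold are illustrative assumptions.

    import pandas as pd

    def check_quality(df: pd.DataFrame) -> list:
        """Return a list of human-readable data quality violations."""
        failures = []
        # Rule 1: customer_id must be unique
        if df["customer_id"].duplicated().any():
            failures.append("customer_id contains duplicates")
        # Rule 2: at most 1% of email values may be missing (assumed threshold)
        null_share = df["email"].isna().mean()
        if null_share > 0.01:
            failures.append(f"email null share {null_share:.1%} exceeds 1% threshold")
        return failures

    df = pd.DataFrame({"customer_id": [1, 2, 2], "email": ["a@x.no", None, None]})
    for failure in check_quality(df):
        # In practice these alerts would feed monitoring and observability tooling
        print("QUALITY ALERT:", failure)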

Stock photo (Canva): We work together with you on data architecture and data modelling

Our expertise in data engineering and data architecture

Our data engineers and data architects have expertise in tasks such as:

  • working with data architecture for data products based on business needs
  • modelling data to the right level according to use cases - in close dialogue with your users and experts who know the data best
  • building robust and automated data flows for ingesting, processing and storing data
  • working with agile methods and deliveries, with continuous development of data flows and data products
  • setting up and improving monitoring, security and access management
  • documenting data flows and ensuring that data is linked to technical and business metadata, so that data becomes discoverable and usable over time

Why you should choose Glitni for data engineering and data architecture

  1. We see the big picture so you get lasting value from data flows
    We identify the right challenges and connect them with effective technological solutions, and communicate equally well with technologists and business stakeholders. This is especially important for translating needs into the right level of data transformation. Sometimes it makes most sense, for example, to transform data for a specific data product rather than make the data modelling generic.
  2. We have experience with all common ETL and ELT tools - including the newer ones
    Data platforms have developed rapidly over the last five years. Where we previously relied primarily on broader toolboxes like IBM Cloud Pak for Integration, Informatica PowerCenter, Azure Data Factory and Google Dataflow, today the toolbox is increasingly fragmented.
    We divide the tools used for data engineering and data modelling into subcategories such as:
    • data ingestion (e.g. Fivetran, Stitch)
    • data orchestration (e.g. Airflow, Astronomer, Prefect)
    • data observability (e.g. Great Expectations, Monte Carlo)
    • data storage (e.g. Google BigQuery, Azure Data Lake Gen2, Snowflake)
    • data processing (e.g. Google BigQuery, Databricks, Snowflake, Azure Synapse Analytics)
    • data transformation (e.g. dbt, Google Datafold, Databricks Delta Live Tables)
    We help you navigate what should be used for what, and ensure all data engineers on the team use these tools as efficiently as possible.
  3. We help everyone work efficiently with new techniques
    Perhaps you find it challenging to develop, test and deploy changes to data flows. We help all data engineers and data architects work efficiently using modern techniques: we set up effective CI/CD pipelines, good code and documentation standards, and the right level of monitoring and logging.
  4. We bring templates and methodology that save you time
    Glitni brings its own templates and code examples for data ingestion, data orchestration, data transformation and storage. These templates and examples give us a good starting point to adapt to your organisation’s needs.
  5. We work together, so you build competence
    We work as an integrated team together with you and the other data engineers and data architects you use. It is important that your own people actively participate in the development work, so that you can continue developing the data flows yourselves when we are no longer there. Working this way also means there is no big handover from consultants to you at the end.


Meet our experienced data engineers and data architects!

By working with us, you get access to a team of experts who can help you with solution design, data modelling and data engineering - and assist the rest of the team with new techniques and technologies.

Sven Eigenbrodt

Data Platform Engineer

Sven is an experienced Data Platform Engineer and Data Engineer with a background in team leadership, development and architecture.

Lars Snekkerhaugen

Data Platform Engineer

Lars is an experienced Data Engineer and Data Platform Engineer who is familiar with modern data platforms on Azure and AWS, Databricks and Snowflake.

Knut Arne Smeland

Data Platform Engineer

Knut Arne is an experienced data engineer and platform developer who is passionate about building modular and robust platforms with minimal complexity.

Halvar Trøyel Nerbø

Data Platform Engineer

Trøyel is a dedicated Data Platform Engineer who specialises in building data lake and lakehouse-based data platforms in the cloud.

Runar Alvseike

Data Platform Engineer

Runar is a seasoned Data Engineer within BI and Data & Analytics, and has spent the majority of his career building data lakehouse-based data platforms.

Sindre Grindheim

Data Platform Engineer

Sindre is an experienced Data Platform Engineer specialising in the architecture and implementation of modern data platforms on Azure and Google Cloud, Databricks and Snowflake.


Get in Touch!

Feel free to contact us for a chat about how you can best establish and develop your data platform to create value from data!

Magne Bakkeli
91 66 22 69
magne.bakkeli@glitni.no