A Danish medical company with a particular focus on the treatment and combating of chronic diseases. The company is represented internationally, running clinical trials in more than 50 countries, employing 43,000+ people across 80 locations worldwide, and marketing its products in 170 countries.


The company is characterized by being very knowledge-heavy: employees’ access to internal data sets, and the ability to combine them with international sources, is crucial to supporting continued research and thereby remaining a market leader.

Before the start of the project, the company had two primary challenges in relation to access and handling of both its own and internationally obtained research data.

Access to international data sources typically takes place by acquiring so-called data-processor rights for a given data set when a contract is entered into. For the company’s own research results, an employee has the status of data owner. Permission to use a data set must be obtained from either the data owner or the data processor.

Previously, it was up to the individual employee with data-processor or data-owner status to decide where a given data set should be stored. If another employee or a project group later needed to use similar data, they first had to investigate whether relevant data already existed in the organization, and then contact the relevant data processor or owner and apply for access. When starting new research projects, this could mean long search processes and personal inquiries to numerous colleagues. In addition, there was no complete clarity about which rules applied to access and use of the individual data sets, as these rules were not necessarily registered in the same place as the data set itself.

The consequence was delays in the start-up of planned research projects and, not least, challenges with GxP compliance.

GxP is an acronym for the rules and guidelines (good practice) that apply within life science, where the x is replaced depending on the specific area: GLP refers to Good Laboratory Practice, GCP to Good Clinical Practice, and so on. GxP also defines good practice for handling data within life science, typically referred to as Data Integrity.


KeyCore orchestrated a data lake solution in AWS using, among other things, Redshift, Kinesis, Cognito and integration with Microsoft Active Directory. The solution is based on managed AWS services, ensuring the smallest possible operational footprint for the company. On top of this, KeyCore developed a dedicated catalog for searching data sets and obtaining the necessary permissions.

With the customer’s AWS data lake, all data sets – regardless of source, type and ownership – are gathered in one place, accessible from all units in the global company. Data sets fall into one of two categories: Self-service or Managed data. Self-service data sets can be accessed and used freely, while Managed data requires permission to be obtained. Categorization, along with the policies for access, permissions and ownership or processing rights, is registered as a mandatory step when new material is uploaded.
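The registration step can be illustrated with a minimal sketch in Python. The names here (`Category`, `DataSetRecord`, `register`) are illustrative assumptions, not the customer’s actual schema; the point is that an upload without an owner and an access policy is rejected, so the mandatory metadata is always present in the catalog.

```python
from dataclasses import dataclass, field
from enum import Enum


class Category(Enum):
    SELF_SERVICE = "self-service"   # freely accessible to all employees
    MANAGED = "managed"             # access requires permission from owner/processor


@dataclass
class DataSetRecord:
    """Metadata registered as a mandatory step when a data set is uploaded."""
    name: str
    category: Category
    owner: str            # data owner or data processor
    access_policy: str    # rules governing access to and use of this data set
    tags: list = field(default_factory=list)


def register(catalog: dict, record: DataSetRecord) -> None:
    """Reject uploads that lack the mandatory ownership and policy metadata."""
    if not record.owner or not record.access_policy:
        raise ValueError("owner and access policy are mandatory at upload time")
    catalog[record.name] = record


catalog = {}
register(catalog, DataSetRecord(
    name="trial-2021-phase3",
    category=Category.MANAGED,
    owner="alice@example.com",
    access_policy="GxP: request approval from data owner",
    tags=["clinical", "phase3"],
))
```

Making the policy part of the record itself addresses the earlier problem that access rules were not necessarily registered in the same place as the data set.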

Employees get a full overview of all data sets relevant to a given research project by searching the catalog. From the resulting list, they can issue access requests to all the relevant data owners and processors in the organization directly from the catalog, in one unified operation.
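A minimal sketch of that search-then-request flow, again with assumed names and a toy in-memory catalog rather than the vendor’s actual API: self-service entries need no request, while managed entries are bundled into one request per distinct owner, so the employee never has to contact owners one by one.

```python
# Toy catalog: each entry carries the metadata registered at upload time.
catalog = [
    {"name": "trial-2021-phase3", "tags": ["clinical", "phase3"],
     "category": "managed", "owner": "alice@example.com"},
    {"name": "lab-assay-panel", "tags": ["laboratory"],
     "category": "self-service", "owner": "bob@example.com"},
    {"name": "trial-2019-phase2", "tags": ["clinical"],
     "category": "managed", "owner": "carol@example.com"},
]


def search(keyword):
    """Return every catalog entry whose name or tags match the keyword."""
    kw = keyword.lower()
    return [e for e in catalog
            if kw in e["name"].lower() or any(kw in t.lower() for t in e["tags"])]


def request_access(requester, entries):
    """Issue access requests for managed data in one unified operation:
    one request per distinct owner, covering all that owner's data sets."""
    per_owner = {}
    for e in entries:
        if e["category"] == "managed":
            per_owner.setdefault(e["owner"], []).append(e["name"])
    return [{"to": owner, "from": requester, "data_sets": names}
            for owner, names in per_owner.items()]


hits = search("clinical")                           # both managed trial data sets
requests = request_access("dan@example.com", hits)  # one request per owner
```

The batching in `request_access` is what replaces the old round of personal inquiries to numerous colleagues.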

KeyCore’s solution means that the company’s data management is now GxP qualified – a quality mark indicating that the handling complies with the Data Integrity guidelines and thus with GxP. This is significant: everyone has gained access to knowledge about, and exchange of, data without compromising the ability to use GxP-qualified data. This increases the value of the solution for the customer, as all types of data sets can be processed and combined in one data lake covering the entire company.

For compliance with other regulations, the solution also leverages AWS services for traceability, with strict monitoring, logging and auditing. Because it is hosted in AWS – with the very high level of security below the so-called hypervisor line – the solution also has secured access to, and documentation for, compliance with international regulations such as HIPAA and GDPR, which are globally recognized as prerequisites for a license to operate in life science.
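The auditing requirement behind Data Integrity can be sketched locally as a tamper-evident log. This is a conceptual illustration, not the AWS mechanism the solution uses (which relies on managed monitoring and logging services): each entry hashes its predecessor, so any retroactive edit breaks the chain and is detectable.

```python
import hashlib
import json
from datetime import datetime, timezone


def audit_entry(previous_hash, actor, action, data_set):
    """One audit record: who did what, to which data set, and when.
    Including the predecessor's hash makes the log tamper-evident."""
    body = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "actor": actor,
        "action": action,
        "data_set": data_set,
        "previous": previous_hash,
    }
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "hash": digest}


def verify_chain(entries):
    """True only if every entry still points at its predecessor's hash."""
    return all(cur["previous"] == prev["hash"]
               for prev, cur in zip(entries, entries[1:]))


log = [audit_entry("genesis", "alice@example.com", "upload", "trial-2021-phase3")]
log.append(audit_entry(log[-1]["hash"], "dan@example.com", "read", "trial-2021-phase3"))
```

In production this role is filled by the managed AWS logging and auditing services mentioned above; the sketch only shows why an append-only, verifiable trail satisfies the traceability demand.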

Read more about the right level of security in AWS
