Summary of AWS blogs for the week of Monday, July 31, 2023

In the week of Monday, July 31, 2023, AWS published 106 blog posts. Here is an overview of what happened.

Topics Covered

Desktop and Application Streaming

Empower Your Workforce with Amazon WorkSpaces and Microsoft 365

Introducing Amazon WorkSpaces Services

Amazon WorkSpaces provides secure, scalable, and cost-effective virtual desktops that let users do their jobs from anywhere. With Amazon WorkSpaces services, tens of thousands of customers run a wide range of applications on their WorkSpaces virtual desktops, from simple web apps to complex rendering applications. Amazon WorkSpaces also makes it easy to scale capacity up or down to meet demand.

Enhanced Security with Amazon WorkSpaces

Amazon WorkSpaces also provides enhanced security measures to help protect customer data. Customers can authenticate users with methods such as Active Directory or RADIUS, and leverage WorkSpaces’ encryption and isolation features to help protect data. Additionally, customers can use Amazon WorkSpaces services to control and manage the security of their virtual desktop environments.

Integrating Microsoft 365 with Amazon WorkSpaces

Amazon WorkSpaces and Microsoft 365 integration provides customers with a powerful end-user computing service. By combining Amazon WorkSpaces and Microsoft 365, customers can provide a secure, cost-effective, scalable virtual desktop experience for their employees, all while benefiting from the enriched security, productivity, and collaboration capabilities of Microsoft 365.

KeyCore’s Professional Services and Managed Services

At KeyCore, the leading Danish AWS consultancy, we provide both professional services and managed services to empower your workforce with Amazon WorkSpaces and Microsoft 365. Our team of experts can help you design, build, and manage your virtual desktop environment. With our professional services, we can help you set up the right infrastructure for a successful deployment. And with our managed services, we can provide you with the ongoing support you need to ensure that your virtual desktop environment is up and running smoothly.

Contact us today to learn more about how KeyCore can help you achieve the most cost-effective and secure virtual desktop experience with Amazon WorkSpaces and Microsoft 365.

Read the full blog posts from AWS

AWS DevOps Blog

Working with Different Technologies with Amazon CodeWhisperer

Software development teams often need to work with multiple programming languages, frameworks, and technologies, depending on the task at hand. This can be the result of choosing the right tool for a specific problem, or of adhering to a technology standard the team has adopted. Amazon CodeWhisperer is a powerful tool that can help teams of developers work with different technologies more efficiently.

Enhancements for AWS CloudFormation

AWS CloudFormation is an Infrastructure as Code (IaC) service that allows users to model, provision, and manage AWS and third-party resources. Recently, AWS CloudFormation released the AWS::LanguageExtensions transform, which enhances the core template language. Since its initial release, two more enhancements have been added: Fn::FindInMap enhancements and a new looping function, Fn::ForEach. These additions give users more functionality and make repetitive templates shorter and easier to maintain.
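
To make the looping transform concrete, the hypothetical sketch below builds a template body in Python that declares one SNS topic per environment name. The topic names and environment list are illustrative, not taken from the post; only the AWS::LanguageExtensions transform and the Fn::ForEach shape come from CloudFormation's documented syntax.

```python
import json

# Hypothetical template using the AWS::LanguageExtensions transform.
# Fn::ForEach expands one resource per value in the list, substituting
# ${Env} into the logical ID and the properties.
template = {
    "Transform": "AWS::LanguageExtensions",
    "Resources": {
        "Fn::ForEach::Environments": [
            "Env",                      # loop variable
            ["Dev", "Test", "Prod"],    # values to iterate over
            {
                "SnsTopic${Env}": {     # one logical ID per value
                    "Type": "AWS::SNS::Topic",
                    "Properties": {"TopicName": {"Fn::Sub": "alerts-${Env}"}},
                }
            },
        ]
    },
}

print(json.dumps(template, indent=2))
```

Deploying this template yields three topics (SnsTopicDev, SnsTopicTest, SnsTopicProd) without repeating the resource block three times.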

Deploying Serverless Applications in a Multicloud Environment with Amazon CodeCatalyst

Amazon CodeCatalyst is an integrated service designed for software development teams that are adopting continuous integration and deployment practices into their software development process. CodeCatalyst provides developers with the tools they need in one place, allowing them to plan work, collaborate on code, and build, test, and deploy applications by leveraging CodeCatalyst Workflows. In the first post of this series, we saw how organizations can deploy workloads to virtual machines (VMs) in a hybrid and multicloud environment. This post explores how organizations can use CodeCatalyst to deploy container applications in a multicloud environment.

Using Amazon CodeCatalyst to Deploy Container Applications in a Multicloud Environment

Container technology is becoming increasingly popular as a way of deploying applications in a hybrid and multicloud environment. Amazon CodeCatalyst makes it easier for organizations to deploy container applications in such an environment by providing a streamlined process and a set of tools to simplify the task. CodeCatalyst Workflows can be used to provision the required infrastructure and deploy the container application. Furthermore, CodeCatalyst also provides a dashboard and automated reports that provide visibility into the application deployment process.
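
As a sketch of what a CodeCatalyst Workflow for a container deployment might look like, the snippet below models one in Python. The action identifiers, field names, and file paths are assumptions for illustration, not taken from the post; treat this as the rough shape of a workflow, not a definitive schema.

```python
import json

# Illustrative CodeCatalyst workflow: build a container image on push to
# main, then deploy infrastructure. Identifiers and paths are placeholders.
workflow = {
    "Name": "DeployContainerApp",
    "SchemaVersion": "1.0",
    "Triggers": [{"Type": "Push", "Branches": ["main"]}],
    "Actions": {
        "BuildImage": {
            "Identifier": "aws/build@v1",
            "Configuration": {"Steps": [{"Run": "docker build -t myapp ."}]},
        },
        "Deploy": {
            "DependsOn": ["BuildImage"],
            "Identifier": "aws/cfn-deploy@v1",
            "Configuration": {"template": "infra/stack.yaml"},
        },
    },
}

print(json.dumps(workflow, indent=2))
```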

How KeyCore Can Help

KeyCore is the leading AWS consultancy in Denmark. We provide comprehensive professional services and managed services to help our customers get the most out of their cloud investment. Our team of experienced AWS professionals can help your organization deploy applications in a multicloud environment with Amazon CodeCatalyst. From setting up the necessary infrastructure to deploying and managing your container applications, our team can provide the expertise and guidance you need to get the job done quickly and efficiently.

Read the full blog posts from AWS

Official Machine Learning Blog of Amazon Web Services

Optimizing Data Preparation with Amazon SageMaker Data Wrangler

Data preparation is a crucial step in any data-driven project and having the right tools can greatly enhance operational efficiency. Amazon SageMaker Data Wrangler is a powerful tool that can reduce the time it takes to aggregate and prepare tabular and image data for machine learning (ML) from weeks to minutes.

Features of SageMaker Data Wrangler

SageMaker Data Wrangler simplifies the process of data exploration, transformation, cleaning, and validation for ML. It can easily connect to data sources such as Amazon S3 buckets, Amazon Athena, Amazon Redshift, and Apache Hive. It provides an easy-to-use graphical user interface with an intuitive drag-and-drop interface to streamline the process of data transformation. You can preview the transformations before applying them, making the ML development process faster and easier.

Using the Integration with Salesforce Data Cloud and SageMaker

The integration of Salesforce Data Cloud and Amazon SageMaker lets businesses access their Salesforce data securely with a zero-copy approach. SageMaker tools can be used to build, train, and deploy AI models, while the inference endpoints are served through Amazon API Gateway. The integration makes it easy for businesses to access their data from Salesforce without any additional setup. Furthermore, the integration leverages SageMaker features such as Amazon SageMaker Processing, SageMaker training jobs, and SageMaker multi-model endpoints (MMEs) to scale training and inference of thousands of ML models.
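
The distinguishing feature of a multi-model endpoint is that one endpoint serves many model artifacts, selected per request. The sketch below builds the request parameters for such an invocation; the endpoint name and model artifact name are placeholders, but the parameter names match the SageMaker runtime `invoke_endpoint` API.

```python
import json

# Request parameters for invoking a SageMaker multi-model endpoint (MME).
# The TargetModel field selects which hosted model artifact serves this
# request. Endpoint and model names below are placeholders.
payload = {"features": [0.1, 0.4, 0.7]}
request = {
    "EndpointName": "salesforce-churn-endpoint",  # placeholder name
    "ContentType": "application/json",
    "TargetModel": "customer-42.tar.gz",          # per-tenant model artifact
    "Body": json.dumps(payload),
}

# These parameters would be passed to
# boto3.client("sagemaker-runtime").invoke_endpoint(**request).
print(request["TargetModel"])
```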

Generative AI and Amazon SageMaker

Generative AI can be used to enhance and accelerate the creative process across various industries. It enables more personalized experiences for audiences and improves the overall quality of the final products. Amazon SageMaker is a fully managed platform that enables developers and data scientists to build, train, and deploy ML models quickly. It comes with pre-built Docker images containing popular packages for ML, data science, and data visualization, as well as IDEs like JupyterLab.

Applications of Generative AI with Amazon SageMaker

Generative AI can be used to create personalized avatars and captions for images, as well as automated creative for marketing campaigns. Amazon Kendra is an intelligent search service powered by ML that makes it easy to find content scattered across multiple locations and content repositories. Generative AI can also be used to speed up drug discovery processes with protein folding workflows.

Amazon SageMaker Canvas and Advanced Metrics

Amazon SageMaker Canvas is a visual interface that enables business analysts to generate accurate ML predictions on their own, without requiring any ML experience or having to write a single line of code. It allows business analysts to browse and access disparate data sources in the cloud or on premises, and can be used to gain insights into why customers leave.

How KeyCore Can Help

KeyCore can help you optimize your data preparation, gain insights into customer behavior, and develop automated creative for your marketing campaigns using Amazon SageMaker and generative AI. Our consultants have extensive experience in AWS and are skilled in building ML models, deploying them efficiently, and maintaining them at scale. We can provide professional and managed services that will help you make the most of your data and AWS services.

Read the full blog posts from AWS

Announcements, Updates, and Launches


New Seventh-Generation General Purpose Amazon EC2 Instances (M7i-Flex and M7i)

AWS is launching new Amazon Elastic Compute Cloud (Amazon EC2) M7i-Flex and M7i instances powered by custom 4th Generation Intel Xeon Scalable processors, available only on AWS. These instances deliver the best performance among comparable Intel processors used by other cloud providers, with up to 15% faster speeds. M7i-Flex instances can burst up to 100 Gbps in select instance sizes and offer up to five times the packets-per-second (pps) rate of the previous generation of instances.

The M7i instances also provide high performance, with up to 100 Gbps of network bandwidth and up to 10 million pps. The choice between burstable (M7i-Flex) and sustained (M7i) performance modes gives customers flexibility and cost savings. M7i-Flex and M7i instances are well suited to latency-sensitive applications such as web servers, gaming servers, and in-memory databases.
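
The practical decision the two families pose can be captured in a few lines. This is a hedged sketch of the selection logic implied above, not AWS guidance; the helper name and the `size` default are invented for illustration.

```python
def pick_instance_family(sustained_cpu: bool, size: str = "large") -> str:
    """Sketch of the choice described above: M7i-Flex targets workloads
    that do not need sustained full-CPU performance (at lower cost),
    while M7i suits sustained, compute-heavy workloads."""
    family = "m7i" if sustained_cpu else "m7i-flex"
    return f"{family}.{size}"

print(pick_instance_family(sustained_cpu=False))            # m7i-flex.large
print(pick_instance_family(sustained_cpu=True, size="2xlarge"))  # m7i.2xlarge
```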

Prime Day 2023 Powered by AWS – All the Numbers

Prime Day 2023 was a record-scale event powered by AWS. AWS provided the secure, reliable platform that let customers shop with confidence, supporting flexible payment options and quick, easy returns.

AWS also provided an array of services such as Amazon Aurora for quick and reliable data processing, Amazon DynamoDB for NoSQL data storage, Amazon MQ for reliable message queues, and Amazon Route 53 for high availability and scalability.

Introducing the first AWS Security Heroes

The AWS Heroes program recognizes individuals who are passionate about helping others learn and build on AWS. As the ways the community develops and deploys solutions evolve, specialized Hero categories have been created, and the first AWS Security Heroes are now officially introduced. These experts have deep expertise in security-related technologies and have made significant contributions to the security community.

Now Open – AWS Israel (Tel Aviv) Region

The new AWS Israel (Tel Aviv) Region is now open, with three Availability Zones and the il-central-1 Region code. This addition gives customers another option for running applications and serving users from data centers located closer to them.

The Tel Aviv Region offers customers low-latency networking with up to 25 Gbps of bandwidth. The new Region also supports a variety of security and compliance frameworks, including ISO/IEC 27001 and PCI DSS, as well as advanced networking features such as AWS PrivateLink and AWS Direct Connect.

AWS Week in Review – Agents for Amazon Bedrock, Amazon SageMaker Canvas New Capabilities, and More – July 31, 2023

AWS communities in ASEAN recently made history: the AWS User Group Malaysia held its first AWS Community Day and the AWS User Group Philippines celebrated its tenth anniversary with two days of AWS Community Day.

During the week of July 31, 2023, AWS released various updates and services for its customers. This included Agents for Amazon Bedrock, which enables generative AI applications to complete multi-step tasks on a user’s behalf. AWS also announced new capabilities for Amazon SageMaker Canvas, its no-code visual interface for building ML models, along with various new capabilities for Amazon EventBridge.

At KeyCore, our team of AWS experts can help customers make the most of all the latest services and updates from AWS. We have extensive experience helping customers manage and deploy their applications at scale, create interactive machine learning projects, and utilize new AWS services. We can also help customers take advantage of the improved security and compliance framework tools offered by the new Tel Aviv Region. Get in touch with us today to find out how our team can help you get the most out of AWS.

Read the full blog posts from AWS

Containers

Amazon ECS Anywhere Custom Service Discovery, Log Loss Prevention, IPv4 Exhaustion Automation, AWS Proton Infrastructure Automation, Larger Ephemeral Volumes, and Karpenter/Bottlerocket Optimization

Amazon ECS Anywhere Custom Service Discovery

Amazon Elastic Container Service (Amazon ECS) is a managed container orchestration service from AWS which simplifies the deployment, management, and scalability of containerized applications. With Amazon ECS Anywhere, customers can run containers on their existing servers, without needing to manage a container orchestration framework. To make sure their services work together, customers can implement custom service discovery to ensure that the containers are able to identify and connect to each other.

Preventing Log Loss with Non-Blocking Mode

Improving observability and troubleshooting is best done by shipping container logs from the compute platform to a centralized logging destination. When the log destination is unable to accept logs, tradeoffs need to be made between application availability and log completeness. Non-blocking mode can be used in the awslogs container log driver to keep the application running, and with an adequately sized buffer it helps prevent log loss.
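
The tradeoff above is configured per container in the ECS task definition. The sketch below shows a `logConfiguration` block with non-blocking mode and an enlarged buffer; the log group name, region, and buffer size are placeholders you would tune for your workload.

```python
import json

# Task-definition log configuration enabling non-blocking mode for the
# awslogs driver. In non-blocking mode the container keeps running if the
# log destination is unreachable, buffering up to max-buffer-size in
# memory before dropping log lines (versus blocking mode, which applies
# backpressure to the application instead of losing logs).
log_configuration = {
    "logDriver": "awslogs",
    "options": {
        "awslogs-group": "/ecs/my-service",   # placeholder log group
        "awslogs-region": "eu-west-1",
        "awslogs-stream-prefix": "app",
        "mode": "non-blocking",
        "max-buffer-size": "25m",             # larger buffer, fewer drops
    },
}

print(json.dumps(log_configuration, indent=2))
```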

Automating Custom Networking to Solve IPV4 Exhaustion

When the Amazon VPC Container Network Interface (CNI) plugin assigns IPv4 addresses to Pods, it allocates them from the VPC CIDR range assigned to the cluster. To combat exhaustion of the limited pool of available IPv4 addresses, custom networking can be automated to solve IPv4 exhaustion in Amazon EKS.
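
Custom networking is driven by an ENIConfig custom resource per Availability Zone, which points Pods at a secondary subnet instead of the cluster's primary CIDR. The sketch below models one such resource; the subnet and security group IDs are placeholders, while the `apiVersion` and `kind` match the VPC CNI's documented CRD.

```python
# ENIConfig custom resource used by the VPC CNI's custom networking
# feature: Pods in this AZ get IPs from the secondary subnet below rather
# than the cluster's primary VPC CIDR. IDs are placeholders.
eni_config = {
    "apiVersion": "crd.k8s.amazonaws.com/v1alpha1",
    "kind": "ENIConfig",
    "metadata": {"name": "eu-west-1a"},   # conventionally named after the AZ
    "spec": {
        "subnet": "subnet-0abc1234",      # secondary (e.g. 100.64.0.0/10) subnet
        "securityGroups": ["sg-0abc1234"],
    },
}

# Custom networking is enabled by setting AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG=true
# on the aws-node daemonset.
print(eni_config["kind"])
```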

Achieving Infrastructure Automation at Scale Using AWS Proton

Regeneron is a biotechnology company that develops life-transforming medicines and uses AWS Proton to optimize and secure their AI/ML infrastructure. AWS Proton allows them to manage and automate resources at scale in a repeatable, reliable, and secure way. It helps them to systematically define, deploy, and govern their application stacks, enabling them to rapidly deliver high-quality, consistent, and secure applications.

AWS Fargate Adds Support for Larger Ephemeral Volumes

AWS Fargate is a serverless, pay-as-you-go compute engine that lets you build applications without having to manage servers. Starting this week, the amount of ephemeral storage you can allocate to the containers in an EKS Fargate pod is configurable up to a maximum of 175 GiB per pod.
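
On EKS Fargate the larger ephemeral volume is requested through the standard Kubernetes `ephemeral-storage` resource request, as in the sketch below. The pod name, image, and requested size are placeholders; the request simply needs to stay at or below the new 175 GiB ceiling.

```python
# Pod spec fragment requesting extra ephemeral storage on EKS Fargate.
# Fargate sizes the pod's ephemeral volume from this standard Kubernetes
# resource request, up to the 175 GiB per-pod maximum.
pod_spec = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "big-scratch-space"},
    "spec": {
        "containers": [{
            "name": "app",
            "image": "public.ecr.aws/docker/library/python:3.11",
            "resources": {"requests": {"ephemeral-storage": "100Gi"}},
        }],
    },
}

print(pod_spec["spec"]["containers"][0]["resources"]["requests"]["ephemeral-storage"])
```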

Optimizing and Securing AI/ML Infrastructure with Karpenter and Bottlerocket

H2O.ai is an AI company that uses Karpenter and Bottlerocket to optimize and secure its ML infrastructure. Its SaaS platform on AWS, H2O AI Managed Cloud, lets customers quickly build productive models and gain insights from their data. Karpenter and Bottlerocket help H2O.ai rapidly provision AI platforms, improve observability, and identify and troubleshoot issues.

KeyCore helps companies like Regeneron and H2O.ai to optimize and secure their AI/ML infrastructure. We offer a full suite of professional services and managed services for AWS, including expertise in custom service discovery, log loss prevention, IPv4 exhaustion automation, AWS Proton infrastructure automation, larger ephemeral volumes, and Karpenter/Bottlerocket optimization. Our team of highly skilled AWS consultants has the know-how to help your business automate your resources at scale in a repeatable and secure way. Contact us today to learn how we can help you.

Read the full blog posts from AWS

AWS Quantum Technologies Blog

Amazon Braket and the Wolfram Quantum Framework

The Wolfram Quantum Framework is a powerful way to access quantum computing resources, and in this post we’ll explore how it works with Amazon Braket. Amazon Braket is a managed service that provides access to a variety of quantum computing hardware and simulators. This makes it easier for developers to get started with quantum computing and to incorporate quantum computing capabilities into their applications.

What is the Wolfram Quantum Framework?

The Wolfram Quantum Framework is a library built on the Wolfram Language that gives developers access to quantum computing resources. It provides a set of tools and APIs that make it easier to create and execute quantum programs, including a notebook interface for writing and running code and programmatic interfaces for interacting with quantum computing backends.

How does the Wolfram Quantum Framework work with Amazon Braket?

The Wolfram Quantum Framework can be used with Amazon Braket to access the underlying quantum computing hardware and simulators. It provides a set of APIs that allow developers to interact with the quantum computing resources. This makes it easier to create and execute quantum programs on the Amazon Braket platform.

What are the benefits of using the Wolfram Quantum Framework?

Using the Wolfram Quantum Framework with Amazon Braket offers a number of benefits. It provides a simpler way to interact with quantum computing resources, making it easier to get started with quantum computing. It also allows developers to incorporate quantum computing capabilities into their applications. Additionally, it provides a set of APIs that allow developers to create and execute quantum programs more efficiently.

How can KeyCore Help?

At KeyCore we provide professional services and managed services that can help you get started with the Wolfram Quantum Framework and Amazon Braket. We can provide guidance and expertise on how to best use the Wolfram Quantum Framework and Amazon Braket to create and execute quantum programs. We can also help you explore and develop quantum computing capabilities for your applications. Contact us to get started.

Read the full blog posts from AWS

Official Database Blog of Amazon Web Services

Amazon Web Services Database Blog

Amazon Web Services (AWS) offers a wide range of database services for building and running applications with highly connected datasets. In this blog post, we discuss the most recent features and updates released for some of the most popular database services on AWS: Amazon Neptune, Amazon Managed Blockchain Access Bitcoin, Amazon Aurora I/O-Optimized Feature, and Amazon RDS for MySQL with Multi-AZ DB Clusters. We also discuss how to migrate from Microsoft SQL Server to Babelfish for Aurora PostgreSQL with minimal downtime, how to make dashboards faster and more cost-effective with Grafana query caching and Amazon Timestream, and how to use local write forwarding with Amazon Aurora. With these updates, AWS customers can optimize their databases for increased performance, availability, and cost-efficiency.

Exploring the feature packed 1.2.1.0 release for Amazon Neptune

Amazon Neptune is a fast, reliable, and fully managed graph database service. The recent 1.2.1.0 engine update to Amazon Neptune includes many new features, such as developing and running applications on encrypted graph databases using AWS KMS, improved performance for large graphs, and the ability to create user-defined functions in Amazon Neptune. With these features, developers can build applications for knowledge graphs, fraud graphs, identity graphs, and security graphs more easily and securely than ever.

Introducing Amazon Managed Blockchain Access Bitcoin

Amazon Managed Blockchain Access Bitcoin is designed to reduce the complexity of managing blockchain nodes. It provides a managed blockchain node service, eliminating the need for customers to configure, provision, and maintain the nodes themselves. This allows users to access Bitcoin transactions and the Bitcoin network without having to worry about infrastructure costs or manual setup. This makes it easier for developers to build and deploy applications that take advantage of blockchain technology.

Estimate cost savings for the Amazon Aurora I/O-Optimized Feature using Amazon CloudWatch

The Amazon Aurora I/O-Optimized feature allows users to take advantage of Aurora’s storage architecture and improved performance. It provides users with an easy way to estimate their cost savings with Aurora’s I/O-Optimized feature using Amazon CloudWatch. With the I/O-Optimized feature, users can save up to 40% on their database costs. Additionally, Aurora supports MySQL and PostgreSQL open-source database engines, making it a great choice for users who want to save money and get the best performance possible.
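
The estimation itself is simple arithmetic over CloudWatch data: sum the monthly VolumeReadIOPs and VolumeWriteIOPs, add storage, and price both configurations. The sketch below uses invented placeholder rates (not actual AWS prices) purely to show the shape of the comparison.

```python
# Back-of-the-envelope comparison of Aurora Standard vs. I/O-Optimized,
# using monthly totals you would read from the CloudWatch VolumeReadIOPs /
# VolumeWriteIOPs metrics. All rates below are illustrative placeholders,
# not actual AWS pricing.
monthly_ios = 4_000_000_000    # total read + write I/Os from CloudWatch
storage_gb = 500

STANDARD = {"gb_month": 0.10, "per_million_ios": 0.20}      # pays per I/O
IO_OPTIMIZED = {"gb_month": 0.225, "per_million_ios": 0.0}  # no I/O charge

def monthly_storage_cost(pricing, ios, gb):
    return gb * pricing["gb_month"] + (ios / 1_000_000) * pricing["per_million_ios"]

standard = monthly_storage_cost(STANDARD, monthly_ios, storage_gb)
optimized = monthly_storage_cost(IO_OPTIMIZED, monthly_ios, storage_gb)
print(f"standard={standard:.2f} io_optimized={optimized:.2f}")
# I/O-Optimized wins when I/O charges dominate the storage bill.
```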

Best strategies for achieving high performance and high availability on Amazon RDS for MySQL with Multi-AZ DB Clusters

Amazon RDS for MySQL with Multi-AZ DB Clusters provides users with high availability and high performance. Multi-AZ deployments have either one or two standby DB instances that can provide failover support, but cannot serve read traffic. When there are two readable standby DB instances, the deployment is referred to as a Multi-AZ DB cluster deployment. With this setup, users can benefit from increased reliability, improved availability, and better performance.

Migrate Microsoft SQL Server to Babelfish for Aurora PostgreSQL with minimal downtime using AWS DMS

Migrating from Microsoft SQL Server to open-source databases like PostgreSQL can be a difficult task. With AWS Database Migration Service (AWS DMS), users can migrate their data from Microsoft SQL Server to Babelfish for Aurora PostgreSQL with minimal downtime. AWS DMS supports a wide variety of source and target databases and can create a migration plan that will ensure a successful migration with minimal downtime. KeyCore can help with the migration process and ensure that it is done correctly.
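
Minimal downtime comes from running the DMS task in full-load-and-CDC mode: a bulk copy followed by continuous change replication until cutover. The sketch below shows the kind of parameters such a task takes; the ARNs and schema selection are placeholders, while the parameter names follow the DMS `CreateReplicationTask` API.

```python
import json

# Illustrative DMS task replicating SQL Server to Babelfish for Aurora
# PostgreSQL with ongoing replication (CDC). ARNs are placeholders.
task_params = {
    "ReplicationTaskIdentifier": "mssql-to-babelfish",
    "SourceEndpointArn": "arn:aws:dms:eu-west-1:111122223333:endpoint:SRC",
    "TargetEndpointArn": "arn:aws:dms:eu-west-1:111122223333:endpoint:TGT",
    "ReplicationInstanceArn": "arn:aws:dms:eu-west-1:111122223333:rep:INST",
    "MigrationType": "full-load-and-cdc",  # bulk copy, then stream changes
    "TableMappings": json.dumps({
        "rules": [{
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-dbo",
            "object-locator": {"schema-name": "dbo", "table-name": "%"},
            "rule-action": "include",
        }]
    }),
}

# boto3.client("dms").create_replication_task(**task_params)
print(task_params["MigrationType"])
```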

Make your dashboards faster and more cost-effective with Grafana query caching and Amazon Timestream

Grafana query caching and Amazon Timestream are powerful tools for making dashboards faster and more cost-effective. With query caching, users can reduce the query latency for their dashboards and make them more responsive. Amazon Timestream also allows users to store and manage their time-series data more efficiently, reducing their costs. KeyCore can help you take advantage of these tools and optimize your dashboards for increased performance and cost-efficiency.
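
The queries a Grafana panel repeats on every refresh are the ones caching pays off for. Below is a sketch of a typical Timestream aggregation query such a panel might issue; the database, table, and measure names are placeholders, while `bin()` and `ago()` are standard Timestream SQL functions.

```python
# A Timestream query of the kind a Grafana dashboard panel might issue:
# averaging CPU samples into 5-minute bins over the last hour. With
# Grafana query caching enabled, refreshes within the cache TTL reuse the
# cached result instead of re-running (and re-billing) this query.
query = """
SELECT bin(time, 5m) AS binned_time,
       avg(measure_value::double) AS avg_cpu
FROM "monitoring"."metrics"
WHERE measure_name = 'cpu_utilization'
  AND time > ago(1h)
GROUP BY bin(time, 5m)
ORDER BY binned_time
"""

# boto3.client("timestream-query").query(QueryString=query)
print(query.strip().splitlines()[0])
```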

Local write forwarding with Amazon Aurora

For stateful resources such as databases, scaling can be more challenging. With Amazon Aurora, users can take advantage of local write forwarding to help scale their databases. With local write forwarding, users can take advantage of the full capacity of their Aurora-based database cluster, ensuring that their application is able to scale with the needs of their business. KeyCore can help you take advantage of this feature and make sure that your databases are able to scale with your business.
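
From the application's point of view, write forwarding means connecting to a reader, choosing a read-consistency level for the session, and issuing writes that Aurora forwards to the writer. The sketch below lists the statements involved; the variable name matches Aurora MySQL's write-forwarding setting, but treat the exact statements and table as illustrative.

```python
# Sketch of an application session using Aurora write forwarding: the app
# is connected to a *reader* instance, sets the session consistency level,
# then issues a write that Aurora forwards to the writer. Statements and
# table names are illustrative.
statements = [
    "SET aurora_replica_read_consistency = 'SESSION';",
    "INSERT INTO orders (id, total) VALUES (42, 19.99);",  # forwarded to writer
    "SELECT * FROM orders WHERE id = 42;",  # SESSION level sees its own write
]

for stmt in statements:
    print(stmt)
```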

Read the full blog posts from AWS

AWS for Games Blog

Scalable Cross-Platform Game Backends On AWS

Game developers today must develop secure and scalable backend features to support cross-platform online elements of their games. Customers need to allow players to play with their friends across platforms, and move gameplay between those platforms to provide a seamless player experience. To meet these demands, AWS has created new Solution Guidance for building scalable cross-platform game backends on AWS.

Key Benefits

AWS offers many features and services that have proven useful for game developers. These include dedicated compute services such as Amazon EC2 and AWS Fargate, managed databases like Amazon Aurora and Amazon DynamoDB, analytics services like Amazon Redshift, and storage and content delivery services like Amazon S3 and Amazon CloudFront. AWS’s global infrastructure can also be used to deliver faster performance to players in different locations.

Solution Guidance

This new Solution Guidance provides a comprehensive overview of the features and services available on AWS for building scalable cross-platform game backends. It provides an overview of the AWS services that are relevant for game developers, and how they can be used to build a secure and scalable backend for their games. In addition, the guidance explains how game developers can take advantage of the global infrastructure available on AWS to deliver fast and reliable performance to their players.

KeyCore’s Role

At KeyCore, we offer professional and managed services to help customers build secure and scalable backend features for their games. We provide expertise in the architecture, design, and development of cloud-native applications, and work with customers to ensure their games are running on the most efficient and reliable infrastructure available. Our team of experts also provides guidance on how to take advantage of the AWS features and services available for building scalable cross-platform game backends. We can help customers get the most out of their game backend infrastructure, and ensure their players have a seamless experience.

Read the full blog posts from AWS

AWS Training and Certification Blog

Cloud-Native Skills Acceleration with AWS Training and Certification

Companies such as TCS and Salesforce have realized the potential of using AWS Training and Certification to develop cloud-native skills in their workforce. In this blog post, we’ll look at how TCS and Salesforce are leveraging AWS Training and Certification to accelerate cloud growth and transformation.

TCS Accelerate Cloud Growth

TCS has been able to speed up the process of deploying entry-level associates to projects due to the entry criterion that these associates be AWS Certified. Before this criterion was applied, it took TCS 2-3 months to deploy entry-level associates to projects after onboarding. By working with AWS Education Programs, TCS has been able to tap into a pool of entry-level, cloud-skilled talent, thus drastically reducing new-hire deployment time.

Salesforce Transforms Skills to Cloud-Native

As one of the largest customer relationship management providers, with over 150,000 enterprise customers worldwide, Salesforce has set itself apart by continually investing in its workforce to stay up-to-date with the latest technologies. Specifically, Salesforce has been leveraging AWS Training and Certification to develop and advance cloud-native skills in its workforce. By doing this, Salesforce can innovate with AWS and maintain its position as a leader in customer relationship management.

KeyCore Can Help With AWS Training and Certification

As the leading Danish AWS consultancy, KeyCore offers a range of professional and managed services. Our team of highly advanced AWS experts can help you get the most out of AWS Training and Certification, ensuring that your workforce is skilled in the cloud and ready to take on the latest technologies.

For more information about KeyCore and our offerings, please visit our website. We look forward to helping you get the most out of AWS Training and Certification.

Read the full blog posts from AWS

Microsoft Workloads on AWS


Using AWS Launch Wizard to deploy SQL Server Always On Failover Cluster Instances with Amazon FSx for NetApp ONTAP

AWS Launch Wizard provides a guided way to deploy a SQL Server Always On Failover Cluster Instance (FCI). With Amazon FSx for NetApp ONTAP, it’s easy to set up shared storage for your cluster. In this blog post, we’ll walk through the steps required to deploy a SQL Server Always On FCI using AWS Launch Wizard with Amazon FSx for NetApp ONTAP.

First, you’ll need to configure the desired cluster size, instance types, and Availability Zones for the nodes in the cluster. You’ll also need to specify the capacity of the Amazon FSx shared storage. Then, you’ll need to configure the user settings, such as the user name and password for the administrator account. Finally, you’ll need to configure the optional settings, such as network settings and backup settings.

Once the configuration is complete, it’s time to launch the cluster. This is done by clicking the “Launch” button in the AWS Launch Wizard console. The launch process can take several minutes to complete. Once it’s finished, you’ll have a fully functioning SQL Server Always On Failover Cluster with Amazon FSx for NetApp ONTAP providing the shared storage.

At KeyCore, our team of AWS experts can help you get the most from AWS Launch Wizard and Amazon FSx for NetApp ONTAP. We can help you configure your cluster and storage to ensure the best possible performance and reliability, and design and test disaster recovery strategies for your SQL Server and other Microsoft workloads.

Analyze modernization incompatibilities using AWS Migration Hub Strategy Recommendations

AWS Migration Hub Strategy Recommendations provides an in-depth analysis of your environment, including server inventory, running applications, and databases. It looks for potential migration obstacles and provides anti-pattern reports. This blog post will go over how AWS Migration Hub Strategy Recommendations works, what aspects it analyzes, and how to take advantage of the anti-pattern reports.

AWS Migration Hub Strategy Recommendations inspects your environment to identify potential migration obstacles. It examines your server inventory, runtime environment, applications, and databases to detect any incompatibilities with the AWS cloud. The platform then generates anti-pattern reports that can help you identify and address any potential issues.

AWS Migration Hub Strategy Recommendations can also help you plan your migration strategy. The reports help you identify which applications and services to move first, which can wait, and in what sequence the remaining components should be migrated. This can help you move your workloads to the cloud faster and more efficiently.

At KeyCore, our team of AWS experts can help you get the most from AWS Migration Hub Strategy Recommendations. We can help you analyze your environment, identify any potential issues, and create a plan for your migration. Our team can also help you plan, implement, and manage your migration, so you can move your workloads to the cloud quickly and reliably.

Automate disaster recovery for your self-managed Active Directory on AWS

This blog post will show you how to leverage AWS for disaster recovery for your self-managed Microsoft Active Directory. It will also provide an automated solution to help you run periodic DR tests of your AD infrastructure.

The first step is to create an Amazon Machine Image (AMI) of the Active Directory server. This will allow you to quickly spin up new instances in the event of a disaster. You can then utilize AWS CloudFormation and AWS Systems Manager to automate the process of creating and testing the new instances.

AWS CloudFormation and AWS Systems Manager can be used to create and configure the instances, and they can also be used to automate testing of the new instances. This includes running scripted tests to verify that the new instances are functioning as expected. This helps to ensure that the disaster recovery process is reliable and repeatable.
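
The first automated step, imaging the domain controller, boils down to an EC2 `CreateImage` call. The sketch below shows plausible parameters; the instance ID and names are placeholders, and `NoReboot=False` is a choice (reboot for file-system consistency) rather than something the post mandates.

```python
# Illustrative parameters for imaging a self-managed AD domain controller
# so it can be restored quickly during a DR test. The instance ID and
# names are placeholders. NoReboot=False lets EC2 reboot the instance for
# a file-system-consistent image, at the cost of brief downtime.
image_params = {
    "InstanceId": "i-0123456789abcdef0",
    "Name": "ad-dc1-dr-2023-07-31",
    "Description": "Periodic DR image of self-managed AD domain controller",
    "NoReboot": False,
}

# boto3.client("ec2").create_image(**image_params)
print(image_params["Name"])
```

In the automated solution, a scheduled Systems Manager automation would invoke this call on a cadence and then launch and test an instance from the resulting AMI.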

At KeyCore, our team of AWS experts can help you get the most from Amazon Machine Image, AWS CloudFormation, and AWS Systems Manager. We can help you create, configure, and test your Active Directory disaster recovery solution. We can also help you set up a regular schedule of DR tests, so you can be sure that your disaster recovery plan is up to date.

Read the full blog posts from AWS

Official Big Data Blog of Amazon Web Services

Modernizing Your Data Lake with AWS Services: Configure cross-Region Table Access with AWS Glue and Lake Formation, Create an Apache Hudi-based Data Lake with AWS DMS, Amazon Kinesis, and Glue Streaming ETL, Estimate Scope 1 Carbon Footprint with Athena, Quickly Resolve Tickets with OpenSearch Service, Scaling up to 1 GB/second Ingest Capacity with Kinesis Data Streams, Empower Jira Data with AppFlow and Glue, and Migrate SQL-based ETL Workloads to Serverless Infrastructure with Glue

Configure cross-Region table access with the AWS Glue Catalog and AWS Lake Formation: Companies need to have the ability to share and access data securely and safely across Regions. The AWS Glue Data Catalog is a serverless repository that makes data accessible across multiple accounts and Regions so that organizations can easily share and access data. AWS Lake Formation extends this replication capability to authorized users in other Regions with the introduction of Cross-Region Table Access. This helps organizations set up data sharing across Regions, and supports different access levels depending on data classification.

Create an Apache Hudi-based near-real-time transactional data lake using AWS DMS, Amazon Kinesis, AWS Glue streaming ETL, and data visualization using Amazon QuickSight: AWS Glue streaming ETL jobs continuously consume data from streaming sources, clean and transform the data in-flight, and make it available for analysis in seconds. AWS Database Migration Service (AWS DMS) can replicate the data from your source systems to Amazon Simple Storage Service (Amazon S3), which commonly hosts the storage layer of the data lake. With AWS services like Kinesis, Glue streaming ETL, and Amazon QuickSight, organizations can apply CDC changes from Amazon RDS or other relational databases to an S3 data lake, and transform and enrich the data in near-real time.

Estimating Scope 1 Carbon Footprint with Amazon Athena: To help organizations reach their net-zero carbon goals, Amazon Athena enables customers to quickly and easily estimate their Scope 1 carbon footprint. Athena allows customers to query large datasets stored in Amazon S3 using standard SQL, enabling them to analyze their carbon footprint in a matter of minutes. This helps organizations identify opportunities to reduce their emissions, and allows them to quickly take action to meet their sustainability goals.
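To make the approach concrete, here is a sketch of the kind of standard SQL Athena can run for a Scope 1 estimate: activity data multiplied by an emission factor. The table and column names are hypothetical, not from the original post.

```python
# Illustrative Athena SQL for a Scope 1 estimate: multiply each fuel's
# consumed quantity by its emission factor and aggregate per fuel type.
# All identifiers below are hypothetical placeholders.

SCOPE1_QUERY = """
SELECT fuel_type,
       SUM(quantity_consumed * emission_factor_kgco2e) AS kg_co2e
FROM fuel_consumption
JOIN emission_factors USING (fuel_type)
GROUP BY fuel_type
"""
```

A query like this could be submitted with `athena.start_query_execution` against datasets stored in Amazon S3.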

How FIS ingests and searches vector data for quick ticket resolution with Amazon OpenSearch Service: FIS uses Amazon OpenSearch Service to ingest and search vector data for quick ticket resolution. With OpenSearch, FIS can quickly pull up and view the data in one unified view, helping them resolve tickets faster. OpenSearch can also be used to detect anomalies and issues in data, as well as provide insights to help FIS make better decisions.

Amazon Kinesis Data Streams on-demand capacity mode now scales up to 1 GB/second ingest capacity: Amazon Kinesis Data Streams is a serverless data streaming service that makes it easy to capture, process, and store streaming data at any scale. With the on-demand capacity mode, Kinesis Data Streams can now scale up to 1 GB/second ingest capacity, making it easier to handle unpredictable data traffic.
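As a sketch of what on-demand mode looks like in practice, the snippet below builds the `create_stream` request parameters; with on-demand capacity there is no shard count to size up front. The stream name is a placeholder, and the actual API call is left commented out so the snippet runs without AWS credentials.

```python
# Request parameters for kinesis.create_stream in on-demand capacity mode.
# No ShardCount is needed: Kinesis scales ingest capacity automatically.

def on_demand_stream_params(stream_name):
    return {
        "StreamName": stream_name,  # placeholder name
        "StreamModeDetails": {"StreamMode": "ON_DEMAND"},
    }

params = on_demand_stream_params("clickstream-events")
# boto3.client("kinesis").create_stream(**params)  # requires AWS credentials
```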

Empower your Jira data in a data lake with Amazon AppFlow and AWS Glue: Organizations can use Amazon AppFlow and AWS Glue to load data from Atlassian Jira Cloud into a data lake. Loading Jira data into a data lake allows organizations to enrich the data with other datasets and derive deeper insights, as well as create predictive analytics that can help organizations optimize their software development processes.

Migrate your existing SQL-based ETL workload to an AWS serverless ETL infrastructure using AWS Glue: With AWS services such as AWS Glue, organizations can easily migrate their existing SQL-based ETL workloads to a serverless ETL infrastructure. Glue can continuously ingest and transform streaming data, and can also be used to cleanse and prepare data for use in other AWS services. By using serverless ETL with Glue, organizations can reduce their operational costs and improve their ETL performance.

KeyCore Can Help: KeyCore is the leading AWS consultancy in Denmark, providing both professional services and managed services. We offer comprehensive support for all of the AWS services described in this post, and can help you implement a modern, serverless data lake with AWS. Our team of experts can help you create data lakes that are secure, scalable, and cost-effective, and can help you meet your longer-term data needs. Contact us today to learn more about how KeyCore can help you get the most out of your data lake.

Read the full blog posts from AWS

Networking & Content Delivery

How to Configure Block Duration for IP Addresses Rate Limited by AWS WAF

Understanding Volumetric Attacks

Volumetric attacks are one of the most common types of cyberattacks. In these attacks, a web application is overwhelmed with an enormous number of HTTP requests. This flood of traffic puts a strain on the application’s servers, leading to degraded performance, increased latency for legitimate users, and even resource exhaustion in severe cases.

Using AWS WAF to Prevent Volumetric Attacks

Fortunately, AWS WAF provides rate-based rules to address this kind of attack. These rules allow customers to configure the rate limit of requests from a particular IP address. This means that if the number of requests exceeds the set limit, then AWS WAF will deny the request and block further requests from the same IP address for a pre-set duration.

Configuring Block Duration for IP Addresses Rate Limited by AWS WAF

When configuring the block duration for an IP address rate-limited by AWS WAF, the first step is to decide how to set the rate limit. To do this, you’ll need to consider your application’s normal traffic patterns. For example, if you know that your application typically sees a peak of 10 requests per second from a particular IP address, you can set the rate limit to something much higher than 10. This will help ensure that legitimate requests don’t get blocked.

Once you’ve set the rate limit, you’ll need to decide how long to block IP addresses that exceed the rate limit. When doing this, it is important to take into account the severity of the attack. For instance, if you are seeing a sudden spike in requests from a particular IP address, it may indicate that the IP address is part of a more aggressive attack and should be blocked for a longer duration.
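As a concrete illustration of the rate-limit side of this setup, the sketch below builds a WAFv2 rate-based rule statement as a Python dict. The limit of 2000 requests per five-minute window is an illustrative value, not a recommendation from the post; extending the block duration beyond WAF's built-in behavior, as the post describes, would additionally require tracking offending IPs (for example in an IP set).

```python
# Sketch of a WAFv2 rate-based rule: block any IP that exceeds the request
# limit within a rolling 5-minute window. Values are illustrative.

def rate_based_rule(name, limit, priority=1):
    return {
        "Name": name,
        "Priority": priority,
        "Statement": {
            "RateBasedStatement": {
                "Limit": limit,            # requests per 5-minute window, per IP
                "AggregateKeyType": "IP",  # count requests per source IP
            }
        },
        "Action": {"Block": {}},
        "VisibilityConfig": {
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": name,
        },
    }

rule = rate_based_rule("rate-limit-per-ip", 2000)
```

A rule like this would be included in the `Rules` list of a web ACL created or updated through the WAFv2 API.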

Conclusion & How KeyCore Can Help

By taking the time to configure the block duration for IP addresses rate limited by AWS WAF, customers can ensure that their applications are protected from volumetric attacks. If you need assistance with configuring AWS WAF, or would like help managing your AWS environment, KeyCore can help. KeyCore provides both professional services and managed services for AWS, and our team of experienced AWS Certified Solutions Architects are ready to help you build, maintain, and secure your AWS environment.

Read the full blog posts from AWS

AWS Compute Blog

Introducing Automatic Deletion of Schedules with Amazon EventBridge Scheduler

Amazon EventBridge Scheduler allows customers to create, run, and manage schedules at scale. This is a powerful capability, but tracking individual schedules can become a burden. To help manage schedules more efficiently, Amazon EventBridge Scheduler now supports automatic deletion of schedules after completion.

Configuring Automatic Deletion in EventBridge Scheduler

Using EventBridge Scheduler, customers can now configure one-time and recurring schedules with an end date to be deleted automatically once they complete. This lowers the cost of managing schedules by removing the need to clean up each one individually.
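The feature is enabled per schedule. The sketch below builds `create_schedule` parameters for a one-time schedule with `ActionAfterCompletion="DELETE"`, which tells EventBridge Scheduler to remove the schedule after its last invocation; the target and role ARNs are placeholders.

```python
# Sketch of scheduler.create_schedule parameters with automatic deletion.
# ARNs and names below are placeholders, not values from the post.

def one_time_schedule_params(name, when, target_arn, role_arn):
    return {
        "Name": name,
        "ScheduleExpression": f"at({when})",  # one-time, e.g. at(2023-08-15T10:00:00)
        "FlexibleTimeWindow": {"Mode": "OFF"},
        "Target": {"Arn": target_arn, "RoleArn": role_arn},
        "ActionAfterCompletion": "DELETE",  # delete the schedule after it runs
    }

params = one_time_schedule_params(
    "one-off-report",
    "2023-08-15T10:00:00",
    "arn:aws:lambda:eu-west-1:123456789012:function:report",
    "arn:aws:iam::123456789012:role/scheduler-invoke",
)
# boto3.client("scheduler").create_schedule(**params)  # requires credentials
```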

The Benefits of Automatic Deletion in EventBridge Scheduler

By utilizing the automatic deletion capability in EventBridge Scheduler, customers save the time and money they would otherwise spend tracking and cleaning up completed schedules manually, allowing them to focus on other activities while keeping the cost of running schedules down.

How KeyCore Can Help

At KeyCore, our AWS-certified experts are highly experienced in setting up and managing AWS EventBridge Scheduler. Our team can help you set up and configure your EventBridge Scheduler to utilize the automatic deletion feature. This will help you save time and money on managing your schedules, allowing you to focus on your core business needs. Contact us today to find out how our experts can help you take advantage of this powerful feature.

Read the full blog posts from AWS

AWS for M&E Blog

The Benefits of AWS for M&E – a Comprehensive Look

In this blog post, we explore the benefits of using AWS for media and entertainment (M&E) workloads, and how KeyCore can help you make the most of your M&E workloads. We will look at the latest version of AWS Thinkbox Deadline, Outpost VFX’s use of AWS to service new clients, the importance of resilient video encoding, and the Live Cloud Production Initiative.

AWS Thinkbox Deadline adds final-pixel render support for real-time animation work in Unreal Engine 5

AWS Thinkbox Deadline 10.3, the latest version of the batch render scheduler, is now available and adds support for Epic Games’ real-time 3D content creation tool, Unreal Engine 5. This release also updates integrations for many other industry-standard digital content creation (DCC) applications. Deadline 10.3 runs on Windows, Linux, and macOS, and supports more than 150 applications and plugins.

AWS enables Outpost VFX to service new clients anywhere in the world in hours

Outpost VFX is a company that creates visual effects for films and episodic television. When it was founded in 2012, the company never expected its on-premises computing infrastructure to max out its building’s electrical system. By using AWS, Outpost VFX can now service new clients anywhere in the world in just hours, something that was never before possible.

Resilient video encoding across multiple AWS regions

In the media and entertainment (M&E) industry, disruption of media workloads can lead to customer churn and damage to the brand. To ensure resilience, workloads need a resilient architecture without being over-engineered. AWS lets media workloads encode video across multiple Regions, providing a more reliable and resilient system.

AWS supports multi-partner interoperability workshop as part of the Live Cloud Production Initiative

In April 2022, AWS launched the Live Cloud Production Initiative (formerly the Virtual Live Remote Production Initiative) to expand the broadcast solution portfolio in collaboration with AWS partners. The aim is to deliver outcomes that benefit partners, customers, and the M&E community. AWS has supported a multi-partner interoperability workshop as part of this initiative, allowing them to collaborate on developing solutions that meet customer needs.

Conclusion:

AWS provides a wide range of solutions to meet the needs of the media and entertainment industry. From the latest version of AWS Thinkbox Deadline, to Outpost VFX’s use of AWS, to resilient video encoding, to the Live Cloud Production Initiative, AWS is enabling media and entertainment companies to improve their operations and better serve their customers. At KeyCore, our team of AWS certified professionals can help you make the most of AWS and create the perfect setup for your M&E workloads.

Read the full blog posts from AWS

AWS Developer Tools Blog

AWS Developer Tools Blog: Introducing Smithy for Python

AWS is excited to introduce a preview of Smithy client generation for Python. This tooling gives developers a new way to generate type-hinted Python clients in the same model-driven manner AWS has used to develop its own services for more than a decade.

Writing and maintaining hand-crafted clients for a web service is a tedious, time-consuming process that demands deep familiarity with both the service and Python. Smithy for Python now offers an easier way for developers to generate these clients, eliminating the need to create and maintain them manually.

What Does Smithy for Python Do?

Smithy for Python generates clients that are optimized for a specific service. This is made possible by the Smithy model, which works from a set of schema definitions. These definitions are used to generate a client tailored to the service the model describes, so developers get a client fully optimized for the web service they are working with.

Smithy for Python also automates the process for generating clients. This means that developers don’t need to manually create and maintain clients for services. The Smithy model handles the generation and maintenance of the relevant clients. This makes it easier for developers to focus on the tasks that need their attention.

How KeyCore Can Help

KeyCore is the leading Danish AWS consultancy that provides both professional services and managed services. We specialize in helping organizations succeed in their cloud migration and ensure that their AWS solutions are optimized for their specific needs. We are highly experienced in AWS technology and have access to all the necessary tools and resources to help guide you in utilizing Smithy for Python. Our team of experts is dedicated to helping you every step of the way.

Read the full blog posts from AWS

AWS Architecture Blog

Multi-Tenancy in the AWS Cloud

The Benefits of Multi-Tenancy

More and more software providers are turning to multi-tenancy to optimize resources and reduce operational costs. With the Amazon Web Services (AWS) Cloud, customers have the opportunity to benefit from the advantages of multi-tenancy. AWS provides customers with the scalability, security, and cost-effectiveness necessary to make the transition to multi-tenancy successful.

Architectural Resiliency

Resiliency is an important factor when considering a multi-tenancy architecture. The AWS Well-Architected Framework defines resilience as “the capability to recover when stressed by load, accidental or intentional attacks, and failure of any part in the workload’s components”. Resilience is vital for businesses with multi-tenancy architectures as it ensures the continuation of services in the event of a disruption.

Medical Imaging and AWS HealthImaging

Medical imaging is essential for patient diagnosis and treatment plans; however, managing, storing, and analyzing medical images can be costly and time-consuming. AWS HealthImaging helps healthcare providers work through these challenges by providing powerful storage and image processing capabilities. With the help of Amazon SageMaker, medical imaging workflows become easier and more efficient.

KeyCore’s AWS Solutions

At KeyCore, we understand the importance of multi-tenancy architectures and the benefits of partnering with AWS. We provide professional services and managed services that help customers design and implement multi-tenancy solutions. Our team of AWS experts can assist you in building a reliable, cost-effective, and secure multi-tenancy architecture that meets the needs of your business. Contact us today to learn more.

Read the full blog posts from AWS

AWS Partner Network (APN) Blog

Using AWS to Modernize and Protect Data Platforms: An Overview

The AWS Partner Network (APN) provides an array of solutions to help organizations modernize and protect their data platforms. Liberty Mutual’s multi-year initiative to migrate and upgrade its primary claims processing platform to Guidewire Software, an AWS Partner, is just one example of the exciting opportunities to leverage AWS for cloud migration. Additionally, AWS Lake Formation can be used to govern data access through the integration with Privacera, and organizations can accelerate their data-driven success with Fivetran and Amazon S3. Managing AWS Account root MFA with CyberArk Privileged Access Manager provides another layer of security, and Amazon CloudWatch can be used to monitor workloads hosted on VMware Cloud on AWS. AI21 Labs’ Jurassic-2 (J2) large language model can be accessed via an AWS Lambda endpoint, and IBM’s energy anomaly detection solution for energy and utilities companies leverages personalized AI. Finally, Sapphire Systems used AWS Migration Acceleration Program (MAP) to migrate to SAP on AWS at scale to increase speed and agility, while Totogi’s Charging-as-a-Service platform helps customers reduce their carbon footprint.

At KeyCore, we can help organizations modernize and protect their data platform on AWS. Our team has extensive experience in leveraging cloud migration solutions to help customers transition to the cloud. We can also help organizations build out their data lake and pipelines for consuming services. In addition, our team can assist with implementing security protocols like managing AWS account root MFA using CyberArk PAM, and monitoring workloads hosted on VMware Cloud on AWS. We also have expertise in working with large language models such as Jurassic-2 and IBM’s energy anomaly detection solutions, as well as solutions for reducing the carbon footprint of mobile networks operations. Finally, our team is experienced in working with SAP on AWS, and we can help customers leverage the AWS Migration Acceleration Program (MAP) framework for their migrations.

Contact us today to learn more about how KeyCore can help you modernize and protect your data platform on AWS.

Read the full blog posts from AWS

AWS HPC Blog

AWS Batch & NFL Player Health

The National Football League (NFL) wanted to use machine learning (ML) to improve player health, so it turned to AWS Batch. Batch enabled the NFL to scale its ML workloads, reducing manual labor by 90% while exceeding human accuracy by 12%.

Fair Share Scheduling in AWS Batch

Fair share scheduling (FSS) in AWS Batch ensures that all jobs are fairly distributed to queues. With FSS, each queue is allocated a share of the instance hours and jobs can be placed based on the share. In this blog, we dive into the details of FSS and illustrate the results of different share policies. We also discuss practical use cases where FSS can be beneficial for Batch.
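As an illustration of how a share policy is expressed, the sketch below builds `create_scheduling_policy` parameters with a fair share distribution. The share identifiers and weights are illustrative; in Batch's fair share scheduling, a smaller `weightFactor` gives a share identifier a higher priority.

```python
# Sketch of an AWS Batch scheduling policy with fair share distribution.
# Identifiers, decay, and weights below are illustrative values.

def fair_share_policy_params(name):
    return {
        "name": name,
        "fairsharePolicy": {
            "shareDecaySeconds": 3600,  # how long past usage influences shares
            "shareDistribution": [
                # smaller weightFactor => higher scheduling priority
                {"shareIdentifier": "research", "weightFactor": 0.5},
                {"shareIdentifier": "prod", "weightFactor": 1.0},
            ],
        },
    }

policy = fair_share_policy_params("team-fair-share")
# boto3.client("batch").create_scheduling_policy(**policy)  # needs credentials
```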

How KeyCore Can Help with AWS Batch

At KeyCore, we provide both professional services and managed services for AWS Batch. Our experienced cloud experts can help you optimize job placement and help you find the right fair share policies for your workloads. We provide end-to-end services from setting up and configuring the AWS environment to integrating machine learning models with AWS Batch.

Read the full blog posts from AWS

AWS Cloud Operations & Migrations Blog

Monitoring Version Compliance of Amazon Elastic Kubernetes Service by Using AWS Config

Amazon Elastic Kubernetes Service (Amazon EKS) simplifies cluster operations by offloading undifferentiated heavy lifting to AWS. With a new Kubernetes release roughly every four months, it is difficult for customers to keep their EKS clusters up to date, especially across multiple AWS accounts. AWS Config helps ensure that customers are running the latest version of EKS for their clusters, helping them maintain compliance and get the best performance. Through AWS Config, customers can monitor the current version of their EKS clusters, as well as view the available updates and the release notes for each version. By setting up a Config rule, customers can ensure that their EKS clusters are always running the latest version.

Configuring Thresholds for Creating Health Events in Amazon CloudWatch Internet Monitor

Amazon CloudWatch Internet Monitor now allows customers to configure the thresholds at which health events are created for their application’s internet traffic. Internet Monitor provides near-continuous measurements of availability and performance metrics such as latency, jitter, and packet loss across monitored geographies; the configured thresholds determine when drops in the availability or performance scores trigger a health event. Customers can use this information to gain insight into their application’s performance and quickly identify and address issues.
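A threshold configuration of this kind is passed to the monitor as a health events config, sketched below. The score thresholds are illustrative percentages, and the monitor name is a placeholder.

```python
# Sketch of the HealthEventsConfig used when updating a CloudWatch
# Internet Monitor monitor. A health event fires when the corresponding
# score drops below the threshold; values here are illustrative.

def health_events_config(availability_pct=96.0, performance_pct=95.0):
    return {
        "AvailabilityScoreThreshold": availability_pct,
        "PerformanceScoreThreshold": performance_pct,
    }

cfg = health_events_config()
# boto3.client("internetmonitor").update_monitor(
#     MonitorName="my-app-monitor", HealthEventsConfig=cfg)  # placeholder name
```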

Importing Existing AWS Control Tower Accounts to Account Factory for Terraform

AWS Control Tower Account Factory for Terraform (AFT) allows customers to provision and customize their account in AWS Control Tower using Terraform. AFT also enables customers to import existing AWS Control Tower managed accounts into AFT management. This allows customers to manage the global and account-specific customization at scale using Terraform. Customers can also use AFT to automate their account creation process and reduce the workload associated with manual account creation.

Gaining Actionable Business Insights with Monitoring of Amazon MSK with Amazon Managed Service for Prometheus and Amazon Managed Grafana

Monitoring is a critical aspect of maintaining the health and performance of any distributed system. Apache Kafka-based applications rely on robust monitoring of their Kafka clusters to ensure real-time data processing. Amazon MSK clusters can now be monitored with Amazon Managed Service for Prometheus and Amazon Managed Grafana, which help customers watch their Kafka clusters with minimal effort. With these services, customers can gain actionable insights into their Kafka clusters and quickly troubleshoot any issues.

How to Perform a Well-Architected Framework Review – Part 3

As discussed in the previous two blog posts, the Well-Architected Framework Review (WAFR) consists of three phases: Prepare, Review, and Improve. The Improve phase is all about taking the feedback from the Review phase and applying it to your workloads. This involves creating action plans for any issues identified, implementing the changes necessary to fix those issues, and then verifying that the changes were effective. Customers should also use this phase to educate their teams on the importance of cloud best practices, as well as the benefits of adhering to them.

Leveraging Infrastructure as Code for AWS Mainframe Modernization with AWS CloudFormation

AWS Mainframe Modernization service supports AWS CloudFormation templates to manage environments and applications. This allows customers to leverage best practices of Infrastructure-as-Code (IaC), such as automated provisioning, configuration, and deployment. CloudFormation provides many benefits for mainframe applications modernized with AWS cloud, such as improved visibility and governance, easy scalability, and better security. Additionally, customers can quickly replicate their environment and save time and money.

Managing Continuous Compliance by Using AWS Config Configuration Recorder Resource Type

AWS Config now has support for configuration recorder as a resource type. This configuration item (CI) helps customers track changes to the state of AWS Config configuration recorder. Customers can use this CI to ensure that the state of the configuration recorder has not drifted from its original state. Additionally, customers can use this CI to ensure compliance with their desired configuration baseline.

Optimizing Alarm Lifecycle with Amazon CloudWatch Metrics Insights Alarms

Amazon CloudWatch Metrics Insights Alarms help customers easily monitor and set alarms on their dynamically changing resources. This simplified way of creating alarms automatically adjusts to resources that are added or removed from the fleets. Customers can also get rid of dangling alarms that are cluttering their view and wasting resources. This helps customers save time and money while optimizing their alarm lifecycle.
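The dynamic behavior comes from driving the alarm with a Metrics Insights query instead of a fixed metric, as in the sketch below. The query aggregates across whatever instances currently exist, so one alarm covers a changing fleet; the query and threshold are illustrative.

```python
# Sketch of put_metric_alarm parameters using a Metrics Insights query.
# The SELECT expression re-evaluates against the current fleet, so there is
# no per-instance alarm to create or clean up. Values are illustrative.

def fleet_cpu_alarm_params(alarm_name, threshold=90.0):
    query = 'SELECT MAX(CPUUtilization) FROM SCHEMA("AWS/EC2", InstanceId)'
    return {
        "AlarmName": alarm_name,
        "Metrics": [
            {"Id": "q1", "Expression": query, "Period": 300, "ReturnData": True}
        ],
        "ComparisonOperator": "GreaterThanThreshold",
        "Threshold": threshold,
        "EvaluationPeriods": 3,
    }

alarm = fleet_cpu_alarm_params("fleet-max-cpu")
# boto3.client("cloudwatch").put_metric_alarm(**alarm)  # requires credentials
```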

Increasing Visibility and Governance on Cloud with AWS Cloud Operations Services – Part 2

This blog post continues the series on increasing visibility and governance on the cloud with AWS Cloud Operations services. Part 1 covered foundational patterns for preparing environments for centralized operations and governance. In this blog (Part 2), we show how to use AWS Systems Manager Parameter Store and AWS Backup to further enhance your visibility and governance capabilities. Customers can also use Amazon QuickSight to gain insights from their cloud operations data.

How KeyCore Can Help

At KeyCore, we provide both professional and managed services to help our customers with their AWS Cloud Operations & Migrations. Our team of experienced AWS consultants can help you leverage the AWS Cloud Operations services to ensure visibility and governance across your cloud environment. We can also help you automate and streamline your cloud operations, helping you reduce costs and increase efficiency. To learn more about how KeyCore can help you with your cloud operations, visit our website at https://www.keycore.dk.

Read the full blog posts from AWS

AWS for Industries

The Financial Services Industry (FSI) has long relied on messaging standards to share and receive payments, and AWS can help organizations adopt industry-standard messaging solutions. With AWS, organizations can take advantage of an event-driven architecture for ISO 20022 messaging workflows, enabling them to meet the increasing demands in the FSI and other industries.

Bio-Rad Laboratories Leverages IoT Solutions

Bio-Rad Laboratories, Inc. (Bio-Rad) uses AWS Systems Manager to build an IoT device, which allows them to provide a next-generation quality control informatics solution. This solution helps clinical diagnostics labs deliver precise and accurate results with every run, and streamlines workflows, reduces errors, and facilitates compliance.

5 Key Considerations for Amazon Elastic File System

AWS customers running workloads on Amazon Elastic File System (Amazon EFS) can ensure compliance, data protection, isolation of compute environments, audits with APIs, and access control and security by following these five key considerations. Reference architectures, security best practices, and other guidance are provided to help customers meet these and other requirements.

Smart EV Routing for Optimized EV Travel

WirelessCar offers Smart EV Routing, a Software-as-a-Service (SaaS) API designed for automakers to offer electric vehicle (EV) customers a state-of-the-art route planner. This accurate and reliable EV route planning helps reduce time and energy costs, as well as prevents unnecessary delays due to range anxiety.

BlackBerry QNX on AWS Workshop

BlackBerry Limited recently released the BlackBerry QNX on AWS workshop, a hands-on lab experience to help AWS customers quickly grasp embedded software development with QNX® OS on AWS. This workshop provides guidance on using the AWS Cloud, enabling customers to easily and securely build and deploy embedded software development workloads.

AWS offers a range of solutions in the Financial Services Industry as well as other industries. Through AWS, customers can take advantage of event-driven architectures for ISO 20022 messaging workflows, as well as IoT and embedded software development solutions. KeyCore provides professional services and managed services to help organizations leverage the power of AWS to meet their needs.

Read the full blog posts from AWS

AWS Marketplace

Cloud Technology’s Role in Insurance Modernization

The findings from a 2023 Forrester survey revealed that insurers have yet to tap into the full potential of cloud technology. To explore this potential, industry leaders from Forrester, Hyland, Montoux, and Unqork recently joined forces with AWS to host a webinar titled Drive growth through insurance modernization. In this post, we will highlight the trends that Forrester presented along with several of the panelists’ unique insights, and how KeyCore can help insurers modernize their processes.

Discovering Insurer’s Cloud Readiness

The survey results showed that insurers are still in the early stages of cloud adoption. While most insurers understand the potential of cloud technology, many haven’t yet implemented it. One of the biggest barriers to adoption? Legacy systems. These systems are often outdated and difficult to integrate with, making it difficult for insurers to move to the cloud.

Integrating Cloud Technology

The panelists discussed how insurers can take advantage of cloud technology to streamline processes and improve customer experience. By leveraging the scalability of the cloud, insurers can offer customers more personalized products and services. They can also improve operational efficiency and reduce costs by automating processes and leveraging data to make smarter decisions.

The Benefits of Cloud Technology

The panelists also discussed the advantages of cloud technology to insurers, such as enhanced agility, greater scalability, and improved customer experience. By leveraging the cloud, insurers can quickly and easily modify existing products and services to meet the changing needs of customers. Additionally, they can quickly scale up to meet customer demand, as well as identify customer trends and preferences to offer more tailored products and services.

KeyCore’s Role in Modernizing Insurance Technology

At KeyCore, we understand the challenges that insurers face in adopting and integrating cloud technology. That’s why we offer professional services and managed services to help insurers modernize their processes and leverage the power of the cloud. By leveraging our expertise, insurers can quickly and easily integrate cloud technology to improve operational efficiency and customer experience.

Our team of experienced professionals can help insurers move their business processes to the cloud, as well as design and develop customized solutions to meet their specific needs. We understand the unique requirements of insurers and can help them streamline processes, reduce costs, and improve customer service.

We’re proud to help insurers modernize their processes and take advantage of the power of the cloud. Contact us today to find out how we can help you drive growth through insurance modernization.

Read the full blog posts from AWS

The latest AWS security, identity, and compliance launches, announcements, and how-to posts.

Securely Share Resources via AWS RAM, Continuously Scan for Vulnerabilities with Amazon Inspector, and Receive Alerts for Changes to IAM Configuration

Sharing Resources with AWS RAM

Administrators can use AWS Resource Access Manager (RAM) to manage resources across accounts and organizations in a secure and organized way. With RAM, resources are provisioned once and then shared with other accounts. AWS RAM supports a variety of resource types, and is an efficient way to share resources within an organization or an OU.
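
To make this concrete, here is a minimal sketch of how a resource share could be created with boto3. The share name, ARNs, and OU are purely illustrative assumptions, not values from the AWS post.

```python
# Hedged sketch: parameters for sharing a subnet with an OU via AWS RAM.
# All names and ARNs below are illustrative placeholders.
share_params = {
    "name": "shared-network",
    "resourceArns": [
        "arn:aws:ec2:eu-west-1:111122223333:subnet/subnet-0example",
    ],
    "principals": [
        "arn:aws:organizations::111122223333:ou/o-example/ou-example",
    ],
    # Restrict sharing to accounts inside the organization.
    "allowExternalPrincipals": False,
}

# With credentials configured, the share would be created like this
# (not executed here):
# import boto3
# boto3.client("ram").create_resource_share(**share_params)
```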

Continuously Scanning with Amazon Inspector

Amazon Inspector is an automated vulnerability management service that can scan workloads, such as AWS Lambda functions, for software vulnerabilities and unintended network exposure. This blog post demonstrates how to enable Amazon Inspector for one or more AWS accounts and be notified when a vulnerability is detected.

Receiving Alerts for Changes to IAM Configuration

It is essential for AWS administrators to create protective controls in the security configuration. As a detective control, it is possible to monitor changes to IAM configuration and create alerts when changes occur. This post was originally published in 2015, and was updated in 2023.
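
One common way to implement such a detective control is an EventBridge rule that matches IAM API calls recorded by CloudTrail and forwards them to an alerting target. The sketch below is illustrative; the rule name, the chosen IAM actions, and the SNS ARN are assumptions, not details from the AWS post.

```python
# Hedged sketch: build an EventBridge event pattern that matches
# IAM configuration changes recorded by CloudTrail.

def iam_change_event_pattern(actions):
    """Return an EventBridge pattern matching the given IAM API calls."""
    return {
        "source": ["aws.iam"],
        "detail-type": ["AWS API Call via CloudTrail"],
        "detail": {
            "eventSource": ["iam.amazonaws.com"],
            "eventName": sorted(actions),
        },
    }

pattern = iam_change_event_pattern(
    {"CreateUser", "AttachUserPolicy", "DeleteRolePolicy"}
)

# The rule could then be registered like this (not executed here):
# import boto3, json
# events = boto3.client("events")
# events.put_rule(Name="iam-config-changes", EventPattern=json.dumps(pattern))
# events.put_targets(
#     Rule="iam-config-changes",
#     Targets=[{"Id": "sns", "Arn": "arn:aws:sns:eu-west-1:111122223333:iam-alerts"}],
# )
```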

KeyCore Can Help

KeyCore is the leading Danish AWS consultancy. We provide a wide range of professional and managed services related to security, identity, and compliance. Our team of experienced AWS professionals can assist with setting up AWS RAM, Amazon Inspector, and IAM configuration alerts. We ensure that you can benefit from the latest in AWS security, identity, and compliance, so contact us today to learn more.

Read the full blog posts from AWS

AWS Startups Blog

Compute for Climate Fellowship and Building a Serverless Dynamic DNS System

Compute for Climate Fellowship

As climate change continues to be a major threat to our planet, tech solutions are needed to help address the crisis. AWS has created the Compute for Climate Fellowship to recognize and support projects that are actively working on solutions. This fellowship provides an opportunity to have projects showcased at the AWS re:Invent conference in November 2023. The deadline for submitting applications is August 31, 2023, and applications submitted after September 1 will be considered for development in 2024.

Building a Serverless System with AWS

Creating a serverless system using AWS services and a few lines of code is a simple, cost-effective, and scalable solution for startups. This allows them to focus on their core business logic, without worrying about scaling and maintaining the underlying infrastructure. With AWS, startups can quickly and easily build the serverless system they need, allowing them to get up and running quickly.

KeyCore Can Help

At KeyCore, we provide both professional services and managed services, so startups can get the help they need while building their serverless systems. Our expertise in AWS allows us to help startups to create their serverless systems quickly and efficiently. We understand the challenges startups face, and we can help them to overcome those and create the right serverless system for their needs. If you need help getting started with serverless systems or need some support along the way, don’t hesitate to contact us.

Read the full blog posts from AWS

Front-End Web & Mobile

AWS Amplify Logger for Swift and Android Developers

Amazon Web Services (AWS) recently released the AWS Amplify Logger for Swift and Android developers, making it easier for app developers to send logs to Amazon CloudWatch. With the Amplify Logger, developers can quickly configure the logging levels they need, to keep track of any errors happening in different parts of their apps.

Benefits of Using CloudWatch Logging

CloudWatch Logging is beneficial for developers, since it gives them a central repository for monitoring and viewing their logs. This makes it easier to debug and identify errors quickly, and the logs can be used to troubleshoot potential issues. Additionally, CloudWatch Logging can be used to set up alarms to notify developers if an issue occurs, so they can take action immediately.

KeyCore’s Professional Services for AWS Logging

At KeyCore, we provide a range of professional services for AWS Logging. Our experienced team of AWS experts can help you set up CloudWatch Logging for your app, so you can get the most out of the Amplify Logger. We also provide managed services, ensuring that our clients’ applications are running as optimally as possible. To learn more about the professional and managed services we provide, visit https://www.keycore.dk.

Read the full blog posts from AWS

Innovating in the Public Sector

Innovating in the Public Sector with Cloud-Based Digital Pathology

INFINITT Healthcare, a HealthTech company based in South Korea, is revolutionizing digital pathology by streamlining the extraction and transformation of Whole Slide Imaging (WSI) output into a Digital Imaging and Communications in Medicine (DICOM) file. This digital version of WSI helps pathologists save time and resources and accelerates time-to-solution for patient care. The resulting files are stored in the cloud-based Digital Pathology System (DPS) built on AWS.

Storing Historical Geospatial Data on AWS

Amazon DynamoDB is being used to store historical geospatial data, such as weather data. This approach allows for virtually unlimited amounts of data storage, combined with fast query performance for interactive UIs. Developers can use DynamoDB to filter by date or location, and enable cost-efficient querying.
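
A common key design for this pattern is to use the location as the partition key and an ISO-8601 timestamp as the sort key, so one Query call retrieves a location's history for a date range. The sketch below is illustrative; the table name and attribute names are assumptions, not taken from the AWS post.

```python
# Hedged sketch: build DynamoDB Query parameters for historical weather
# observations keyed by location (partition key "loc") and timestamp
# (sort key "ts"). All names are illustrative placeholders.

def history_query_params(table, location, start, end):
    """Return Query parameters for one location over a time window."""
    return {
        "TableName": table,
        "KeyConditionExpression": "loc = :loc AND ts BETWEEN :start AND :end",
        "ExpressionAttributeValues": {
            ":loc": {"S": location},
            ":start": {"S": start},
            ":end": {"S": end},
        },
    }

params = history_query_params(
    "WeatherHistory",
    "station-042",
    "2023-07-01T00:00:00Z",
    "2023-07-31T23:59:59Z",
)

# A boto3 client would then execute (not run here):
# boto3.client("dynamodb").query(**params)
```

Because the sort key is lexicographically ordered ISO-8601 text, the BETWEEN condition gives an efficient range scan within a single partition.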

KeyCore – AWS Cloud Solutions

At KeyCore, we specialize in providing professional services and managed services related to AWS. Our team of experienced engineers can help you implement the latest AWS services to your benefit. We can help you build a digital pathology system to use in the public sector, and develop a web-based UI to query your historical geospatial data. Our experienced AWS consultants are well versed in CloudFormation, Typescript, and AWS SDK for JavaScript v3.

To learn more about KeyCore and our offerings, visit our website at https://www.keycore.dk.

Read the full blog posts from AWS

The Internet of Things on AWS – Official Blog

IoT on AWS – Best Practices for Data Ingestion, Certificate Requirements and Device Discoverability and OEE with AWS IoT SiteWise

Best Practices for Data Ingestion

Understanding the behavior expected from the device is key when designing a scalable data ingestion technique. How is the device sending data and how much, what pattern does the data follow, and what latency is required? Answering these questions will help define the best approach for data ingestion.

In order to address scalability, data should be sent asynchronously, if possible, to reduce the load on the device. It is also beneficial to batch data as much as possible. AWS IoT Core and Amazon Kinesis can be used to facilitate ingestion.

AWS IoT Core is an AWS managed service for collecting and processing data from IoT devices and routing it to other AWS services. Devices can connect to AWS IoT Core over protocols such as MQTT, HTTPS, and MQTT over WebSockets, and publish their data to MQTT topics. The message broker in AWS IoT Core handles these messages, and the rules engine can route them to other AWS services for storage and further processing.

Amazon Kinesis allows for streaming data to the AWS Cloud. Data from IoT devices can be sent either to Amazon Kinesis Data Firehose or to a Kinesis data stream. Kinesis Data Firehose buffers incoming data and automatically delivers it to destinations such as Amazon S3 and Amazon Redshift, while the Kinesis Data Streams PutRecords API lets devices send records in batches to a Kinesis data stream.
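
The batching advice above can be sketched as a small helper that groups device readings into batches that fit the PutRecords limit of 500 records per call. The reading fields, stream name, and batch size in the example are illustrative assumptions, not values from the AWS post.

```python
import json

# Hedged sketch: group device readings into PutRecords-sized batches.
MAX_RECORDS_PER_CALL = 500  # PutRecords accepts at most 500 records per call.

def to_put_records_batches(readings, partition_key_field="device_id",
                           max_batch=MAX_RECORDS_PER_CALL):
    """Yield lists of PutRecords entries, each at most max_batch long."""
    batch = []
    for reading in readings:
        batch.append({
            "Data": json.dumps(reading).encode("utf-8"),
            "PartitionKey": str(reading[partition_key_field]),
        })
        if len(batch) == max_batch:
            yield batch
            batch = []
    if batch:
        yield batch

# Illustrative readings; a small max_batch makes the chunking visible.
readings = [{"device_id": i % 3, "temp_c": 20 + i} for i in range(7)]
batches = list(to_put_records_batches(readings, max_batch=5))

# Each batch could then be sent with (not executed here):
# boto3.client("kinesis").put_records(StreamName="iot-ingest", Records=batch)
```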

Updating Certificate Requirements

AWS IoT Core has announced that control plane endpoints and newly supported customer endpoints will move to the TLS 1.2 specification, along with a renewal of the Symantec Server Intermediate Certificate Authority (ICA). This change ensures the highest level of security for data in transit to and from AWS IoT Core endpoints.

This switch requires developers and customers to update their applications, gateways, and connected devices to communicate with AWS IoT Core over TLS 1.2.

Improving Device Discoverability

Using attributes for AWS IoT thing types is a great way to search for and discover a particular device or a set of devices based on their identities and capabilities. AWS IoT Core now supports up to 50 attributes for each thing type, which can be used to store device information like serial numbers, locations, and other metadata. These attributes can be used to filter devices for targeted operations; for example, devices in certain locations can be identified and used for a specific action, like firmware updates.
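
Such attribute-based filtering is typically expressed as a fleet indexing query string. The helper below is an illustrative sketch; the thing type and attribute names are assumptions, not taken from the AWS post.

```python
# Hedged sketch: compose an AWS IoT fleet indexing query string that
# selects things by thing type and attribute values, e.g. to target
# devices in one location for a firmware update.

def fleet_query(thing_type, **attributes):
    """Build a fleet indexing query string from a thing type and attributes."""
    terms = [f"thingTypeName:{thing_type}"]
    terms += [f"attributes.{k}:{v}" for k, v in sorted(attributes.items())]
    return " AND ".join(terms)

query = fleet_query("temperature-sensor",
                    location="copenhagen", firmware="1.2.0")
# → "thingTypeName:temperature-sensor AND attributes.firmware:1.2.0
#    AND attributes.location:copenhagen"

# With fleet indexing enabled, the search would run as (not executed here):
# boto3.client("iot").search_index(queryString=query)
```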

Calculating Overall Equipment Effectiveness (OEE)

Calculating Overall Equipment Effectiveness (OEE) is a complex task. To simplify the process, AWS IoT SiteWise can be used to collect, store, transform, and display the calculations. This blog post provides a deep dive on how to calculate OEE using AWS IoT SiteWise native capabilities.

AWS IoT SiteWise supports the OEE calculation process by enabling customers to collect data from various sources like sensors, PLCs, and databases, then transforming that data into meaningful metrics, and finally displaying those metrics in dashboards. AWS IoT SiteWise also provides APIs to programmatically access the data and results of the OEE calculations.
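
The underlying arithmetic is the standard OEE formula, Availability x Performance x Quality, which the SiteWise transforms and metrics would express over the collected data. The sketch below shows the generic calculation; the example shift numbers are illustrative, not from the AWS post.

```python
# Standard OEE formula: Availability x Performance x Quality.
# Availability = run time / planned production time
# Performance  = (ideal cycle time x total count) / run time
# Quality      = good count / total count

def oee(planned_minutes, run_minutes, ideal_cycle_minutes,
        total_count, good_count):
    """Return OEE as a fraction between 0 and 1."""
    availability = run_minutes / planned_minutes
    performance = (ideal_cycle_minutes * total_count) / run_minutes
    quality = good_count / total_count
    return availability * performance * quality

# Illustrative shift: 480 planned minutes, 420 running, 1-minute ideal
# cycle time, 380 parts produced, of which 361 were good.
value = oee(480, 420, 1.0, 380, 361)
```

In SiteWise terms, each factor would typically be modeled as a transform on asset properties, with the final product computed as a metric over a time window.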

At KeyCore, we specialize in helping customers implement solutions on AWS to best suit their needs. With our expertise in AWS services, we can guide you through the process of setting up your IoT devices and data ingestion processes. We can also help you design your system to utilize AWS IoT Core and AWS IoT SiteWise to collect, store, and analyze data to calculate OEE and other metrics. Contact us today to learn more about how we can help you get the most out of AWS.

Read the full blog posts from AWS
