Summary of AWS blogs for the week of Monday, May 15, 2023

In the week of Monday, May 15, 2023, AWS published 87 blog posts. Here is an overview of what happened.

Topics Covered

Desktop and Application Streaming

How Neo Financial and Veyon use Amazon WorkSpaces for Zero Trust and Virtual Labs

Organizations handling personal financial data must meet strict compliance requirements and maintain the highest levels of security, all while equipping remote workers with the tools they need and supporting Zero Trust initiatives. Neo Financial, a Canadian financial technology firm, used Amazon WorkSpaces to achieve these goals. Amazon WorkSpaces allows Neo Financial to provide remote access to resources and applications while maintaining its security and compliance requirements.

Zero Trust Security

Neo Financial adopted an Amazon WorkSpaces Zero Trust architecture with Amazon WorkDocs and Amazon WorkLink. With Amazon WorkDocs, Neo Financial uses a secure, cloud-based storage solution for all the files its remote staff need to store, share, and collaborate on. Amazon WorkLink provides secure access to websites and resources on an approved list, so Neo Financial employees can reach company resources without having to use a Virtual Private Network (VPN).

Compliance Requirements

Neo Financial also leveraged Amazon WorkSpaces to maintain compliance with the Payment Card Industry Data Security Standard (PCI DSS) and the Personal Information Protection and Electronic Documents Act (PIPEDA). Amazon WorkSpaces helps Neo Financial ensure that all customer data is stored securely and is not exposed to unauthorized access. It also helps reduce compliance audit costs and keeps Neo Financial aligned with the latest version of the PCI DSS.

Virtual Labs

Organizations can also leverage Amazon WorkSpaces to create virtual labs. Veyon, a solution for remote control and monitoring of virtual labs, can be used with Amazon WorkSpaces to create a secure and reliable virtual lab environment. With Veyon, teachers can monitor student activity in real time and take remote control of the lab environment, ensuring that students access only the resources, applications, and websites approved for use in the lab, while reviewing student progress and providing assistance when needed.

KeyCore and Amazon WorkSpaces

At KeyCore, we understand the importance of providing clients with secure remote access solutions while adhering to strict compliance requirements. Our team of certified professionals can help you set up and configure your Amazon WorkSpaces environment, as well as provide guidance on the best practices for Zero Trust security and compliance. Contact us today to learn more about how KeyCore can help you leverage Amazon WorkSpaces for your remote access needs.

Read the full blog posts from AWS

AWS DevOps Blog

How Cirrusgo Achieved Rapid Resolution with Amazon DevOps Guru

Cirrusgo, a software engineering firm, recently ran into database performance issues. They were able to resolve them quickly thanks to Amazon DevOps Guru for RDS, a feature that uses machine learning to help organizations identify and resolve performance-related issues in their AWS environment.

How it Works

Amazon DevOps Guru relies on a combination of ML-driven algorithms and operational best practices to detect performance issues in databases, networks, and compute resources. These algorithms analyze millions of lines of raw operational data to identify when an issue is present. Once an issue is detected, the service provides recommendations on how to resolve it.
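As a rough illustration of what this looks like in practice, the sketch below uses boto3 to list any ongoing reactive insights in an account. The call shapes follow the published DevOps Guru API; the choice of filter is only an example.

```python
import boto3

# List ongoing reactive insights (e.g. sudden RDS performance anomalies)
# detected by Amazon DevOps Guru in the current account and region.
guru = boto3.client("devops-guru")

response = guru.list_insights(StatusFilter={"Ongoing": {"Type": "REACTIVE"}})

for insight in response.get("ReactiveInsights", []):
    print(insight["Name"], insight["Severity"], insight["Status"])
```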

Cirrusgo’s Results

Cirrusgo used Amazon DevOps Guru for RDS to quickly identify and resolve their database performance issue. This reduced the impact on their business and let them continue providing the high-quality product and service they are known for.

How KeyCore Can Help

At KeyCore, we offer both professional and managed services to help organizations get the most out of their AWS environment. We are experienced in leveraging the Amazon DevOps Guru for RDS to ensure that our clients can quickly identify and address any performance-related issues. Our team of AWS experts will work with you to create the best solution for your organization. Contact us today to learn more about our services and how we can help you get the most out of your cloud environment.

Read the full blog posts from AWS

AWS for SAP

Introduction to AWS Application Load Balancer for SAP Enterprise Portal

SAP Enterprise Portal (EP) is a Java-based application that requires customers to use SAP Web Dispatcher as the entry point for their HTTP(S) requests. SAP Web Dispatcher takes care of distributing the requests across application servers, and for web-based HTTP(S) requests it needs to be reachable from the internet. This is where the AWS Application Load Balancer (ALB) comes in.

How is ALB different from Classic Load Balancer?

The AWS Application Load Balancer (ALB) provides a modern approach to distributing incoming requests to application servers, offering features that go beyond the Classic Load Balancer, including SSL/TLS termination, content-based routing, and automatic scaling of load balancer capacity.

Using ALB for SAP Enterprise Portal

The ALB is a cost-effective way to distribute requests for SAP EP while providing secure and reliable access to the application. With the ALB, customers can use SSL/TLS to encrypt requests and responses between the application and the internet, and content-based routing to direct requests to different application servers depending on the request.
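To make the content routing concrete, here is a minimal boto3 sketch that forwards requests matching the SAP portal path to a target group fronting SAP Web Dispatcher. The ARNs and the /irj/* path pattern are placeholders to adapt to your landscape.

```python
import boto3

elbv2 = boto3.client("elbv2")

# Forward portal traffic (here /irj/*, a placeholder path) to the target
# group that fronts the SAP Web Dispatcher instances.
elbv2.create_rule(
    ListenerArn="arn:aws:elasticloadbalancing:eu-west-1:111122223333:listener/app/sap-ep/abc/def",
    Priority=10,
    Conditions=[
        {"Field": "path-pattern", "PathPatternConfig": {"Values": ["/irj/*"]}}
    ],
    Actions=[
        {"Type": "forward",
         "TargetGroupArn": "arn:aws:elasticloadbalancing:eu-west-1:111122223333:targetgroup/sap-webdisp/0123456789abcdef"}
    ],
)
```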

Benefits of using ALB for SAP Enterprise Portal

By using the ALB in conjunction with SAP Web Dispatcher for SAP Enterprise Portal, customers can benefit from its features and cost savings. These include:

  • SSL/TLS termination for secure communication.
  • Content routing to route requests to different application servers based on the request.
  • Automated provisioning of resources, which can reduce operational overhead and improve scalability.
  • Cost savings due to the ALB’s pay-per-use pricing model.

KeyCore Professional Services and Managed Services for SAP Enterprise Portal

KeyCore provides both professional and managed services for customers that need assistance with configuring and managing their SAP Enterprise Portal deployments. Our team of experienced AWS certified consultants can help customers with the deployment and management of their SAP Enterprise Portal applications, as well as providing ongoing support for their deployments.

We also provide managed services for customers that need help managing their SAP Enterprise Portal applications on AWS. Our managed services team can help customers with the deployment and configuration of their applications, as well as providing ongoing maintenance and support.

If you need help with your SAP Enterprise Portal applications on AWS, contact KeyCore today and let us show you how we can help.

Read the full blog posts from AWS

Official Machine Learning Blog of Amazon Web Services

Artificial Intelligence and Machine Learning Solutions From Amazon Web Services

Amazon Web Services (AWS) offers a suite of products and services to help customers develop machine learning (ML) and artificial intelligence (AI) solutions. This article will discuss some of the products and services available from AWS, including Amazon SageMaker, Amazon Kendra, and Amazon Textract.

Image-to-Speech Generative AI Application using Amazon SageMaker and Hugging Face

AWS and Hugging Face recently announced a collaboration to make generative AI (artificial intelligence that can create new content) more accessible and cost-efficient. To demonstrate this, they developed an image-to-speech generative AI application that helps people with visual impairments understand images they may not be able to see. The application is built with Amazon SageMaker, a fully managed service that provides developers and data scientists with the ML tools they need to build, train, and deploy ML models quickly and cost-effectively.

Microsoft SharePoint Connector (V2.0) for Amazon Kendra

Amazon Kendra is an intelligent search service powered by ML. It offers data source connectors to simplify the process of ingesting and indexing content, wherever it resides. The Microsoft SharePoint connector (V2.0) allows customers to index content stored in Microsoft SharePoint, including both structured and unstructured data, and make it available through Amazon Kendra’s search services. This makes it easier for organizations to find and use critical data stored in SharePoint.

Serverless Meeting Summarization Backend with Large Language Models on Amazon SageMaker JumpStart

AWS and Hugging Face have also collaborated to make it easier for customers to build serverless meeting summarization backends with large language models on Amazon SageMaker JumpStart. JumpStart is the ML hub of Amazon SageMaker that provides access to high-quality, open-source language models and pretrained models that can be used to build custom ML solutions. This solution makes it easier for customers to generate summaries of meetings quickly and cost-effectively.

Prepare Training and Validation Datasets for Facies Classification using Snowflake Integration and Train Using Amazon SageMaker Canvas

Facies classification is the process of segmenting lithologic formations from geologic data at the wellbore location. To make it easier for customers to classify facies, this blog post outlines a process for preparing training and validation datasets using Snowflake integration and training with Amazon SageMaker Canvas. The process begins by ingesting logs into Snowflake and transforming the data into a usable format. Then, the labeled data is exported to Amazon S3 where it can be used for training with Amazon SageMaker Canvas.

GPT-NeoXT-Chat-Base-20B Foundation Model for Chatbot Applications is Now Available on Amazon SageMaker

Together Computer’s GPT-NeoXT-Chat-Base-20B language foundation model is now available to customers through Amazon SageMaker JumpStart. GPT-NeoXT-Chat-Base-20B is an open-source foundation model for building conversational bots, and JumpStart makes it easy to use it in custom chatbot solutions.
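Deploying a JumpStart model typically takes only a few lines with the SageMaker Python SDK. The sketch below is illustrative: the model_id string and the prompt format are assumptions, so check the JumpStart catalog for the exact identifier before running it.

```python
from sagemaker.jumpstart.model import JumpStartModel

# model_id is illustrative -- look up the exact identifier for
# GPT-NeoXT-Chat-Base-20B in the SageMaker JumpStart catalog.
model = JumpStartModel(model_id="huggingface-textgeneration2-gpt-neoxt-chat-base-20b")
predictor = model.deploy()

# The <human>/<bot> turn format follows the model card's chat convention.
print(predictor.predict({"inputs": "<human>: What is Amazon SageMaker?\n<bot>:"}))
```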

Demand Forecasting at Getir Built with Amazon Forecast

This blog post from Getir describes how they built a demand forecasting solution using Amazon Forecast. Getir is a tech company that has revolutionized last-mile delivery with its “groceries in minutes” delivery proposition. To create their forecasting model, they ingested data into Amazon Forecast and trained the model, then evaluated and deployed the model.

Introducing Amazon Textract Bulk Document Uploader for Enhanced Evaluation and Analysis

Amazon Textract is a machine learning service that automatically extracts text, handwriting, and data from any document or image. To make it simpler to evaluate the capabilities of Amazon Textract, they have launched a new Bulk Document Uploader feature on the Amazon Textract console that enables customers to quickly process their own set of documents and get results in a format that is easy to analyze.
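The Bulk Document Uploader itself is a console feature and needs no code, but for programmatic evaluation the same extraction is a single API call. A minimal sketch with boto3, using placeholder bucket and object names:

```python
import boto3

textract = boto3.client("textract")

# Extract text from one document in S3 (bucket and key are placeholders).
result = textract.detect_document_text(
    Document={"S3Object": {"Bucket": "my-bucket", "Name": "sample-invoice.png"}}
)

# Print the detected lines of text in reading order.
for block in result["Blocks"]:
    if block["BlockType"] == "LINE":
        print(block["Text"])
```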

At KeyCore, our experienced team of AWS experts can help you leverage all of the products and services discussed in this article to enhance your ML and AI solutions. From helping you set up and configure products, to giving advice and guidance on how to use them most effectively, our experts are here to help. Contact us today to learn more about how we can help you accelerate your AI and ML projects.

Read the full blog posts from AWS

Announcements, Updates, and Launches

Announcements, Updates, and Launches

Amazon SageMaker Geospatial Capabilities Now Generally Available

Amazon SageMaker now provides geospatial capabilities, allowing data scientists and machine learning (ML) engineers to build, train, and deploy ML models using geospatial data. The geospatial ML support includes access to readily available geospatial data, purpose-built processing operations and open-source libraries, pre-trained ML models, and built-in visualization tools. First previewed at AWS re:Invent 2022, the capabilities are now generally available with additional security updates and use case samples.

By using geospatial ML, customers can analyze and predict trends for situations like understanding how certain factors like land usage, weather, or elevation affect the spread of disease, or how traffic patterns change due to construction. With this new capability, customers can build reliable ML models faster and more efficiently.

Simplifying the Investigation of AWS Security Findings with Amazon Detective

Amazon Detective helps customers investigate potential security issues. It collects and analyzes events from AWS CloudTrail logs, Amazon Virtual Private Cloud (Amazon VPC) Flow Logs, Amazon GuardDuty findings, and Amazon Elastic Kubernetes Service (Amazon EKS) audit logs. With Amazon Detective, customers can visualize this security data to help spot issues.

The service also includes an automated investigation feature, which uses machine learning to examine data and detect any suspicious activity. This makes it easier to quickly identify and resolve potential security issues.

Retiring the AWS Documentation on GitHub

Five years ago, AWS Documentation was made open source and hosted on GitHub. However, after a period of experimentation, AWS has decided to archive most of the repos starting the week of June 5th. All resources will now be devoted to directly improving the AWS documentation and website.

AWS Week in Review – New Open-Source Updates for Snapchange, Cedar, and Jupyter Community Contributions – May 15, 2023

Last week, there were many launches related to AWS. Here are a few announcements you should know about:

  • Updates to Snapchange, an open-source AWS project for snapshot-based fuzzing.
  • Updates to Cedar, the open-source policy language and authorization engine used by services such as Amazon Verified Permissions.
  • New AWS contributions to the open-source Jupyter community.

At KeyCore, we are committed to staying up to date with the latest news and developments in AWS and providing our customers with the best solutions. Our team of AWS Certified Solutions Architects and DevOps Engineers with years of experience can help you get the most out of the latest AWS releases. Contact us to learn more.

Read the full blog posts from AWS

Containers

Exploring the Effect of Topology Aware Hints with Amazon Elastic Kubernetes Service

Amazon EKS version 1.24 introduced Topology Aware Hints (TAH) as a feature that can be used to optimize traffic routing within an Amazon EKS cluster. This post will explore the effects of using TAH on reducing latency and inter-AZ data transfer costs when using multiple AZs, and how it works in Amazon EKS.

What is Topology Aware Hints?

Topology Aware Hints (TAH), introduced with Amazon EKS version 1.24, provides a mechanism that tries to keep traffic closer to its origin by routing it within the same Availability Zone (AZ) where possible. TAH can be used with Amazon EKS to optimize the routing of traffic between pods within a cluster that spans multiple AZs.

How Does it Work?

When TAH is enabled for a Service, traffic is routed to an endpoint in the caller’s AZ where possible. This is accomplished by the EndpointSlice controller attaching a zone hint to each endpoint; when routing traffic, kube-proxy uses these hints to prefer endpoints in the client’s own AZ. The goal of using TAH is to reduce latency and inter-AZ data transfer costs by keeping traffic close to its origin.
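In Kubernetes 1.24, hints are opted into per Service via an annotation. As a minimal sketch using the official Python client (the Service name and namespace are placeholders):

```python
from kubernetes import client, config

# Opt a Service in to Topology Aware Hints by setting the Kubernetes 1.24
# annotation; the EndpointSlice controller then populates zone hints.
config.load_kube_config()
v1 = client.CoreV1Api()

patch = {
    "metadata": {
        "annotations": {"service.kubernetes.io/topology-aware-hints": "auto"}
    }
}
v1.patch_namespaced_service(name="my-service", namespace="default", body=patch)
```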

Benefits of Using TAH with Amazon EKS

Using Topology Aware Hints with Amazon EKS provides two main benefits. First, it can reduce latency, because traffic is served from an AZ close to its origin rather than hopping between zones. Second, it can reduce inter-AZ data transfer costs, because cross-AZ traffic is avoided where possible. Both effects are most pronounced in clusters that span multiple AZs, and together they can result in improved performance and faster response times.

How Can KeyCore Help?

KeyCore is the leading Danish AWS Consultancy and can help you explore the effects of using Topology Aware Hints with Amazon EKS. Our professional services team can help you to deploy and configure TAH in your Amazon EKS cluster, as well as provide guidance on how to best optimize the traffic routing within your cluster. In addition, our managed services team can help you with the ongoing management and monitoring of your Amazon EKS cluster, ensuring that TAH is configured correctly and that the traffic is being routed optimally.

Read the full blog posts from AWS

AWS Quantum Technologies Blog

Amazon Braket Launches IonQ Aria with Built-In Error Mitigation

Amazon Braket recently launched IonQ Aria, a quantum computer based on trapped-ion technology. It is the first quantum processing unit (QPU) available on Amazon Braket to feature built-in error mitigation.

What is IonQ Aria?

IonQ Aria is a large-scale trapped-ion system that holds ions in an ultra-high vacuum and uses laser light to control their quantum states. It is the first system on Amazon Braket to feature built-in error mitigation, which runs several symmetrized variants of a circuit and aggregates their results (a technique IonQ calls debiasing) so that quantum circuits can be measured with higher accuracy.
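Running a circuit on Aria from the Braket SDK looks like the sketch below. The device ARN shown is the one publicized for Aria at launch, but verify it (and region availability) in the Braket console before use.

```python
from braket.aws import AwsDevice
from braket.circuits import Circuit

# Submit a Bell-pair circuit to IonQ Aria through Amazon Braket.
device = AwsDevice("arn:aws:braket:us-east-1::device/qpu/ionq/Aria-1")

bell = Circuit().h(0).cnot(0, 1)
task = device.run(bell, shots=1000)
print(task.result().measurement_counts)
```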

What Benefits Does IonQ Aria Offer?

IonQ Aria offers a range of benefits for quantum computing users. Its error mitigation technology provides more accurate results, allowing users to develop and prototype quantum algorithms more effectively. IonQ Aria is also among the most capable QPUs currently available on Amazon Braket, making it well-suited for tackling complex problems. Additionally, IonQ Aria is easy to use, as it can be accessed directly from Amazon Braket, a fully managed quantum development environment.

What Can IonQ Aria Be Used For?

With its high quantum volume and error mitigation technology, IonQ Aria is a powerful tool for developing and running complex quantum algorithms. It is well-suited for tasks such as chemistry simulations, materials discovery, and machine learning. Additionally, IonQ Aria can be used for a wide range of research applications, including cryptography, optimization, and quantum physics.

How Can KeyCore Help With IonQ Aria?

KeyCore is the leading Danish provider of professional services and managed services for AWS. We have the expertise to help you get the most out of IonQ Aria, from helping you set up the quantum computing environment to providing guidance on how to optimize your quantum algorithms. We also provide ongoing support to ensure your quantum computing environment is running smoothly. With our help, you can get the most out of IonQ Aria and reap the benefits of quantum computing.

Read the full blog posts from AWS

AWS Smart Business Blog

How Customer Interaction Analytics Can Help SMBs Grow

Small and Medium Businesses (SMBs) are increasingly leveraging customer data and analytics to better understand their customers and achieve their goals. A 2022 survey conducted by Gartner found that 84 percent of customer service and service support leaders regarded customer data and analytics as an “extremely or very important” factor for their success in 2023. This guide provides information about customer interaction analytics and how they can help SMBs grow.

What is Customer Interaction Analytics?

Customer interaction analytics is a technology that enables SMBs to better understand customer behavior and interaction with their business. It allows businesses to track customer interactions across multiple channels, including websites, emails, phone calls, and more. By tracking customer interactions, businesses can gain insights into customer behavior and preferences. This data can then be used to make informed decisions about how to optimize customer experiences and drive more sales.

Benefits of Customer Interaction Analytics

Customer interaction analytics has numerous benefits for SMBs, including:

  • Identifying customer segments and preferences
  • Improving customer engagement by providing personalized experiences
  • Enhancing customer relationships and loyalty
  • Gaining a better understanding of customer needs and expectations
  • Improving customer service and support
  • Increasing customer retention and sales

By leveraging customer interaction analytics, SMBs can gain valuable insights into customer behavior and preferences and use this information to optimize their customer service and increase sales.

How KeyCore Can Help

At KeyCore, we provide comprehensive AWS solutions to help businesses maximize their use of customer interaction analytics. Our expert AWS Consultants are here to help you determine the best customer interaction analytics solutions for your business and implement them in an efficient and cost-effective way. From setting up the necessary infrastructure to managing customer data, we have the expertise and resources to ensure your customer data and analytics solutions are up and running in no time. Contact us today to find out more about how KeyCore can help you grow your business with customer interaction analytics.

Read the full blog posts from AWS

Official Database Blog of Amazon Web Services

Official Database Blog of Amazon Web Services

Backup Strategies for Amazon DynamoDB

Backups are an important part of any disaster recovery strategy, and Recovery Point Objective (RPO) and Recovery Time Objective (RTO) parameters are key to getting them right. The most important question when discussing databases is: how will we back up and restore our data? Amazon DynamoDB offers built-in security, continuous backups, automated multi-Region replication, a 99.999% availability SLA, and data import and export tools to ensure your backup strategy supports your needs.

AWS Backup simplifies the process of creating and restoring backups, and automates the backup according to the policy you define. Through the use of AWS Backup, you can create an easy-to-manage, centralized system to store your data safely and securely. AWS also provides a range of other services to help you automate and monitor your backup process, including AWS Data Pipeline, AWS CloudFormation, Amazon CloudWatch, and AWS Identity and Access Management (IAM). With these services, you can scale your backup process to meet your specific requirements.
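As a small example of the continuous-backup side, point-in-time recovery can be switched on per table with one boto3 call (the table name is a placeholder):

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Enable point-in-time recovery (continuous backups) for a table, allowing
# restores to any second within the preceding 35 days.
dynamodb.update_continuous_backups(
    TableName="orders",  # placeholder table name
    PointInTimeRecoverySpecification={"PointInTimeRecoveryEnabled": True},
)
```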

At KeyCore, our expert team of cloud engineers can help you design and implement the best backup strategy for your organization. We can help you leverage AWS Backup to quickly and easily create and restore backups, and automate the backup process according to the policies you define. With our expertise in CloudFormation, Data Pipeline, and CloudWatch, we can ensure that your backups are running smoothly and efficiently.

How Broadridge Used Amazon Managed Blockchain to Build a Private Equity Lifecycle Management Solution

Broadridge Financial Solutions (NYSE: BR), a global Fintech leader with more than $5 billion in revenues, provides the critical infrastructure that powers investing, corporate governance, and communications to enable better financial lives. To power their private equity lifecycle management solution, Broadridge leveraged the scalability and flexibility of Amazon Managed Blockchain.

The Amazon Managed Blockchain allowed Broadridge to instantly join a blockchain network of their choice and securely manage their own network nodes. This enabled Broadridge to quickly scale to meet the needs of their solution. It also provided an easy way for their clients to quickly join, manage, and leave their network of choice. Additionally, Broadridge was able to take advantage of Amazon Managed Blockchain’s robust security features, such as secure encryption of their data at rest and in transit.

At KeyCore, our experienced team of cloud engineers can help you design and implement blockchain-based solutions for your organization. We can leverage the scalability and flexibility of Amazon Managed Blockchain to quickly and easily set up blockchain networks for your specific needs. With our expertise in security, we can ensure that your data is securely encrypted both at rest and in transit. We can also help you monitor and maintain your blockchain networks so that they remain secure and reliable.

How Deliveroo Migrated Their Dispatcher Service to Amazon DynamoDB

Deliveroo operates a hyperlocal, three-sided marketplace, connecting local consumers, restaurants and grocers, and riders to fulfill purchases in under 30 minutes. To support their rapid growth, Deliveroo migrated their Dispatcher service to Amazon DynamoDB, a serverless, key-value NoSQL database. This enabled Deliveroo to take advantage of Amazon DynamoDB’s built-in security, continuous backups, automated multi-Region replication, 99.999% availability SLA, and data import and export tools.

At KeyCore, our team of experienced cloud engineers can help you migrate your services to AWS. We can leverage the built-in security and reliability of Amazon DynamoDB to ensure your services are running smoothly and efficiently. With our expertise in data migration, we can also help you move your data quickly and easily. Additionally, we can provide guidance on how to best utilize Amazon DynamoDB for your specific needs.

Automate the Configuration of Amazon RDS Custom for SQL Server Using AWS Systems Manager

In our previous post Use a self-hosted Active Directory with Amazon RDS Custom for SQL Server, we explained the manual steps to join Amazon Relational Database Service (Amazon RDS) Custom for SQL Server to a self-hosted Active Directory. To ensure that any changes are preserved, we highlighted the importance of using repeatable, idempotent scripts.

AWS Systems Manager offers a range of tools to automate the configuration of Amazon RDS Custom for SQL Server, such as the AWS-RunPowerShellScript and AWS-RunShellScript documents. AWS Systems Manager can also be used to automate the process of joining a self-hosted Active Directory. With AWS Systems Manager, you can reduce manual steps, minimize errors, and quickly deploy and maintain the configurations you need.
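For instance, a configuration step can be pushed to the underlying instance with a Run Command document. The sketch below is illustrative; the instance ID and the PowerShell command are placeholders:

```python
import boto3

ssm = boto3.client("ssm")

# Run a repeatable configuration check on the instance behind RDS Custom
# for SQL Server using the AWS-RunPowerShellScript document.
response = ssm.send_command(
    InstanceIds=["i-0123456789abcdef0"],  # placeholder instance ID
    DocumentName="AWS-RunPowerShellScript",
    Parameters={"commands": [
        "Get-WmiObject Win32_ComputerSystem | Select-Object Domain"
    ]},
)
print(response["Command"]["CommandId"])
```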

At KeyCore, our team of cloud engineers can help you automate the configuration of Amazon RDS Custom for SQL Server using AWS Systems Manager. We can help you create the scripts you need to join a self-hosted Active Directory, and automate the process of maintaining the configurations. With our expertise in AWS Systems Manager, we can ensure that your configurations are running smoothly and efficiently.

Migrate Generated Columns to PostgreSQL Using AWS Database Migration Service

AWS Database Migration Service (AWS DMS) can be used to migrate generated columns to PostgreSQL implementations such as Amazon RDS for PostgreSQL or Amazon Aurora PostgreSQL-Compatible. For the source database, AWS DMS captures the IDENTITY column as a regular column. For the target database, it creates the IDENTITY column with the same name and data type as the source column.

At KeyCore, our team of experienced cloud engineers can help you migrate your data quickly and easily. We can leverage AWS DMS to migrate generated columns to PostgreSQL implementations such as Amazon RDS for PostgreSQL or Amazon Aurora PostgreSQL-Compatible. With our expertise in data migration, we can ensure that your data is migrated with minimal disruption and downtime.

Motivations for Migration to Amazon DynamoDB

Amazon DynamoDB was built working backward from the needs of external customers and internally at Amazon.com to overcome the scale and performance limitations of traditional databases. It is a fully managed, serverless, key-value NoSQL database for single-digit millisecond performance at any scale. It offers built-in security, continuous backups, automated multi-Region replication, 99.999% availability SLA, and data import and export tools.

At KeyCore, our team of cloud engineers can help you migrate to Amazon DynamoDB. We can help you leverage the built-in security, continuous backups, automated multi-Region replication, and 99.999% availability SLA of Amazon DynamoDB. With our expertise in data migration, we can ensure that your data is migrated with minimal disruption and downtime.

Understand Amazon Aurora High Availability and Disaster Recovery from an Oracle Perspective

When data exists in a database, the vendors of these systems produce methods to safeguard the asset. Amazon Aurora provides high availability (HA) and disaster recovery (DR) features to protect your data. This post compares the HA and DR features of Amazon Aurora to Oracle, with a focus on the Aurora disk subsystem and how this key innovation allows Amazon Aurora Global Database to deliver performance and availability.

At KeyCore, our team of cloud engineers can help you understand the HA and DR features of Amazon Aurora. We can leverage the Aurora disk subsystem and Amazon Aurora Global Database to ensure your data is reliably and securely stored. With our expertise in CloudFormation, we can also help you automate the setup and maintenance of your databases.

Handle IDENTITY Columns in AWS DMS: Part 1 & Part 2

In relational database management systems, an IDENTITY column is a column in a table that is made up of values generated automatically by the database at the time of data insertion. In Part 1 and Part 2 of this series, we discussed how the IDENTITY column is used in different relational database management systems and how AWS Database Migration Service (AWS DMS) handles tables with IDENTITY columns.

At KeyCore, our team of experienced cloud engineers can help you handle IDENTITY columns in your databases. We can leverage the capabilities of AWS DMS to capture the IDENTITY column as a regular column in the source database, and create the IDENTITY column with the same name and data type as the source column in the target database. With our expertise in data migration, we can ensure that your data is migrated with minimal disruption and downtime.

a-tune Accelerates Their AWS Migrations Using Migration Strategy and Implementation Plans From Amazon Database Migration Accelerator

a-tune offers data and analytics solutions to customers, and leveraged the capabilities of Amazon Database Migration Accelerator (Amazon DMA) to accelerate their migrations to AWS Databases and Analytics services. The Amazon DMA team helped a-tune design a migration strategy and implementation plans to ensure their migrations went smoothly.

At KeyCore, our team of cloud engineers can help you accelerate your migrations to AWS Databases and Analytics services. We can leverage the capabilities of Amazon DMA to design a migration strategy and implementation plans that are tailored to your specific needs. With our expertise in data migration, we can ensure that your migrations are running smoothly and efficiently.

Read the full blog posts from AWS

Microsoft Workloads on AWS

7 Strategies for Optimizing Microsoft Licensing on AWS

Moving enterprise workloads to the cloud has become a top priority for businesses, yet the costs associated with running Microsoft workloads on Amazon Web Services (AWS) can be a significant obstacle. Luckily, there are seven optimization strategies that can be implemented to help lower Microsoft licensing costs on AWS.

1. Leverage Shared Capacity

Shared Capacity allows businesses to purchase a set amount of compute power upfront and share it across multiple workloads. This helps customers get the most out of their Microsoft licenses by consolidating multiple server deployments into a single shared server.

2. Optimize Software Assurance

Software Assurance is an annual fee that entitles customers to a set of benefits including training, deployment assistance, and support. For customers with multiple deployments, Software Assurance can help reduce licensing costs.

3. Take Advantage of AWS Offerings

AWS offers a variety of services and products that can help customers reduce their Microsoft licensing costs. For example, AWS CloudFormation is a service that automates the provisioning of resources, which can help reduce the number of licenses needed. Additionally, AWS Lambda helps customers run code without having to manage servers, thus reducing the need for licenses.

4. Utilize License Mobility

License Mobility through Software Assurance allows customers to bring eligible Microsoft application licenses, such as SQL Server, to AWS. By reusing licenses they already own instead of buying new ones, customers can significantly reduce their licensing costs.

5. Consider Managed Services

Managed services from AWS allow customers to get the most out of their Microsoft licenses and optimize their workloads. AWS Managed Microsoft AD, for example, can help customers more easily manage user accounts and access to resources.

6. Take Advantage of EC2 Reserved Instances

EC2 Reserved Instances provide customers with a capacity reservation on Amazon EC2. This reduces costs by up to 75% over on-demand pricing. Reserved Instances can also be used for Microsoft licenses to help customers save money.

7. Utilize KeyCore Professional Services

KeyCore Professional Services can help customers optimize their Microsoft licensing costs. Our experts can help identify the best licensing strategies and cost-saving opportunities to ensure customers are getting the most out of their investments.

By leveraging these seven strategies, businesses can reduce their Microsoft licensing costs on AWS and get the most out of their investments. KeyCore’s Professional Services team can help you identify which strategies fit your workloads. Contact us today to learn more.

Read the full blog posts from AWS

Official Big Data Blog of Amazon Web Services

Simplify Data Orchestration with Amazon MWAA

Organizations of all sizes are trying to unlock the value of their data. To do that, they often have to store their data in different warehouses, search systems, NoSQL databases, and machine learning services. With all these different systems, it can be hard to keep track of the data and move it between systems for analysis. This is where Amazon Managed Workflows for Apache Airflow (Amazon MWAA) comes in.

What is Amazon MWAA?

Amazon MWAA is a managed service that helps organizations simplify complex data processing tasks and orchestrate workflows across different analytics systems. It provides scalability, availability, and security without the need to manage the underlying infrastructure. In April of 2023, Amazon MWAA added support for startup scripts.

How Does Amazon MWAA Work?

Amazon MWAA makes it easier to orchestrate jobs and monitor workflows across services such as AWS Glue, Amazon SageMaker, and AWS Step Functions. It provides features like job monitoring, failure notification, and job scheduling, and exposes the familiar Airflow web interface for managing workflows.
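A workflow in Amazon MWAA is just an Airflow DAG. As a minimal sketch (the Glue job name and region are placeholders), a nightly job orchestration might look like this:

```python
from datetime import datetime

from airflow import DAG
from airflow.providers.amazon.aws.operators.glue import GlueJobOperator

# A nightly DAG that triggers an existing AWS Glue job; MWAA runs the
# scheduler and workers, so only this file needs to be uploaded.
with DAG(
    dag_id="nightly_glue_etl",
    start_date=datetime(2023, 5, 15),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    run_etl = GlueJobOperator(
        task_id="run_etl",
        job_name="my-glue-job",  # placeholder Glue job name
        region_name="eu-west-1",
    )
```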

AWS Glue 4.0 for Apache Spark

AWS Glue is a serverless data integration service that helps make data preparation faster and easier. It enables data integration between different data stores, so organizations can break down data silos. AWS Glue 4.0 for Apache Spark brings upgraded engines, including Apache Spark 3.3 and Python 3.10, along with native support for open table formats such as Apache Hudi, Delta Lake, and Apache Iceberg.

Analysis with Amazon QuickSight

Analysis is an important part of data exploration and Amazon QuickSight provides advanced analytics capabilities for organizations. In this guest post, Softbrain Co., Ltd. explains how e-Sales Manager, their SFA/CRM tool, is using Amazon QuickSight to provide powerful analytics capabilities to their sales customers.

Stream Data with Amazon MSK Connect

Amazon Managed Streaming for Apache Kafka (Amazon MSK) is a streaming platform for building enterprise data hubs. To make use of the pub/sub model for data distribution, Amazon MSK Connect makes it easier to publish and distribute data from Amazon MSK topics. Using open-source JDBC connectors, it supports streaming data flows from Amazon MSK to external databases, data warehouses, and data lakes.

Data-driven Transformation with Peloton and Amazon Redshift

Peloton used Amazon Redshift to unlock the value of its data and drive its data-driven transformation. To learn more about data insights, join AWS Data Insights Day on May 24, 2023.

Apache Hudi on Amazon EMR for Log Ingestion and GDPR Deletes

Logging is a critical part of application development and management. Zoom, in collaboration with AWS Data Lab, developed an architecture that allows for streaming log ingestion and efficient GDPR deletes using Apache Hudi on Amazon EMR.

Smart Sensor Data and Amazon QuickSight for Improved Power Utility Efficiency

Power outages and power quality issues can cost businesses millions of dollars in service interruptions. PG&E and Steve Alexander explain how they used smart sensor data and Amazon QuickSight to improve power utility efficiency.

KeyCore Can Help

At KeyCore, we have extensive experience in cloud engineering and data engineering that can help you get the most out of your data. Our AWS certified professionals can provide the right guidance on usage of the right AWS services and tools for your data orchestration and analytics needs. Contact us today to discuss how KeyCore can help your organization unlock the power of data.

Read the full blog posts from AWS

Networking & Content Delivery

Hosting Single Page Applications (SPA) with Tiered TTLs on CloudFront and S3

Many customers of KeyCore, the leading Danish AWS consultancy, use Amazon CloudFront and Amazon Simple Storage Service (Amazon S3) to deploy Single Page Applications (SPA): web applications created with React, Angular, Vue, etc. The development teams of these SPAs often face two seemingly contradictory requirements: users should experience low latency when downloading the web application, and the web application needs to be updated regularly.

Using CloudFront and S3 to Achieve Tiered TTLs

Tiered Time-To-Live (TTL) is a way to balance the need for low latency and frequent updates. Using CloudFront and S3, application assets are cached for a certain period of time, while other assets can be updated more frequently. By using tiered TTLs, a user’s experience can be improved without sacrificing the need to keep the application up to date.

To achieve tiered TTLs using CloudFront and S3, configure the S3 bucket to serve the static assets and the CloudFront distribution to use that bucket as an origin. Fingerprinted assets, such as JavaScript and CSS files whose filenames contain a content hash, get a Cache-Control header with a long time-to-live (TTL), because any new release ships them under new names. The application entry point, typically index.html, gets a short TTL instead, so users benefit from the low latency of cached objects while still picking up new releases quickly.
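A sketch of the upload step with boto3, assuming a build whose static assets carry content hashes in their filenames (bucket and file names are placeholders):

```python
import boto3

s3 = boto3.client("s3")
bucket = "my-spa-bucket"  # placeholder bucket name

# Fingerprinted assets can be cached for a long time, since a new release
# ships them under new filenames...
s3.upload_file(
    "dist/app.3f9a1c.js", bucket, "app.3f9a1c.js",
    ExtraArgs={"CacheControl": "public, max-age=31536000, immutable",
               "ContentType": "application/javascript"},
)

# ...while the entry point gets a short TTL so new releases propagate quickly.
s3.upload_file(
    "dist/index.html", bucket, "index.html",
    ExtraArgs={"CacheControl": "public, max-age=60",
               "ContentType": "text/html"},
)
```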

Maximizing Performance with CloudFront

CloudFront also provides additional performance benefits for single page applications. CloudFront can be used to serve static assets and dynamic content from the same origin, as well as to cache dynamic content. By using a combination of cache-control headers, query strings, and cache invalidation, dynamic content can be served with minimal latency. Furthermore, CloudFront can also be used to offload SSL/TLS termination and reduce the load on an origin server, allowing the application to scale more easily.

The Benefits of Tiered TTLs Using CloudFront and S3

Using tiered TTLs with CloudFront and S3 provides a number of benefits for single page applications. It allows users to experience low latency when downloading the web application, while still allowing the application to be kept up to date. Additionally, CloudFront can be used to offload SSL/TLS termination and reduce the load on an origin server, allowing the application to scale more easily. With KeyCore’s expertise in AWS, customers can leverage their services to deploy and manage their single page applications with Tiered TTLs on CloudFront and S3.

Read the full blog posts from AWS

AWS Compute Blog

Optimizing Costs with Amazon EC2 Spot Instances

Amazon EC2 Spot Instances offer a convenient way for customers to optimize their costs by using spare computing capacity in the AWS cloud at steep discounts compared to On-Demand prices. This can bring significant savings on compute costs, and is used by organizations like the National Football League (NFL) to run 4000 EC2 Spot Instances.

Control Spot Instance Availability

To make the most of Amazon EC2 Spot Instances, customers must understand how AWS manages the availability of spare capacity. AWS can interrupt Spot Instances with a two-minute notice when EC2 needs the capacity back. It’s important to plan for these interruptions and make sure that the applications running on Spot Instances can cope with them, for example by watching for the interruption notice as sketched below.
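One common building block is a small watcher that polls the instance metadata service for the interruption notice and triggers application-specific draining. A sketch (the IMDSv1 endpoint is shown for brevity; the drain hook is a placeholder):

```python
import time
import urllib.error
import urllib.request

# The metadata service returns 404 on this path until an interruption is
# scheduled; once it resolves, roughly two minutes remain.
NOTICE_URL = "http://169.254.169.254/latest/meta-data/spot/instance-action"

while True:
    try:
        with urllib.request.urlopen(NOTICE_URL, timeout=2) as resp:
            print("Interruption scheduled:", resp.read().decode())
            # drain_work_and_checkpoint()  # placeholder for app-specific cleanup
            break
    except urllib.error.HTTPError:
        pass  # 404: no interruption scheduled yet
    except urllib.error.URLError:
        pass  # metadata service unreachable (e.g. not running on EC2)
    time.sleep(5)
```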

Minimize Interruptions

To minimize interruptions, customers should be flexible about which instance types and Availability Zones they use, letting EC2 draw from the deepest capacity pools. Additionally, customers should ensure that they have the right Spot Instance types and sizes available for their workloads, as well as plenty of buffer capacity to absorb any potential interruptions.

Choose the Right Instances

When selecting the right Spot Instances for their workloads, customers should consider their burstable and non-burstable requirements. Burstable Spot Instances, such as the T3 family, can be ideal for workloads that require low cost compute resources that can handle short periods of high utilization.

Customers should also consider their concurrent spot requests. To increase the chances of getting the Spot Instances that they need, customers should distribute their requests across different Availability Zones and Spot Instance types.

Build Resilience into Your Architecture

To ensure the resilience of their application architectures, customers should consider using multiple Availability Zones and spread their workloads across multiple regions. Additionally, customers should add additional buffer capacity to their Spot Instance fleets to handle unexpected spikes in demand.

KeyCore Can Help

At KeyCore, we understand the challenges and opportunities of leveraging AWS Spot Instances to optimize compute costs. Our AWS certified team of experts can help you identify the right Spot Instance types and sizes for your workloads, and help you build a resilient architecture that can handle any potential interruptions. Contact us today to learn more about our professional services and managed services for Spot Instances.

Read the full blog posts from AWS

AWS for M&E Blog

IDN Media Leverages AWS IVS to Create IDN Live

IDN Media is an Indonesian platform that caters to Millennials and Gen Z, providing entertainment and information. Founded in 2014, the company has experienced rapid growth and has now launched IDN Live, a live streaming platform, on Amazon IVS. The platform acts as a connection between creators and communities, and uses the Jakarta point of presence to provide a high-quality, low-latency experience across networks.

Key Benefits of IDN Live

IDN Live has various advantages for content creators and viewers. It provides reliable streaming capabilities and scalability, giving viewers access to high-quality content that is available on demand. IDN Live also helps creators monetize content by allowing them to set up paywalls, creating an avenue for viewers to access premium content. Furthermore, IDN Live supports multiple concurrent live video streams, allowing viewers to watch multiple streams at once.

How AWS IVS Makes IDN Live Possible

Amazon IVS is a managed live streaming service that helps developers build and scale live streaming experiences. IVS handles the hosting and distribution, allowing creators to focus on creating quality content. IVS is also able to handle high volumes of concurrent viewers, making it a great service for streaming high-quality content. Additionally, IVS enables low-latency streaming, which is essential for streaming live events.
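Getting a live channel from IVS takes a single API call. A minimal boto3 sketch (the channel name is a placeholder):

```python
import boto3

ivs = boto3.client("ivs")

# Create a low-latency live channel. IVS returns the RTMPS ingest endpoint
# for the encoder, the playback URL for viewers, and a secret stream key.
response = ivs.create_channel(
    name="idn-live-demo",  # placeholder channel name
    latencyMode="LOW",
    type="STANDARD",
)

print(response["channel"]["ingestEndpoint"])
print(response["channel"]["playbackUrl"])
print(response["streamKey"]["value"])
```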

How KeyCore Can Help with Live Streaming

KeyCore can help customers create and deploy live streaming applications using AWS IVS. Our team of experts can help customers set up and configure IVS with the correct encoding settings, as well as assist in migrating existing live streaming applications to AWS. Furthermore, KeyCore can set up the necessary AWS infrastructure to ensure a secure and reliable streaming experience. We also offer professional services and managed services to help customers with the entire streaming process.

Read the full blog posts from AWS

AWS Storage Blog

Creating a Cross-Platform Distributed File System with Amazon FSx for NetApp ONTAP

Organizations need to control costs while dealing with exponentially growing data, and they would like to use the benefits of the cloud to leverage their existing on-premises file assets for a highly resilient hybrid enterprise file share. Amazon FSx for NetApp ONTAP is an enterprise-grade, cloud-native file storage service that provides the same shared file services found in NetApp’s on-premises ONTAP storage systems.

Amazon FSx for NetApp ONTAP is designed to help organizations achieve their hybrid cloud storage goals, offering both the power and familiarity of the ONTAP platform and the cost-effectiveness of the AWS Cloud. With Amazon FSx for NetApp ONTAP, organizations can use their existing on-premises storage assets to deploy a highly available enterprise-grade file system in the cloud, and access it with any ONTAP-compatible client.

Amazon FSx for NetApp ONTAP also helps organizations reduce their costs by providing an automated approach to managing capacity and performance, reducing the overhead associated with manual capacity and performance management. Additionally, Amazon FSx for NetApp ONTAP provides AWS Outposts-ready storage options that can enable organizations to deploy hybrid cloud storage solutions in their own data centers.
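Provisioning a file system is one API call. The sketch below shows a Multi-AZ deployment with boto3, where the subnet IDs, capacity, and throughput values are placeholders to size for your workload:

```python
import boto3

fsx = boto3.client("fsx")

# Create a Multi-AZ Amazon FSx for NetApp ONTAP file system.
fsx.create_file_system(
    FileSystemType="ONTAP",
    StorageCapacity=1024,  # GiB, placeholder sizing
    SubnetIds=["subnet-aaaa1111", "subnet-bbbb2222"],  # placeholder subnets
    OntapConfiguration={
        "DeploymentType": "MULTI_AZ_1",
        "ThroughputCapacity": 128,  # MB/s, placeholder sizing
        "PreferredSubnetId": "subnet-aaaa1111",
    },
)
```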

Maximizing Price Performance for Big Data Workloads Using Amazon EBS

Hadoop, an open-source framework used to store and process large datasets, has been an essential part of dealing with big data over the past decade. It lets users store structured, partially structured, or unstructured data of any kind. Amazon EBS is a cloud-based storage service that provides a flexible, low-cost solution for storing large amounts of data.

Amazon EBS is particularly well-suited to big data workloads because it can scale up and down as needed, allowing organizations to optimize their usage for cost and performance. It offers a range of storage options, including SSD and HDD volume types, to meet the needs of any workload. Amazon EBS also offers features designed to improve performance; gp3 volumes, for example, let you provision IOPS and throughput independently of capacity, which can boost throughput and reduce latency.

Amazon EBS also provides a number of other features to support big data workloads. It provides encryption at rest and in transit, making sure that data is always secure. It also supports high availability and scalability, allowing organizations to scale up their workloads as needed. Organizations can also take advantage of Amazon EBS snapshots to back up their data and recover quickly from any failures.

Accelerating GPT Large Language Model Training with AWS Services

GPT, or Generative Pre-trained Transformer, is a language model that has made a big impact in many different industries. GPT can generate human-like text for data analysis, reports, and decision-making in the fields of finance, healthcare, legal, marketing, and more.

AWS services can be used to accelerate GPT large language model training. Amazon SageMaker provides fully managed machine learning services that make it easy to train, deploy, and manage GPT models at scale. Amazon Elastic Compute Cloud (EC2) provides a range of compute resources to improve the performance of GPT training. AWS Deep Learning Containers, an optimized environment for deep learning, can also be used to accelerate GPT model training.

Amazon Elastic File System (Amazon EFS) can be used to store GPT model training data in the cloud and make it accessible to the training process. Amazon Relational Database Service (Amazon RDS) can hold structured training data behind a familiar SQL interface. Additionally, Amazon Simple Storage Service (Amazon S3) can be used to store and manage GPT models and training data in the cloud.

How Goldman Sachs Leverages AWS PrivateLink for Amazon S3

As a financial services company, Goldman Sachs (GS) stores a vast amount of data that must be secure and compliant with regulations. Goldman Sachs leverages Amazon Virtual Private Clouds (VPC) to provide secure environments for the deployment of resources within AWS, while also providing a secure connection to on-premises networks.

To further strengthen security, GS uses AWS PrivateLink to create private connections between AWS services and their VPCs. AWS PrivateLink creates a private connection between VPCs and Amazon S3, establishing a secure tunnel for data to be transferred. This helps to protect GS data from malicious users and any other external threats.

AWS PrivateLink also enables GS to monitor and manage the connections between their VPCs and Amazon S3, allowing them to quickly identify and address any security issues. Additionally, AWS PrivateLink helps GS to maintain compliance with GDPR, HIPAA, and other data privacy regulations.
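Creating such a private connection amounts to provisioning an interface VPC endpoint for S3. A hedged boto3 sketch, with VPC, subnet, and security group IDs as placeholders:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create an interface VPC endpoint (AWS PrivateLink) for Amazon S3 so that
# traffic to S3 stays on the AWS network instead of the public internet.
ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",              # placeholder VPC
    ServiceName="com.amazonaws.us-east-1.s3",
    SubnetIds=["subnet-aaaa1111"],              # placeholder subnet
    SecurityGroupIds=["sg-0123456789abcdef0"],  # placeholder security group
)
```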

At KeyCore, our team of experienced AWS consultants can help you to set up and maintain the secure connections between VPCs and Amazon S3. We have extensive experience in setting up AWS PrivateLink and can help you to create and manage secure connections between your VPCs and Amazon S3. We can also help you to ensure that your data is securely stored and transferred, and that your environment is meeting any necessary compliance requirements.

Read the full blog posts from AWS

AWS Architecture Blog

Using Amazon Cognito and Web3 with API Gateway for dApp Authentication

When developing a Decentralized Application (dApp) that must interact directly with AWS services like Amazon S3 or Amazon API Gateway, authorization of users must be done by granting them temporary AWS credentials. Amazon Cognito, in combination with the users’ digital wallet, can be used to obtain valid Amazon Cognito identities and temporary AWS credentials.

The Benefits of Amazon Cognito

Amazon Cognito is Amazon Web Services’ (AWS) application user identity and data synchronization service that helps developers securely store and access user data in the cloud. It also provides authentication and authorization for applications so that they can securely access AWS services.

Amazon Cognito is a great choice for dApp authentication because it supports a wide range of user authentication methods, including username/password and token-based authentication. It can also store user profiles and synchronize user data across devices and applications. Additionally, Cognito supports federation with external identity providers, allowing users to authenticate with their existing Amazon, Google, or Facebook accounts. Most importantly for dApps, it can generate temporary AWS credentials, which is essential for applications that require access to AWS services.

Using Web3 Proxy with Amazon API Gateway

In order to allow a dApp to interact with AWS services, Amazon API Gateway can be used as a proxy between the dApp and those services. The dApp calls the API Gateway endpoint using its Web3 tooling, and the gateway translates those calls for the underlying AWS services.

The API Gateway then makes the necessary calls to the AWS services on the user’s behalf and passes back the responses. This allows the dApp to securely access the AWS services without having to manage the authentication process itself.
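With developer-authenticated identities, the backend verifies the wallet signature itself and then mints temporary credentials. A sketch of the server-side exchange (the identity pool ID and developer provider name are placeholders from your Cognito configuration):

```python
import boto3

cognito = boto3.client("cognito-identity")

# 1) After verifying the user's wallet signature, exchange the wallet
#    address for a Cognito identity and an OpenID token.
token = cognito.get_open_id_token_for_developer_identity(
    IdentityPoolId="us-east-1:00000000-0000-0000-0000-000000000000",  # placeholder
    Logins={"login.my-dapp.example": "0xWalletAddressHere"},          # placeholder provider
)

# 2) Trade the OpenID token for temporary AWS credentials scoped by the
#    identity pool's IAM role.
creds = cognito.get_credentials_for_identity(
    IdentityId=token["IdentityId"],
    Logins={"cognito-identity.amazonaws.com": token["Token"]},
)
print(creds["Credentials"]["AccessKeyId"])
```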

KeyCore Can Help

At KeyCore, our experienced team of AWS experts is well-versed in leveraging the power of Amazon Cognito and Amazon API Gateway for dApp authentication and authorization. We can help you to design and implement a secure and reliable solution that meets your needs. Contact us to learn more.

Read the full blog posts from AWS

AWS Partner Network (APN) Blog

Unlock Business Value with AWS Partners

AWS Partner Network (APN) is a global community of companies and solutions providers that specialize in helping customers take full advantage of the AWS cloud. Through the APN, AWS customers can find the expertise, solutions, and services they need to power their business. In April, 154 new AWS Partners were added or renewed across specializations in workload, solution, and industry. In this blog, we explore how AWS Partners are helping customers unlock data, simplify processes, and accelerate security and monitoring, as well as introduce generative AI.

Unlock Mainframe Data with Precisely Connect and Amazon Aurora

Many businesses are looking to reduce costs and unlock data scale by transforming legacy systems into next-generation cloud and data platforms. To make this process easier, Precisely Connect offers a solution that integrates data seamlessly from legacy systems into AWS. Using the Precisely Connect solution, data in the form of sequential files, VSAM datasets, or databases like IMS and Db2 can be transferred to Amazon RDS, such as Amazon Aurora. This makes it possible to unlock mainframe data and move it to the cloud quickly and easily.

Simplifying Talent Acquisition Processes with Quantiphi and a Modern Data Strategy on AWS

Talent acquisition is a critical part of any business, and many companies are turning to online databases for talent sourcing and recruiting. Quantiphi provides a cloud-native data platform that facilitates the convergence of talent and recruiter performance data, allowing for key insights into the talent acquisition process. It does this through a serverless, fully-managed ETL pipeline, centralized lake house solution, and AWS. By leveraging AWS and Quantiphi’s platform, companies can more efficiently manage their talent acquisition processes.

Automate Security and Monitoring with Amazon EKS Blueprints, Terraform, and Sysdig

The biggest challenge for many businesses when adopting Kubernetes is a lack of in-house skills. To mitigate this gap, Sysdig launched an add-on for Amazon EKS as well as Sysdig EKS Blueprints, which enable organizations to deploy instrumented Kubernetes clusters using Terraform. This accelerates hands-on experience, provides a reproducible foundation to configure, provision, and destroy clusters easily, and automates security and monitoring.

Replicate SAP to AWS in Real-Time with Business Logic Intact Using BryteFlow

Getting SAP data into AWS in real-time enables businesses to gain valuable insights, realize competitive advantages, enhance sharing and collaboration, and improve operational performance. BryteFlow’s SAP Data Lake Builder on AWS offers a solution to extract and integrate SAP data on AWS for use cases like analytics, reporting, AI/ML, and IoT in real-time. This provides the opportunity to integrate data from SAP and non-SAP sources seamlessly.

Reinventing Your Customers’ Business with Generative AI on AWS

Generative AI has the potential to revolutionize businesses of all sizes and industries. To ensure customers realize the full value of generative AI offerings, AWS and partners work together to guide the development of business-enhancing innovations and solutions. Ruba Borno, VP, WW Channels & Alliances at AWS, explains the integral role AWS Partners play in helping customers unlock the power of generative AI.

Building a Cloud-Native Architecture for Vertical Federated Learning on AWS

DOCOMO Innovations focuses on federated learning, and on vertical federated learning (VFL) in particular. VFL can achieve better model performance through collaboration with other data providers, and DOCOMO Innovations has been investigating the algorithm and its implementation on AWS for real-world scenarios. Building a cloud-native architecture for VFL on AWS requires distributed machine learning techniques that keep data decentralized and never disclose it to the other parties while the model is built.

How KeyCore Can Help

At KeyCore, we understand the importance of unlocking business value with AWS Partners. Our team of AWS experts can provide the expertise and ensure your company is utilizing the latest AWS solutions and services to their full potential. With our professional services and managed services, we’ll help you make the most of the AWS Partner Network and ensure you’re getting the most out of your cloud environment. Contact us today to learn more.

Read the full blog posts from AWS

AWS HPC Blog

Benchmarking Oxford Nanopore Technologies on AWS

Oxford Nanopore Technologies has enabled direct, real-time analysis of long DNA or RNA fragments by monitoring changes in an electrical current as nucleic acids pass through a protein nanopore. The resulting signal is decoded into the specific DNA or RNA sequence by compute-intensive algorithms called basecallers. In a collaboration between G42 Healthcare, Oxford Nanopore Technologies, and AWS, this blog post presents benchmarking results for two of those basecallers, Guppy and Dorado, on AWS.

Guppy Basecaller Benchmarking

The Guppy basecaller is the default Oxford Nanopore basecaller for quick and accurate sequencing. It is computationally intensive, with extended basecalling times for longer reads. To benchmark the performance of Guppy on AWS, a range of Amazon EC2 instances was used, from c4 instances to p3dn.24xlarge instances.

The benchmarking results showed that the Guppy basecaller achieved a maximum speedup of 13.5x when using p3dn.24xlarge instances, in comparison to the c4.2xlarge instance. Additionally, the p3dn.24xlarge instances yielded an average speedup of 8.3x, with a sustained basecalling rate of 25,000 reads per second and 10–20-fold reductions in basecalling times for longer reads.

Dorado Basecaller Benchmarking

For more accurate, high-throughput basecalling, the Dorado basecaller is the go-to option for Oxford Nanopore sequencing. It can basecall at speeds 5–10 times faster than the Guppy basecaller. To benchmark the performance of Dorado on AWS, the same range of Amazon EC2 instances was used, from c4 instances to p3dn.24xlarge instances.

The benchmarking results showed that the Dorado basecaller achieved a maximum speedup of 22.9x when using p3dn.24xlarge instances, in comparison to the c4.2xlarge instance. Additionally, the p3dn.24xlarge instances yielded an average speedup of 14.1x, with a sustained basecalling rate of 40,000 reads per second and 40–50-fold reductions in basecalling times for longer reads.

Using AWS for Oxford Nanopore Basecalling

These benchmarking results demonstrate that using AWS for Oxford Nanopore basecalling can substantially reduce basecalling times. The benchmarking covered a range of EC2 instances, from c4 to p3dn.24xlarge. While the p3dn.24xlarge instances delivered the maximum speedup, cheaper instances can deliver comparable results depending on the sequencing read length.

At KeyCore, we specialize in helping our customers streamline their workflows and reduce costs with the help of cloud technology. Our team of AWS Certified Solutions Architects can assist with the setup and optimization of Oxford Nanopore basecalling infrastructure on AWS. Contact us to learn more about how KeyCore can help you get started with Oxford Nanopore basecalling on AWS.

Read the full blog posts from AWS

AWS Cloud Operations & Migrations Blog

AWS Cloud Operations & Migrations Blog – An Overview of Best Practices

Organizations increasingly rely on cloud services to power their applications and workloads. AWS offers a wide range of services to help organizations deploy, manage, and scale their cloud operations. To ensure optimal performance, reliability, and security, it is essential to follow best practices when setting up and managing cloud operations. This article provides an overview of best practices for building and managing cloud operations on AWS.

Building CIS Hardened Golden Images and Pipelines with EC2 Image Builder

To meet Center for Internet Security (CIS) Benchmark guidelines, customers need to create custom components to harden the operating systems used in their cloud operations. Until recently, customers had to navigate to the AWS Marketplace Console and search for compatible Amazon Machine Images (AMIs) for their image pipelines. Now, with EC2 Image Builder, customers can automate the process of creating custom images with the required components to ensure that their cloud operations meet security benchmarks. EC2 Image Builder provides a simple interface and tools to assemble custom images from a collection of available AWS and third-party components.
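
A minimal sketch of what this automation can look like with the AWS SDK for Python (boto3) is shown below: it registers a small custom component inline. The component name and its single step are illustrative placeholders, not actual CIS Benchmark content.

    import textwrap
    import boto3

    imagebuilder = boto3.client("imagebuilder")

    # A minimal component document in the YAML schema EC2 Image Builder
    # expects; the single hardening step is a placeholder.
    component_yaml = textwrap.dedent("""\
        name: extra-hardening
        description: Example hardening step
        schemaVersion: 1.0
        phases:
          - name: build
            steps:
              - name: DisableTelnet
                action: ExecuteBash
                inputs:
                  commands:
                    - systemctl disable telnet.socket || true
    """)

    response = imagebuilder.create_component(
        name="extra-hardening",  # hypothetical component name
        semanticVersion="1.0.0",
        platform="Linux",
        data=component_yaml,
    )
    print(response["componentBuildVersionArn"])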

Visualizing and Gaining Insights from VPC Flow Logs with Amazon Managed Grafana

Cloud infrastructure is becoming increasingly distributed and data-intensive. To gain better visibility into network traffic, organizations must analyze the increasing amount of data being transmitted across networks. With Amazon Managed Grafana, customers can visualize and gain insights into VPC flow logs. Amazon Managed Grafana provides an interactive dashboard to analyze network traffic and detect potential security threats or performance issues. It helps customers to monitor the health and performance of their cloud infrastructure.
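
Before Grafana can visualize anything, the flow logs themselves must be enabled. A minimal boto3 sketch, assuming hypothetical VPC, log group, and IAM role identifiers:

    import boto3

    ec2 = boto3.client("ec2")

    # Publish flow logs for a VPC to CloudWatch Logs, where Amazon Managed
    # Grafana can query them through its CloudWatch data source.
    ec2.create_flow_logs(
        ResourceIds=["vpc-0123456789abcdef0"],  # hypothetical VPC ID
        ResourceType="VPC",
        TrafficType="ALL",
        LogDestinationType="cloud-watch-logs",
        LogGroupName="/vpc/flow-logs",  # hypothetical log group
        DeliverLogsPermissionArn="arn:aws:iam::111122223333:role/flow-logs-role",
    )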

AWS Application Migration Service Best Practices

Large-scale cloud migrations involve many tasks, scaling complexities, manual processes, numerous tools, and many stakeholders. To simplify and speed up the process, organizations can use AWS Application Migration Service (AWS MGN), a service designed for complex migrations that require re-hosting. AWS MGN helps organizations move applications without rewriting them, automates the identification and analysis of application components, and moves the applications to the cloud quickly.

Monitoring Best Practices for AWS Outposts

AWS Outposts is designed to allow customers to run AWS infrastructure and services on-premises. This helps customers to manage their hybrid workloads with low latency access to on-premises systems, local data processing, data residency, and application migration with local system inter-dependencies. To ensure optimal performance, customers must monitor their Outposts using Amazon CloudWatch metrics and AWS Health events. This will help them to detect and diagnose any performance issues quickly.
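
As an illustration, the boto3 sketch below reads the ConnectedStatus metric from the AWS/Outposts CloudWatch namespace; the Outpost ID is a placeholder, and the metrics available depend on your Outposts configuration.

    import boto3
    from datetime import datetime, timedelta

    cloudwatch = boto3.client("cloudwatch")

    # An average ConnectedStatus below 1 indicates a connectivity problem
    # between the Outpost and its parent Region.
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/Outposts",
        MetricName="ConnectedStatus",
        Dimensions=[{"Name": "OutpostId", "Value": "op-0123456789abcdef0"}],
        StartTime=datetime.utcnow() - timedelta(hours=1),
        EndTime=datetime.utcnow(),
        Period=300,
        Statistics=["Average"],
    )
    for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
        print(point["Timestamp"], point["Average"])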

At KeyCore, we provide professional and managed services to help organizations build, deploy, and manage their cloud operations on AWS. Our team of AWS experts can help you ensure that your cloud operations are secure, reliable, and optimized for performance. We can help you set up automated pipelines for cost optimization, ensure compliance with various industry standards, and provide on-going monitoring and maintenance services. Contact us today to learn more about our services.

Read the full blog posts from AWS

AWS for Industries

TC Energy Improves Document Consistency and Asset Management

TC Energy operates an extensive energy infrastructure portfolio in North America, with a network of 93,300 km of natural gas pipelines supplying more than 25 percent of the daily clean-burning energy needs for millions of people in the region. To improve document consistency and asset management, TC Energy has implemented AWS services, allowing them to reduce costs and improve customer experience. AWS allows TC Energy to manage their data across different departments and locations, while providing secure access to the data, which is critical for the energy industry.

Discover Mining Data with Elsevier Geofacets on OSDU Data Platform

The Open Group OSDU Forum is aiding digital transformation in the energy industry by facilitating collaboration on the OSDU Data Platform. This platform helps break the barriers to innovation by allowing organizations to quickly find and access their data. Elsevier and AWS have teamed up to provide the OSDU Data Platform to customers, utilizing AWS cloud services to enable the discovery of mining data. This allows customers to gain insights from their data and develop new applications.

Introducing Amazon FinSpace with Managed kdb Insights

AWS has launched Amazon FinSpace with Managed kdb Insights, a new capability that makes it easy to configure, run, and manage kdb Insights on AWS. KX Systems’ kdb Insights is an analytics engine optimized for analyzing real-time and multi-petabyte historical time series data and is widely used in capital markets to power sophisticated investment decisions. Amazon FinSpace with Managed kdb Insights makes it easy to scale and process large amounts of data quickly and securely.

Top re:Invent 2022 Takeaways for the Advertising and Marketing Technology Industry

At re:Invent 2022, AWS made a range of new service, partner, and solution announcements that are particularly relevant to the advertising and marketing technology industry. These include AWS S3 Access Analyzer, which provides insights into who is accessing your data, and AWS Babylon, which provides low-latency speech recognition using natural language processing to analyze customer conversations.

Deploying Dynamic 5G Edge Discovery Architectures with AWS Wavelength

AWS Hybrid Cloud and Edge Computing services are expanding the global footprint of AWS infrastructure. In the United States alone, 19 AWS Wavelength Zones are now available, allowing developers to have access to low latency compute and storage services. The deployment of AWS Wavelength Zones has enabled the development of dynamic 5G Edge Discovery architectures, which can handle mission-critical real-time applications.

Arriva’s Data Journey: Building an Enterprise Data Hub with AWS

Arriva, a leading provider of passenger transport in Europe, had struggled to measure business performance due to a lack of consistent data sources. By utilizing AWS services, Arriva was able to build an enterprise Data Hub, allowing them to access their data from across different businesses and locations. This enabled them to begin to measure their business performance and make data-driven decisions.

DHgate Scales Low-Latency Live Streams with Amazon IVS

The retail landscape in China has shifted towards ecommerce and social media platforms, leading to an increase in online sales. DHgate is leveraging Amazon IVS to provide low latency live streams to their customers, making it easier for customers to interact with their products and services. Amazon IVS provides DHgate with a simple and cost-effective way to stream live content.

AWS Travel and Hospitality Competency Partners You Should Know: Sendbird

Tedd Evers, Global Partner Leader for Travel and Hospitality at Amazon Web Services, spoke with Sarang Paramhans from Sendbird about the importance of instant communication for businesses and their customers. Sendbird is an AWS Travel and Hospitality Competency Partner, providing a comprehensive platform for customer support, customer service, and engagement for travel and hospitality customers.

CPG Partner Conversations: Lemongrass’s Cloud-Native Approach Fuels CPG Companies

The COVID-19 pandemic has caused major disruptions for the CPG industry, including supply chain disruptions and labor shortages. Lemongrass provides cloud-native solutions to help CPG companies manage and optimize their operations. Lemongrass’s platform allows CPG companies to access their data across different locations, enabling them to make informed decisions about their business.

Marathon Oil Scales Intelligent Alerts to Over 4,000 Wells Using AWS Partner Seeq

With more than 4,000 wells to manage, Marathon Oil used AWS and AWS Partner Seeq to cut alert development time from months to hours. Seeq gives Marathon Oil the ability to process large amounts of data quickly and securely, making it easier to scale its intelligent alerts and make informed decisions.

At KeyCore, our AWS experts can help companies in the energy, advertising and marketing, travel and hospitality, and CPG industries take advantage of the power of AWS. We can help design and implement a cloud strategy that meets your needs, whether that is deploying dynamic 5G Edge Discovery architectures, building an enterprise Data Hub, or scaling intelligent alerts for your business. Contact our team of AWS experts today to learn more.

Read the full blog posts from AWS

AWS Messaging & Targeting Blog

Discover How to Test Email Sending and Environmental Monitoring with AWS

Testing the setup of your email sending infrastructure is essential to ensuring your application runs effectively. Monitoring and testing are necessary to verify this, and Amazon Web Services (AWS) offers several automation and monitoring tools that make both easier. In this blog post, we explore how to use AWS for email sending and environmental monitoring.

Setting Up Email Sending Infrastructure

When setting up your email sending infrastructure and its connections to APIs, it is important to verify, after any change to your sending pipeline, that your application still works as expected. This means testing both the sending process and the environment around it.
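
A simple smoke test of the sending path can be a single Amazon SES call from boto3. The addresses below are placeholders, and both must be verified identities while your account is still in the SES sandbox:

    import boto3

    ses = boto3.client("ses")

    # Send a test message after a pipeline change to confirm the sending
    # path still works end to end.
    ses.send_email(
        Source="sender@example.com",
        Destination={"ToAddresses": ["recipient@example.com"]},
        Message={
            "Subject": {"Data": "Deliverability smoke test"},
            "Body": {"Text": {"Data": "If you can read this, the pipeline works."}},
        },
    )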

Choosing the Right Domain for Optimal Deliverability

Selecting the right domain for the visible From header of your outbound messages is key for optimal deliverability when using Amazon Simple Email Service (SES). This blog post will guide you through the process of selecting the best domain to use with SES.

Having the right domain can impact the deliverability and authenticity of outgoing emails. SES provides built-in support for email authentication protocols such as DKIM, SPF, and DMARC, which are designed to ensure emails are received and authenticated as legitimate.

Setting Up EasyDKIM for a New Domain

Email authentication is a process of verifying the identity of the sender. This is an important step to ensure that emails are being sent from a verified sender and are not from a malicious source.

Easy DKIM is an Amazon SES feature that simplifies setting up DKIM for a domain: SES generates the signing keys and provides the DNS records to publish, making the deployment of DKIM records for a domain much easier.
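
As a sketch of the setup with boto3 (example.com standing in for your domain), SES returns a set of DKIM tokens that you publish as CNAME records in your DNS:

    import boto3

    ses = boto3.client("ses")

    # Ask SES to generate DKIM tokens for the domain, then publish one CNAME
    # record per token so receivers can verify the DKIM signature.
    tokens = ses.verify_domain_dkim(Domain="example.com")["DkimTokens"]
    for token in tokens:
        print(f"{token}._domainkey.example.com  CNAME  {token}.dkim.amazonses.com")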

How KeyCore Can Help

At KeyCore, we have experienced AWS consultants who can help you set up and manage your email sending infrastructure. Our engineers can assist with setting up and configuring an Amazon SES account, setting up EasyDKIM, configuring DNS records, and more. Contact our team today to learn more about how we can help you set up a reliable email sending infrastructure.

Read the full blog posts from AWS

AWS Marketplace

Leveraging DataMasque to Mask Healthcare Data for Regulatory Compliance with Amazon HealthLake

Healthcare data is subject to stringent regulations that require patient data to be anonymized or encrypted to protect patient privacy. In this blog post, we’ll cover how to use DataMasque’s template for Amazon HealthLake in order to mask healthcare data for regulatory compliance.

Overview of Masking Healthcare Data with DataMasque

DataMasque’s template for Amazon HealthLake is an easy-to-use solution for masking healthcare data for regulatory compliance. DataMasque offers functionality for replacing sensitive data fields with placeholder values, without revealing the patient’s identity. DataMasque also enables users to query and analyze masked data fields without having to worry about compliance issues.

Masking with DataMasque

In order to mask healthcare data, DataMasque provides users with a range of options for masking sensitive data fields. Users can choose to replace sensitive data fields with anonymized values, or they can opt to encrypt the data and store the encrypted values in a separate database. DataMasque also offers the ability to create custom masks, allowing users to specify the masking logic for each field.

DataMasque’s masking process is designed to ensure that patient data is not revealed, regardless of how the data is accessed. The masking process is performed at the data level, which means that queries of the masked data will not reveal the original values.
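
To make the idea concrete, here is a generic Python illustration of deterministic masking (not DataMasque’s actual API): equal inputs map to equal tokens, so masked records remain joinable and queryable without exposing the original values.

    import hashlib

    def mask_value(value: str, salt: str = "per-project-secret") -> str:
        # Deterministic pseudonymization: the same input always yields the
        # same token, but the original value cannot be read back.
        digest = hashlib.sha256((salt + value).encode()).hexdigest()[:12]
        return f"MASKED-{digest}"

    record = {"patient_name": "Jane Doe", "diagnosis_code": "E11.9"}
    masked = {**record, "patient_name": mask_value(record["patient_name"])}
    print(masked)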

KeyCore’s Services for Amazon HealthLake

At KeyCore, we provide professional services and managed services for Amazon HealthLake. Our experts can help you configure and deploy DataMasque’s template for Amazon HealthLake, as well as provide guidance on how to utilize DataMasque’s masking functionality to ensure compliance with healthcare regulations. We also offer ongoing support and maintenance services to ensure that your Amazon HealthLake installation is running smoothly.

By leveraging KeyCore’s services for Amazon HealthLake, you can be sure that your healthcare data is masked properly and that your installation is compliant with regulatory requirements. Contact us today to learn more about how KeyCore can help you with your Amazon HealthLake installation.

Read the full blog posts from AWS

The latest AWS security, identity, and compliance launches, announcements, and how-to posts.

The Latest AWS Security, Identity, and Compliance Launches and Announcements

Highlights from RSA Conference 2023
RSA Conference 2023 brought together thousands of cybersecurity professionals in San Francisco from April 24-27. The two keynote stages featured over 30 presentations from speakers such as renowned physicist Dr. Michio Kaku and Grammy-winning musician Chris Stapleton, with topics that moved beyond the traditional areas of security, identity, and compliance to include emerging best practices.

Your Guide to the Threat Detection and Incident Response Track at re:Inforce 2023
AWS re:Inforce returns to Anaheim, CA, on June 13-14, giving security builders a chance to gain the skills and confidence they need in the industry. With a full conference pass, attendees get access to sessions on the latest advancements in threat detection and incident response, as well as opportunities for networking and hands-on experience. Register now with the code secure150off to receive a limited-time $150 discount.

Spring 2023 SOC Reports Now Available with 158 Services in Scope
AWS continues to provide customers with assurance over security, availability, confidentiality, and privacy through the Spring 2023 System and Organization Controls (SOC) 1, 2, and 3 reports, which bring 158 services into scope. The reports cover October 1, 2022 through March 31, 2023 and provide assurance over the AWS control environment.

AWS Completes the 2023 Cyber Essentials Plus Certification and NHS Data Security and Protection Toolkit Assessment
AWS is proud to announce that it has completed the United Kingdom Cyber Essentials Plus certification and the National Health Service Data Security and Protection Toolkit (NHS DSPT) assessment. These certificates are valid for one year until March 28, 2024 and June 30, 2024, respectively.

Share and Query Encrypted Data in AWS Clean Rooms
The cryptographic computing feature of AWS Clean Rooms allows customers to run collaborative data-query sessions on sensitive data sets that live in different AWS accounts without having to share, aggregate, or replicate the data. This allows customers to work with each other’s data without compromising confidentiality or security.

At KeyCore, our team of experts is well-versed in the latest AWS security, identity, and compliance launches and announcements. We can help you protect your data and take advantage of the latest advancements in the industry. Contact us today to learn more about how we can help you secure your data with the latest AWS technologies.

Read the full blog posts from AWS

AWS Startups Blog

Autonomous Driving and AI/ML Cost Innovation With AWS

Autonomous driving technology holds immense potential to transform the future of mobility. In the automotive industry, disruptive startups like TIER IV are leveraging the power of AWS to build platforms for autonomous vehicles. Founded in Japan in 2015 by Shinpei Kato, TIER IV runs its platforms on AWS to deliver an open source software experience.

Training and Inference for Lower Costs

When startups begin to implement machine learning (ML) workloads, an important early consideration is how best to approach training and inference. Training is the process of building and tuning a model for a specific task by learning from existing data; inference is the process of using that model to make predictions on new input data. Over the last five years, AWS has invested in purpose-built accelerators to improve performance and reduce compute costs for ML workloads: the AWS Trainium and AWS Inferentia accelerators enable the lowest cost for model training and running inference in the cloud.

How KeyCore Can Help

At KeyCore, we help startups leverage the power of AWS to reduce cost and innovate with AI/ML. Our team of AWS professionals can help you optimize your ML workloads and determine the best strategies for training and inference. We offer both professional services and managed services to help you get the most out of your AWS platform. To learn more about KeyCore and our services, please visit www.keycore.dk.

Read the full blog posts from AWS

Front-End Web & Mobile

Accessing Private Networks with AWS Device Farm

Testing a mobile or web app on a real device often requires a secure connection to private endpoints. These endpoints may be hosted on AWS inside a VPC, on premises, at another cloud provider, or in a combination of these configurations. You may also want the host machines to which your devices are connected to be reachable only through a secure connection.

AWS Device Farm can connect real mobile devices from anywhere to private networks using AWS PrivateLink and AWS Client VPN. With AWS PrivateLink, Device Farm securely connects your real-device tests to your private endpoints; with AWS Client VPN, it connects the host machines to your private network.
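
On the PrivateLink side, Device Farm stores the connection details as a VPC endpoint (VPCE) configuration that test runs can reference. A hedged boto3 sketch, with all names and IDs as placeholders:

    import boto3

    # Device Farm's APIs are served from us-west-2.
    devicefarm = boto3.client("devicefarm", region_name="us-west-2")

    response = devicefarm.create_vpce_configuration(
        vpceConfigurationName="private-api-endpoint",
        vpceServiceName="com.amazonaws.vpce.us-west-2.vpce-svc-0123456789abcdef0",
        serviceDnsName="api.internal.example.com",
        vpceConfigurationDescription="Reach the staging API from real devices",
    )
    print(response["vpceConfiguration"]["arn"])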

Amplify Studio Form Builder: Storage Manager and Relationship Support

AWS Amplify announces the launch of two new features for Amplify Studio Form Builder: Storage Manager and Relationship support. This expansion of the Form Builder enables developers to create custom React forms that can be easily connected to Amplify Storage, which is powered by Amazon S3. Storage Manager allows developers to quickly store and retrieve data related to the forms they have created, while Relationship support allows developers to link multiple forms together.

AWS Amplify gives developers the tools to quickly create and deploy web and mobile applications. With Storage Manager, forms can store and retrieve files in Amplify Storage out of the box, and Relationship support makes it straightforward to link multiple forms together and build complex forms with ease.

At KeyCore, we can help you get the most out of AWS Amplify’s Form Builder. We have the experience and expertise to ensure that your applications are utilizing the features of Form Builder to their fullest potential. Our certified AWS consultants can help you to implement the features of Form Builder, as well as other AWS services. Contact us today to learn more about how we can help you get the most out of your web and mobile applications.

Read the full blog posts from AWS

Innovating in the Public Sector

Public sector organizations are increasingly providing digital services to citizens and taking advantage of the cloud to secure and optimize the delivery of public services. In this blog, we discuss two case studies of public sector organizations using the cloud: the first example covers strategic government continuity through cloud adoption, and the second example addresses the optimization of operations for ground-based, extremely large telescopes. Lastly, we will discuss the new Software Marketplace and Related Cloud Services cooperative contract from OMNIA Partners.

Creating a Strategic Approach to Government Continuity

Moving digital assets to the cloud is an essential first step for governments to secure their public services against disruption. However, this transition can bring challenges that are more organizational than technical. KeyCore has worked with a range of public sector customers and uncovered three key takeaways:

Leveraging Cloud Solutions to Increase Business Continuity: Moving to the cloud can provide a reliable, cost-effective way to ensure public services don’t suffer from large-scale disruptions. Cloud computing offers governments the ability to flexibly and easily scale their infrastructure to meet demand and free up technical resources to support other key initiatives.

Developing a Strategic Roadmap: To ensure a successful cloud transition, it’s important to develop a clear roadmap with clear goals and objectives, taking into account the specific needs and challenges of the public sector organization. This roadmap should include a thorough assessment of the existing IT infrastructure, an evaluation of the cloud computing options available, and an analysis of the challenges of migration.

Enabling Cloud Adoption: The successful adoption of cloud computing requires the involvement of stakeholders from across the organization. It’s important to involve IT professionals, but also people from other departments such as procurement, finance, and legal. This will help ensure the cloud transition is successful and that all stakeholders understand the benefits and challenges of cloud migration.

Optimizing Operations for Ground-Based, Extremely Large Telescopes

Ground-based, extremely large telescopes (ELTs) such as the Giant Magellan Telescope will play a key role in modern astronomy by providing clear, detailed observations of the universe. ELTs generate large amounts of data, however, and managing this data while supporting optimal performance can be challenging. AWS offers a suite of cloud-based solutions to address these challenges and streamline operations for ELTs.

Optimizing Data Storage: AWS provides solutions for the secure storage of data, such as Amazon S3 and Amazon EFS, which are designed to provide scalability, durability, and availability for ELT data. Additionally, AWS tools such as AWS Snowball and AWS Snowmobile can be used to securely move large amounts of data into and out of the AWS cloud.

Improving Data Management and Processing: AWS solutions such as Amazon Redshift, Amazon EMR, and Amazon SageMaker can be used to manage and process ELT data quickly and cost-effectively, providing scalability and increased performance.

Advanced Monitoring and Remote Continuity: AWS also provides advanced monitoring solutions, such as Amazon CloudWatch and AWS CloudTrail, that can be used to track the performance and health of ELT operations and ensure optimal performance. Additionally, AWS can provide remote continuity solutions that enable ELT operations to be securely continued in the event of a disruption.
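
As one concrete illustration of the monitoring point above, the boto3 sketch below raises a CloudWatch alarm when a hypothetical SQS queue holding incoming observation data backs up; the queue name, threshold, and SNS topic are all assumptions.

    import boto3

    cloudwatch = boto3.client("cloudwatch")

    # Page the operations team when the ingest backlog stays high for
    # three consecutive five-minute periods.
    cloudwatch.put_metric_alarm(
        AlarmName="elt-ingest-backlog",
        Namespace="AWS/SQS",
        MetricName="ApproximateNumberOfMessagesVisible",
        Dimensions=[{"Name": "QueueName", "Value": "elt-ingest"}],
        Statistic="Average",
        Period=300,
        EvaluationPeriods=3,
        Threshold=1000,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=["arn:aws:sns:us-east-1:111122223333:ops-alerts"],
    )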

AWS Marketplace Announces New Cooperative Contract Available Through OMNIA Partners

AWS recently announced a new nationwide cooperative contract for AWS services, including AWS Marketplace, through OMNIA Partners. This contract is available to OMNIA Partners participating agencies, including state and local government agencies, public and private K12 school districts, and higher education institutions. The availability of this contract is an opportunity for organizations in the public sector to gain access to the capabilities of AWS at an attractive rate.

At KeyCore, we understand the challenges that the public sector is facing and have extensive experience helping organizations in this sector take advantage of the cloud to streamline their operations and optimize their services. Our team of certified AWS Solutions Architects can help public sector organizations develop an effective cloud strategy, assess the options available, and implement cloud solutions that meet their needs and budget. Contact us today to learn more about how we can help you take advantage of the cloud.

Read the full blog posts from AWS

The Internet of Things on AWS – Official Blog

Managing Device State and Certificates with AWS IoT Device Shadow Service and Greengrass

When developing Internet of Things (IoT) applications, developers often need to manage the state of IoT devices either locally or remotely. An example is a smart home radiator, a device where you can adjust the temperature (state) locally from a control panel, or remotely by triggering a temperature adjustment message. The challenge is to implement a robust mechanism that can handle the device state efficiently.

AWS IoT Device Shadow Service

AWS IoT Device Shadow Service is a useful tool for managing the state of devices connected to the AWS IoT platform. It enables users to store, retrieve, and manage the current state of IoT devices. With the Shadow Service, customers can store metadata and values of a device in a “shadow” document. This document reflects the current state of the device, which can be updated with the latest information from the device. The AWS IoT Device Shadow Service allows customers to manage the state of their IoT devices from anywhere.
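
Returning to the radiator example, a minimal boto3 sketch of setting and reading a shadow follows; the thing name is a placeholder:

    import json
    import boto3

    # The data-plane client for shadows is "iot-data", not "iot".
    iot_data = boto3.client("iot-data")

    # Report a desired temperature; the device sees the delta between
    # "desired" and "reported" the next time it syncs its shadow.
    iot_data.update_thing_shadow(
        thingName="livingroom-radiator",  # hypothetical thing name
        payload=json.dumps({"state": {"desired": {"temperature": 22}}}).encode(),
    )

    shadow = json.loads(
        iot_data.get_thing_shadow(thingName="livingroom-radiator")["payload"].read()
    )
    print(shadow["state"])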

AWS IoT Greengrass

AWS IoT Greengrass is a platform that brings the cloud closer to edge devices to support applications requiring local data processing and low latency. To ensure the security of edge devices, AWS IoT Greengrass provides certificate rotation capabilities with the Certificate Rotator component. With this, customers can create, rotate, and manage the device certificates used to connect their devices to the AWS IoT Greengrass Core.
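
Rolling such a component out to a core device can be done with a Greengrass deployment. The boto3 sketch below is an assumption-laden illustration: the target ARN, component name, and version must be checked against the component registry in your own account.

    import boto3

    greengrass = boto3.client("greengrassv2")

    # Deploy a certificate-rotation component to one Greengrass core device.
    # The component name and version here are placeholders.
    greengrass.create_deployment(
        targetArn="arn:aws:iot:us-east-1:111122223333:thing/greengrass-core",
        deploymentName="certificate-rotation",
        components={
            "aws.greengrass.labs.CertificateRotator": {"componentVersion": "1.0.0"},
        },
    )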

KeyCore Services

At KeyCore, we provide professional and managed services for companies who need help setting up and managing their AWS IoT Device Shadow Service and AWS IoT Greengrass. We have the expertise and industry-specific experience to help you deploy and maintain your IoT infrastructure. With our help, you can be sure that your IoT device state is managed efficiently and securely.

Read the full blog posts from AWS

AWS Open Source Blog

Automation of Open Source Mail Server Deployment on Amazon Web Services

Open source mail servers can be difficult and time-consuming to deploy and maintain, and without proper backups they are prone to data loss. This guide provides an overview of how to automate the deployment of an open source mail server on Amazon Web Services (AWS) and how to quickly and effortlessly restore it from a backup.

Benefits of Automated Deployment on AWS

Deploying an open source mail server on AWS provides several advantages. For one, it allows for scalability since the server can be easily added to or removed from the AWS infrastructure based on usage needs. Additionally, AWS provides a wide range of services that can be combined to enhance the server’s performance and data security. Finally, by leveraging AWS’s automated deployment processes, users can save both time and resources.

Automating Deployment on AWS

The process of automating deployment on AWS involves the following steps. First, create an Amazon Machine Image (AMI) that contains all the software components needed to run the mail server. Second, create a Virtual Private Cloud (VPC) to host the server, and configure its security and networking settings, including a security group that ensures only authorized users can access the mail server. Third, launch an Amazon Elastic Compute Cloud (EC2) instance based on the AMI inside the VPC. Finally, set up the EC2 instance to run the mail server software and configure it for the specific environment.
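
As a minimal sketch of the launch step with boto3, where the AMI, subnet, and security group IDs are placeholders for the resources created in the earlier steps:

    import boto3

    ec2 = boto3.client("ec2")

    # Launch the mail server from the prepared AMI inside the VPC.
    ec2.run_instances(
        ImageId="ami-0123456789abcdef0",
        InstanceType="t3.small",
        MinCount=1,
        MaxCount=1,
        SubnetId="subnet-0123456789abcdef0",
        SecurityGroupIds=["sg-0123456789abcdef0"],
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "Name", "Value": "mail-server"}],
        }],
    )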

Restoring from a Backup

Once the open source mail server is deployed on AWS, it is important to regularly back up the server to ensure that data is not lost in the event of an outage. AWS provides several options for backing up the server, including Amazon EBS and Amazon S3. Additionally, AWS provides tools such as Amazon Data Lifecycle Manager and AWS Backup to automatically create and manage backups. To restore from a backup, simply select the backup file and follow the instructions provided by the AWS console.
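
The simplest building block is an EBS snapshot of the volume that holds the mailboxes; in practice you would schedule this through Amazon Data Lifecycle Manager or AWS Backup rather than run it by hand. A sketch with a placeholder volume ID:

    import boto3

    ec2 = boto3.client("ec2")

    snapshot = ec2.create_snapshot(
        VolumeId="vol-0123456789abcdef0",  # hypothetical mail-store volume
        Description="Nightly mail server backup",
    )
    print(snapshot["SnapshotId"])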

KeyCore’s Expertise

At KeyCore, we are experts in AWS automation and have successfully deployed open source mail servers for our customers. Our team of AWS professionals can help you with the deployment and configuration of your mail server, so you can get back to focusing on your business. We also provide ongoing support services to ensure that your mail server is running optimally at all times.

Read the full blog posts from AWS
