Summary of AWS blogs for the week of Monday, June 5, 2023

In the week of Monday, June 5, 2023, AWS published 122 blog posts. Here is an overview of what happened.

Topics Covered

Desktop and Application Streaming

Gartner Digital Workplace Summit: AWS EUC Solutions for North America

The Gartner Digital Workplace Summit (North America), taking place on June 12-13 in San Diego, is the perfect event for end-user computing (EUC) leaders, practitioners, and service providers from around the world. Attendees get the opportunity to share information and discuss the latest developments in the industry.

Using Amazon CloudWatch to Analyze User Access to Amazon WorkSpaces

At the AWS Summit in Washington D.C. on June 8th, a session (EUC202) will be hosted to discuss how to use Amazon CloudWatch to gain insight into how users are connecting to Amazon WorkSpaces. This session will cover topics such as tracking and reporting on IP addresses, platforms, and client versions being used to access Amazon WorkSpaces.

AWS Summit Washington, DC: What’s New with AWS and End User Computing Services

At the AWS Summit in Washington, DC, Tushar Dhanani and other speakers are hosting a chalk talk (session EUC201) focused on what’s new with AWS and End User Computing (EUC) services. They will provide a high-level overview of AWS EUC services and recently released features for Amazon WorkSpaces and Amazon AppStream 2.0.

How KeyCore can Help with Desktop and Application Streaming

At KeyCore, we offer both professional and managed services for any business looking to set up desktop and application streaming. Our team has deep AWS expertise and can provide the guidance and support you need to implement streaming solutions successfully. We have extensive experience in setting up Amazon WorkSpaces, and can provide tailored solutions for each of our clients.

Read the full blog posts from AWS

AWS DevOps Blog

How to Balance Governance and Agility with AWS Developer Tools for a Multicloud Environment

As organizations move towards a “cloud first” strategy, many are migrating to a multicloud environment. Although a single primary cloud provider can offer the best experience, performance, and cost structure, some organizations choose to use multiple cloud services for a variety of reasons. To ensure success, organizations must balance governance and agility while utilizing AWS developer tools in their multicloud environment.

Balancing Governance and Agility

When implementing DevSecOps, organizations seek to strike a balance between governance and agility. Governance refers to security and best practices that organizations should adhere to in order to protect their systems, data, and users. Agility, on the other hand, enables developers to move quickly and innovate. AWS CodeBuild is a fully managed continuous integration service that helps organizations achieve this balance by automating the build, test, and deployment process.

CodeBuild helps to reduce the time and cost associated with testing and deploying applications and services to the cloud. It allows organizations to ensure adherence to best practices and security policies without sacrificing agility. Additionally, CodeBuild automates the build process so that developers can focus on innovation and improvement rather than manually running tests.

Modernizing for Cloud Operations

Over the past decade, the relationship between IT operations and application developers has changed rapidly. Where IT operations teams used to be responsible for maintaining the servers, storage, DNS, and other components for the application, developers are now taking on more of the responsibility. This change is due to the increased complexity of cloud applications and the need for organizations to embrace DevSecOps.

AWS provides a suite of developer tools to help organizations modernize their cloud operations. These tools enable developers to quickly build, test, and deploy applications on the cloud without sacrificing security or compliance. Additionally, these tools provide organizations with the flexibility to quickly adapt to changes in the cloud environment.

How KeyCore Can Help

At KeyCore, we are experts in AWS and cloud computing. Our team of AWS professionals can help you deploy workloads in a multicloud environment using AWS developer tools. We can also assist you in balancing governance and agility with AWS CodeBuild, and modernizing your cloud operations with the full suite of AWS developer tools. Contact us today for more information.

Read the full blog posts from AWS

Official Machine Learning Blog of Amazon Web Services

Unlock the Potential of Machine Learning with Amazon Web Services

ONNX Models

Amazon SageMaker makes it simple to host ML models using NVIDIA Triton Inference Server. ONNX (Open Neural Network Exchange) is an open-source standard for representing deep learning models. It enables optimization and quantization of models to reduce the memory and compute needed to run them, and its standardized format allows models to be shared between many frameworks and providers.
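
As a minimal sketch of the first step in this workflow, a trained PyTorch model can be exported to ONNX before it is packaged for a serving backend such as Triton (the model choice, shapes, and file name below are illustrative):

```python
# Export a PyTorch model to ONNX so it can be served by a backend such
# as Triton. Model, input shape, and file name are illustrative.
import torch
import torchvision.models as models

model = models.resnet50(weights=None)
model.eval()

# The dummy input fixes the input shape recorded in the ONNX graph;
# dynamic_axes keeps the batch dimension flexible at inference time.
dummy_input = torch.randn(1, 3, 224, 224)

torch.onnx.export(
    model,
    dummy_input,
    "resnet50.onnx",
    input_names=["input"],
    output_names=["output"],
    dynamic_axes={"input": {0: "batch"}, "output": {0: "batch"}},
)
```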

GraphStorm

GraphStorm is an open-source, low-code enterprise graph ML framework. It enables users to build, train, and deploy graph ML solutions on complex enterprise-scale graphs in days instead of months. With GraphStorm, solutions can take into account the structure of relationships or interactions between billions of items.

Amazon SageMaker Distribution

Data scientists need a secure, consistent, and reproducible environment for ML and data science workloads. AWS Deep Learning Containers provide pre-built Docker images for training and serving models in common frameworks like TensorFlow, PyTorch, and MXNet. Amazon SageMaker Distribution, now released as a public beta, builds on this with a single Docker environment that bundles the most widely used ML frameworks and data science packages, improving the day-to-day experience for data scientists.

Generative AI in Conversational Experiences

Customers expect quick and efficient service from businesses. With generative AI, businesses can provide personalized and efficient customer service. Amazon Lex, LangChain, and SageMaker JumpStart offer a powerful combination of generative AI capabilities.

Popularity Tuning for Similar-Items in Amazon Personalize

Amazon Personalize offers the ability to generate recommendations based on previous user behavior and item metadata. Now, with popularity tuning, customers can discover new items in the catalog with the Similar-Items recipe.

PyTorch 2.0 on AWS

PyTorch is widely used for a variety of applications, such as computer vision and natural language processing. PyTorch 2.0 brings improved performance at scale, and this two-part blog series covers how to build high-performance ML models with it on AWS.

Arrange your Transcripts with Amazon Transcribe

Amazon Transcribe is a speech recognition service that generates transcripts from video and audio files. It comes with a rich set of features, including automatic language identification and multi-speaker support. It also supports two modes of operation – batch and streaming.
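
As a minimal sketch, a batch transcription job with automatic language identification can be started from the AWS SDK for Python (bucket, key, and job names below are illustrative):

```python
# Start a batch transcription job with automatic language identification.
# Bucket, key, and job names are illustrative.
import boto3

transcribe = boto3.client("transcribe")

transcribe.start_transcription_job(
    TranscriptionJobName="weekly-meeting-2023-06-05",
    Media={"MediaFileUri": "s3://my-media-bucket/meeting.mp3"},
    IdentifyLanguage=True,              # let Transcribe detect the language
    OutputBucketName="my-transcripts-bucket",
)

# Poll for completion (a real workflow might react to an event instead)
job = transcribe.get_transcription_job(
    TranscriptionJobName="weekly-meeting-2023-06-05"
)
print(job["TranscriptionJob"]["TranscriptionJobStatus"])
```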

Build ML-Ready Datasets with Amazon SageMaker

Amazon SageMaker Feature Store is a purpose-built service for storing and retrieving feature data for ML models. It handles the synchronization of data between the online and offline stores. The Amazon SageMaker Python SDK makes it easy to build ML-ready datasets from the Amazon SageMaker offline Feature Store.
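
As a hedged sketch, one common way to build a training dataset from the offline store is the SageMaker Python SDK’s Athena integration; the SDK also offers higher-level dataset-building helpers. Feature group name, query, and S3 locations below are illustrative:

```python
# Query a feature group's offline store through the SageMaker Python
# SDK's Athena integration to produce an ML-ready DataFrame.
import sagemaker
from sagemaker.feature_store.feature_group import FeatureGroup

session = sagemaker.Session()
fg = FeatureGroup(name="customers-feature-group", sagemaker_session=session)

query = fg.athena_query()
query.run(
    query_string=f'SELECT * FROM "{query.table_name}" LIMIT 1000',
    output_location="s3://my-bucket/feature-store-query-results/",
)
query.wait()

df = query.as_dataframe()   # pandas DataFrame ready for training
```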

Amazon SageMaker Automatic Model Tuning

Automatic Model Tuning has a new feature called Autotune that automatically chooses hyperparameter ranges. This provides an accelerated and more efficient way to configure tuning jobs and to manage the budget and time spent on model tuning.

Train a Large Language Model on a Single Amazon SageMaker GPU

Large language models are providing valuable insights across massive datasets and a variety of tasks. With Amazon SageMaker, developers can now train a large language model on a single GPU instance.

Hugging Face LLM Inference Containers on Amazon SageMaker

As part of the AWS partnership with Hugging Face, a new Hugging Face Deep Learning Container (DLC) for inference with large language models has been released. This new Hugging Face LLM DLC runs on Amazon SageMaker.

Enhanced Table Extractions with Amazon Textract

Amazon Textract is an ML service that automatically extracts text, handwriting, and data from documents and images. It has a Tables feature that offers the ability to automatically extract tabular structures from any document. Recently, improvements have been made to the Tables feature to increase its accuracy and performance.
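
As a minimal sketch, the Tables feature is enabled by passing the TABLES feature type to the AnalyzeDocument API (bucket and object names below are illustrative):

```python
# Extract tables from a single-page document in S3 with the Tables
# feature. Bucket and object names are illustrative.
import boto3

textract = boto3.client("textract")

response = textract.analyze_document(
    Document={"S3Object": {"Bucket": "my-docs-bucket", "Name": "invoice-page1.png"}},
    FeatureTypes=["TABLES"],
)

# TABLE blocks reference CELL blocks, which in turn reference WORD blocks
tables = [b for b in response["Blocks"] if b["BlockType"] == "TABLE"]
print(f"Found {len(tables)} table(s)")
```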

Technology Innovation Institute Trains the State-of-the-Art Falcon LLM

The Technology Innovation Institute (TII) has launched Falcon LLM, a state-of-the-art foundation large language model trained on large multilingual datasets. TII trained the model on Amazon SageMaker with the goal of creating language models for different languages and use cases.

Retrain ML Models with Updated Datasets in Amazon SageMaker Canvas

Amazon SageMaker Canvas now enables users to retrain ML models and automate batch prediction workflows with updated datasets. By constantly learning and improving the model performance, users can drive efficiency.

Expedite the Amazon Lex Chatbot Development Lifecycle

Amazon Lex has introduced Test Workbench, a new bot testing solution that simplifies and automates the testing process. Test Workbench helps developers identify errors, defects, or bugs in the system before scaling.

Unlock the Potential of Machine Learning with KeyCore

At KeyCore, the leading Danish AWS consultancy, we provide professional services and managed services that help our customers unlock the potential of machine learning with Amazon Web Services. Our team of experts can help you set up state-of-the-art training techniques for distributed training, such as mixed precision support, gradient accumulation, and checkpointing. We can also help you develop and test bots, retrain models on updated datasets, build ML-ready datasets, tune models, and train large language models. We have the skill and knowledge to help you make the most out of your ML projects with AWS. Contact us today to learn more about how KeyCore can help you unlock the potential of machine learning.

Read the full blog posts from AWS

Announcements, Updates, and Launches

Welcome the Latest AWS Heroes and Take Advantage of New SQS APIs – June 2023

This month, we would like to thank and welcome the latest members of the AWS Heroes program – a group of individuals who dedicate their time and expertise to helping others build better and faster on the AWS platform. AWS Heroes contribute to open source projects, host meetups, organize AWS Community Days, give talks at conferences, mentor builders, and much more.

In addition to welcoming the new AWS Heroes, AWS is launching a set of new APIs for Amazon SQS dead-letter queue (DLQ) redrive. Using the AWS SDKs or the AWS CLI, users can now programmatically move messages from a DLQ back to the original source queue or to a custom queue destination, giving customers an easier way to manage their DLQs.
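
As a minimal sketch, the new redrive APIs look like this in Python (the queue ARNs and rate limit are illustrative):

```python
# Redrive messages from a DLQ back to the original source queue using
# the new message-move APIs. ARNs and the rate limit are illustrative.
import boto3

sqs = boto3.client("sqs")

sqs.start_message_move_task(
    SourceArn="arn:aws:sqs:eu-west-1:111122223333:orders-dlq",
    # Omit DestinationArn to redrive to the original source queue(s);
    # set it to move the messages to a custom destination queue instead.
    MaxNumberOfMessagesPerSecond=50,
)

# Check on the progress of the redrive task
tasks = sqs.list_message_move_tasks(
    SourceArn="arn:aws:sqs:eu-west-1:111122223333:orders-dlq"
)
print(tasks["Results"][0]["Status"])
```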

Finally, this week’s AWS Week in Review featured the general availability of Amazon Security Lake, new Actions on AWS Fault Injection Simulator, and more. With Amazon Security Lake, customers can collect, store, and analyze all of their security-related data in one place for better visibility into potential threats. AWS Fault Injection Simulator also provides a cost-effective way for customers to test and improve the resiliency of their applications.

Reap the Benefits with KeyCore

At KeyCore, we specialize in both professional services and managed services related to the AWS platform. Our services can help you take advantage of the latest AWS updates, such as the new SQS APIs for DLQ redrive, and Amazon Security Lake for improved security visibility. We can also help you get the most out of AWS Fault Injection Simulator for application resiliency testing. Contact us today to learn more about how we can help you get the most out of the AWS platform.

Read the full blog posts from AWS

Containers

Migrating Cron Jobs to Containers with Amazon ECS and Amazon EventBridge

Many customers use cron jobs on on-premises systems to schedule tasks, but this traditional approach makes it hard to unlock the scalability of the cloud. Amazon Elastic Compute Cloud (Amazon EC2) can be used for a lift-and-shift migration, but it doesn’t make the most of cloud-native services. The solution is to migrate cron jobs to event-driven architectures built on Amazon Elastic Container Service (Amazon ECS) and Amazon EventBridge.

Amazon ECS provides a unified platform for running containers, while Amazon EventBridge enables customers to build event-driven serverless applications. Both of these services can be used to migrate cron jobs to containers, providing customers with a simple and efficient approach.

Amazon ECS can be used to schedule tasks as containers and run them using AWS Fargate. This allows customers to quickly deploy and scale their applications, and provides a unified platform for managing containers. Amazon EventBridge can be used to trigger container-based tasks, such as creating, updating, and deleting resources. In addition, customers can use Amazon EventBridge to trigger other AWS services, such as Amazon CloudWatch, Amazon DynamoDB, and AWS Lambda.
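
As a minimal sketch, a cron entry can be replaced with an EventBridge rule that launches a Fargate task on the same schedule (all names, ARNs, and subnet IDs below are illustrative):

```python
# Replace a cron entry with an EventBridge rule that launches a Fargate
# task on a schedule. All names, ARNs, and subnet IDs are illustrative.
import boto3

events = boto3.client("events")

events.put_rule(
    Name="nightly-report",
    ScheduleExpression="cron(0 2 * * ? *)",  # 02:00 UTC every day
)

events.put_targets(
    Rule="nightly-report",
    Targets=[{
        "Id": "run-report-task",
        "Arn": "arn:aws:ecs:eu-west-1:111122223333:cluster/jobs",
        "RoleArn": "arn:aws:iam::111122223333:role/ecsEventsRole",
        "EcsParameters": {
            "TaskDefinitionArn": "arn:aws:ecs:eu-west-1:111122223333:task-definition/report:1",
            "LaunchType": "FARGATE",
            "NetworkConfiguration": {
                "awsvpcConfiguration": {
                    "Subnets": ["subnet-0abc1234"],
                    "AssignPublicIp": "DISABLED",
                }
            },
        },
    }],
)
```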

Amazon ECS and Amazon EventBridge can also be used to monitor the performance of containers. By tracking the performance of containers in Amazon ECS, customers can detect and troubleshoot any issues quickly and easily.

Container Image Signing with AWS Signer and Amazon EKS

AWS has announced the launch of AWS Signer Container Image Signing, a new capability that gives customers native AWS support for signing and verifying container images stored in container registries like Amazon Elastic Container Registry (Amazon ECR). AWS Signer is a fully managed code signing service that helps ensure the trust and integrity of container images, making it easier for customers to protect their applications from malicious images.

AWS Signer signs container images stored in Amazon ECR and verifies that an image was signed by an authenticated source, which helps prevent tampering with the image. This makes it easier for customers to ensure the security of their container images.

AWS Signer also works with Amazon EKS, allowing customers to run signed container images on their Amazon EKS clusters. This helps customers create secure and compliant container-based applications.

Happy 5th Birthday Amazon EKS!

We are thrilled to celebrate the 5th anniversary of Amazon Elastic Kubernetes Service (Amazon EKS). Since its launch in 2018, Amazon EKS has served tens of thousands of customers worldwide in running resilient, secure, and scalable container-based applications. Amazon EKS, using upstream Kubernetes, has become the most widely adopted Kubernetes-as-a-Service offering, and it is now the fastest growing service in AWS.

Amazon EKS enables customers to run Kubernetes on AWS with just a few clicks. It provides customers with a managed, secure environment for running container-based applications, and it can be used to manage clusters with up to thousands of nodes. It also helps customers save time and money by providing built-in observability and scalability features.

At KeyCore, we are committed to helping our customers make the most of modern technologies like Amazon EKS. We’re here to provide guidance and best practices for customers who are new to Amazon EKS, as well as to help those who are already using it to improve their deployment and management processes. We also provide a full range of professional services and managed services to help customers get the most out of Amazon EKS. Contact us today to learn more about how we can help you unlock the full potential of the cloud with Amazon EKS.

Read the full blog posts from AWS

AWS Smart Business Blog

How the Latest Manufacturing Technology Is Transforming Small and Medium Businesses

In the last few years, manufacturing technologies have had a ripple effect on businesses across many different industries. This means that small and medium businesses (SMBs) in any sector need to keep an eye out for emerging trends that are transforming the manufacturing landscape.

By staying updated on these trends, SMBs can increase their competitive edge and make better-informed decisions. In this article, we’ll dive into the five major trends that are currently changing manufacturing technology and how SMBs can leverage them.

1. Automation

Automation is the process of using technology to reduce manual labor. It’s one of the most prevalent trends in the manufacturing industry and is helping to increase efficiency and reduce costs. Automation can also help with quality control and improve product consistency. SMBs can take advantage of automation by investing in new technology such as robotic process automation (RPA) or AI-augmented solutions.

2. Industry 4.0

Industry 4.0 is a term used to describe the fourth industrial revolution. It refers to the integration of digital technologies—including automation, data exchange, analytics, and AI—into the manufacturing process. By leveraging Industry 4.0 technologies, SMBs can increase their operational efficiency and reduce their production costs.

3. Additive Manufacturing

Additive manufacturing is a process of creating 3D objects by building up layers of material. It’s often used in the manufacturing of small parts and components, and can help to reduce costs and streamline production. SMBs can take advantage of additive manufacturing by investing in 3D printing technology.

4. Digital Twins

Digital twins are virtual representations of physical products. They’re used to monitor and analyze the performance of products in real time, and can help to identify potential issues before they become major problems. SMBs can use digital twins to better understand their products and make more informed decisions.

5. Cloud Computing

Cloud computing is a type of computing that enables organizations to access data and applications over the internet. It’s becoming increasingly popular in the manufacturing industry, as it can help to reduce costs and increase scalability. SMBs can take advantage of cloud computing by using cloud-based software and services to store and analyze data.

How KeyCore Can Help

At KeyCore, we understand that staying up to date on the latest trends in manufacturing technology can be daunting for SMBs. That’s why we offer comprehensive AWS cloud consulting services designed to help SMBs leverage the latest technologies and stay ahead of the competition. Our team of experienced AWS experts can help you identify the right solutions for your needs and implement them in your manufacturing process. Contact us today to learn more.

Read the full blog posts from AWS

Official Database Blog of Amazon Web Services

Power Utilities Analyze and Detect Harmonic Issues With Amazon Timestream

In this two-part series, AWS demonstrates how power utilities can use the Amazon Timestream database and its time series functionality to identify harmonic issues at scale. An electricity utility normally engages in both electricity generation and distribution, and in order to maintain the quality of the service, it is important that the electrical current is free of harmonics and distortions.

Using Timestream for this purpose allows power utilities to analyze customer usage data as well as power quality data in order to detect possible harmonic issues quickly. By correlating metrics across millions of customers, the process can be automated for large-scale data handling.
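
As a hedged sketch, a Timestream query flagging meters with elevated total harmonic distortion might look like this (the database, table, measure name, and 5% threshold are illustrative assumptions):

```python
# Flag meters whose total harmonic distortion (THD) exceeded 5% in the
# last hour. Database, table, and measure names are illustrative.
import boto3

tsq = boto3.client("timestream-query")

query = """
SELECT meter_id,
       BIN(time, 1m) AS minute,
       AVG(measure_value::double) AS avg_thd
FROM "power_quality"."measurements"
WHERE measure_name = 'total_harmonic_distortion'
  AND time > ago(1h)
GROUP BY meter_id, BIN(time, 1m)
HAVING AVG(measure_value::double) > 0.05
"""

response = tsq.query(QueryString=query)
for row in response["Rows"]:
    print([datum.get("ScalarValue") for datum in row["Data"]])
```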

Additional Software Components on Amazon RDS Custom for Oracle

Amazon Relational Database Service (Amazon RDS) Custom is a managed database service that provides greater flexibility compared to a typical managed relational database service. Specifically, Amazon RDS Custom for Oracle is built for legacy, custom, and packaged applications, as well as for customers who want to customize the database, underlying server, and/or operating system configurations.

Optimizing SQL Server Costs with Bring Your Own Media (BYOM) on Amazon RDS Custom for SQL Server

Organizations are migrating their Microsoft SQL Server workloads to AWS managed database services like Amazon Relational Database Service (Amazon RDS) for SQL Server or Amazon RDS Custom for SQL Server. This makes it easy to set up, operate, and scale SQL Server deployments in the cloud. To optimize Amazon RDS costs, customers can also use bring your own media (BYOM) on Amazon RDS Custom for SQL Server.

Migrating an Informix Database to Amazon Aurora PostgreSQL Using CData Connect Cloud from Within AWS Glue Studio

Amazon Aurora PostgreSQL-Compatible Edition is a fully managed PostgreSQL-compatible database engine running in AWS. It is a drop-in replacement for PostgreSQL, cost-effective to set up, operate, and scale, and can be deployed for new or existing applications. Meanwhile, Informix is a relational database management system from IBM and supports OLTP and other workloads.

With CData Connect Cloud from within AWS Glue Studio, customers can migrate an Informix database to Amazon Aurora PostgreSQL. This process can be automated and simplified with CData Connect Cloud’s pre-built connectors to Informix and Amazon Aurora PostgreSQL, and customers can quickly and securely move the data between the two databases.

Improving Application Performance on Amazon RDS for MySQL and MariaDB Instances and MySQL Multi-AZ DB Clusters with Optimized Writes

Amazon Relational Database Service (Amazon RDS) makes it easy to set up, operate, and scale deployments of MySQL and MariaDB in the cloud. Customers run different types of workloads on Amazon RDS for MySQL and Amazon RDS for MariaDB, and can use read replicas to scale read options of their workloads. To further improve application performance, customers can enable Optimized Writes.

Optimized Writes provides up to a two times performance improvement by reducing disk I/O and increasing utilization of RAM. This feature is available for Amazon RDS for MySQL and MariaDB instances and MySQL Multi-AZ DB clusters, and provides a simple and cost-effective way to improve the performance of your applications.

Working with Date and Timestamp Data Types in Amazon DynamoDB

Amazon DynamoDB customers often need to work with dates and times in DynamoDB tables, so it’s important to be able to query date and time data efficiently. DynamoDB has no dedicated date or timestamp data type; dates and times are stored using the string, number, or binary types.

Representing timestamps consistently, for example as ISO 8601 strings or numeric epoch values, keeps the data accurate and enables efficient query processing. Because ISO 8601 strings sort lexicographically in chronological order, key condition operators such as BETWEEN can be used for efficient date-range queries.
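
A minimal sketch of that pattern, assuming a table with a customer_id partition key and a created_at ISO 8601 sort key (all names are illustrative):

```python
# Store timestamps as ISO 8601 strings in a sort key and query a date
# range with BETWEEN. Table and key names are illustrative.
import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("orders")

# ISO 8601 strings sort lexicographically in chronological order, so
# BETWEEN on the sort key gives an efficient date-range query.
response = table.query(
    KeyConditionExpression=(
        Key("customer_id").eq("C-1001")
        & Key("created_at").between("2023-06-01T00:00:00Z",
                                    "2023-06-07T23:59:59Z")
    )
)
print(response["Items"])
```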

Generating Suggestions for Leisure Activities in Real Time With Amazon Neptune

DoGet App is a mobile application that connects friends for sharing in-person moments together. Suggestions for activities to engage in with friends are presented to users in a card deck format. Swiping up indicates no interest in an activity, and swiping down indicates interest and prompts a follow-up on when a user is available.

To provide this functionality, DoGet App uses Amazon Neptune to generate suggestions for activities in real time. By leveraging the graph-based data model of Neptune, activity suggestions are generated quickly and accurately, and are personalized based on the user’s interests and preferences.

Securing Applications Running on Amazon RDS for SQL Server

Amazon Relational Database Service (Amazon RDS) for SQL Server supports several security features that protect data both in transit and at rest. These features provide separation of duties and auditing capabilities, the majority of which are built into SQL Server. Examples of these features include authentication, encryption, auditing, network isolation, and data masking.

By using these features, customers can better protect their application data on AWS, and ensure that it is secure both in transit and at rest.

Announcing Amazon Keyspaces Multi-Region Replication

Amazon Keyspaces (for Apache Cassandra) is a scalable, highly available, and managed Apache Cassandra-compatible database service. With the introduction of Amazon Keyspaces Multi-Region Replication, customers can now run their Cassandra workloads on AWS with the same Cassandra application code and developer tools they use today.

Amazon Keyspaces Multi-Region Replication replicates data automatically and asynchronously across the Regions you select, so writes in one Region become readable in the others. This allows customers to build applications that require low latency and high availability, while reducing the complexity of multi-Region deployment.

Best Practices for Migrating SQL Server MERGE Statements to Babelfish for Aurora PostgreSQL

When migrating a SQL Server database to Babelfish for Aurora PostgreSQL, both automated and manual tasks need to be performed. The automated tasks involve automatic code conversion using the Babelfish Compass tool with the -rewrite flag and data migration using AWS Database Migration Service (AWS DMS). The manual tasks involve database compatibility check using an evaluation tool and a code review.

When migrating MERGE statements, it is important to pay close attention to the target database and the features it supports. In Babelfish for Aurora PostgreSQL, the MERGE statement syntax differs from that used by SQL Server, and certain SQL Server features, such as MERGE INTO..FROM, are not supported.

By following these best practices for migrating MERGE statements to Babelfish for Aurora PostgreSQL, customers can ensure that their code is migrated accurately, and can take advantage of the features and benefits offered by Babelfish for Aurora PostgreSQL.

KeyCore’s Expertise in Cloud Migration and Database Services

At KeyCore, we provide both professional services and managed services for our customers. Our team of AWS experts have extensive experience in cloud migration and database management, and are well-versed in best practices and optimization techniques for AWS managed database services and related technologies.

Whether you’re looking to migrate an existing database to the cloud, or are interested in setting up a new application on an Amazon RDS instance, KeyCore is here to help. Our team can assist with the entire process: from designing the database architecture, to migrating the data, to ensuring that security and performance are optimized.

Read the full blog posts from AWS

AWS for Games Blog

Using Amazon GameLift to bring Crossfar and Super Mega Baseball 4 to Life

Crossfar and Super Mega Baseball 4 are two popular video games that use Amazon GameLift to provide a smooth and reliable gaming experience. Crossfar, an indie game, is a sci-fi take on esports where gamers battle head-to-head in a zero-gravity sphere arena set amongst the stars. Super Mega Baseball 4, developed by Metalhead as part of EA SPORTS, provides users with a cross-play experience across six platforms.

Realizing an Out-of-This-World Vision with Amazon GameLift

Crossfar was developed in only two years by a single indie developer. It utilizes Amazon GameLift to provide a smooth and reliable gaming experience for the players as they defend their gate, destroy enemy defenses, and rely on tactical decision-making to come out on top. By using Amazon GameLift, Crossfar allows players to experience the game with minimal lag, low latency, and no disruptions.

Cross-Platform Gaming with Amazon GameLift

Metalhead, as part of EA SPORTS, has taken the Super Mega Baseball franchise to new heights with Super Mega Baseball 4. Not only is the game visually stunning, it also utilizes Amazon GameLift to provide players with a cross-play experience across six platforms. Amazon GameLift allows for a reliable connection between players on different platforms, providing an optimized gaming experience with minimal lag, low latency, and no disruptions.

KeyCore and Amazon GameLift

At KeyCore, we provide professional and managed services to help our clients leverage Amazon GameLift to provide their players with the best gaming experience. Our team of AWS Certified Solutions Architects and DevOps engineers can help you build and maintain a scalable infrastructure that supports your gaming needs. Whether you’re looking to deploy Crossfar or Super Mega Baseball 4, our team has the skills and experience to help make your vision a reality.

Read the full blog posts from AWS

AWS Training and Certification Blog

Grow Your Serverless Expertise with AWS Training and Certification

As businesses continue to leverage serverless, event-driven architectures to power their applications, the need for individuals with serverless expertise has grown. To help professionals stay ahead of the curve, AWS Training and Certification has created a learning path to build serverless skills and earn digital badges.

Build Your Knowledge with AWS Training

The AWS Training program is designed to give professionals the skills they need to build, deploy, and scale applications on AWS. AWS Training offers a range of courses, both free and paid, that teach the fundamentals of serverless architectures. Courses such as Developing on AWS and Architecting on AWS teach the best practices for using AWS services in serverless applications. Additionally, AWS Training also offers courses such as the AWS Serverless Application Bootcamp that are tailored specifically for serverless.

In addition to courses, AWS Training also offers hands-on workshops that help individuals learn by doing. In these workshops, participants use their own AWS account to build serverless applications, an immersive approach that gives them the practical experience they need to start building on their own.

Prove Your Skills with AWS Digital Badges

Once individuals have built their knowledge of serverless architectures, they can prove their expertise by earning digital badges. AWS digital badges are awarded after individuals complete AWS training courses such as the AWS Serverless Application Bootcamp. They are the official recognition from AWS for the serverless skills individuals have learned, and demonstrate that they understand the best practices for building, deploying, and managing serverless applications.

Individuals can add their digital badges to their resume, LinkedIn profile, and other professional networks to demonstrate their expertise and stand out in the job market. Additionally, digital badges can also be shared with colleagues and peers to demonstrate knowledge and promote collaboration.

KeyCore Can Help You Get Started

At KeyCore, we have helped numerous clients harness the power of AWS to build and deploy serverless applications. Our experienced team of AWS professionals can provide the guidance you need to get started with serverless and provide the best practices for building and deploying applications. We also have a range of managed services that can help you scale your serverless applications.

If you are looking to get started with serverless, contact us today to learn more about how we can help.

Read the full blog posts from AWS

Official Big Data Blog of Amazon Web Services

Monitoring Costs and Optimizing EMR Deployments with Amazon EKS

Amazon EMR is the industry-leading cloud big data solution, providing a collection of open-source frameworks such as Spark, Hive, Hudi, and Presto, fully managed and billed per second. Amazon EMR on Amazon EKS is a deployment option that allows users to run EMR workloads on shared Amazon Elastic Kubernetes Service (Amazon EKS) clusters, helping them maximize the cost efficiency of running big data workloads on AWS.

Amazon EMR on EKS provides users with the ability to optimize their clusters and to monitor their costs. With the help of EKS, users can easily scale their Amazon EMR clusters to process big data workloads, while leveraging the cost-efficiency of the EKS services. Additionally, users can use EKS to monitor the usage of their Amazon EMR clusters, helping to ensure that they are running as efficiently and cost effectively as possible.

Choosing an Open Table Format for Your Transactional Data Lake on AWS

A modern data architecture enables companies to ingest virtually any type of data through automated pipelines into a data lake, which provides highly durable and cost-effective object storage at petabyte or exabyte scale. This data is then projected into analytics services such as data warehouses, search systems, stream processors, query editors, notebooks, and machine learning. To ensure that the data used to power these services is up to date and accurate, organizations must ensure that they use the right data storage format.

Organizations have multiple options when choosing how to store this data. Open table formats such as Apache Iceberg, Apache Hudi, and Delta Lake are becoming increasingly popular for transactional data lakes: they add capabilities like ACID transactions, record-level updates and deletes, and schema evolution on top of columnar file formats such as Apache Parquet, which increases speed and efficiency when querying data. These formats also provide a rich feature set, including support for nested data models and various compression techniques.

Organizations of any size can benefit from using open table formats in their data lakes. With the right data storage format, organizations can more easily access and analyze data to generate actionable insights and drive business decisions. By leveraging the features of open table formats, organizations can ensure that their data is up to date and accurate, and can quickly act on it.

Implementing Alerts in Amazon OpenSearch Service with PagerDuty

In today’s fast-paced digital world, businesses rely heavily on their data to make informed decisions. This data is often stored and analyzed using various tools, such as Amazon OpenSearch Service, a powerful search and analytics service offered by AWS. OpenSearch Service provides real-time insights into data to support use cases like interactive log analytics, clickstream analysis for personalization, and even fraud detection.

To ensure that your data is being stored and analyzed correctly, it is important to set up alerts for any issues or anomalies that may arise. With the help of PagerDuty and Amazon OpenSearch Service, users can easily set up alerts to be notified of any issues that may arise. PagerDuty’s flexible alerting capabilities allow users to customize their alerts and set up escalations to ensure any issues are quickly addressed.

By leveraging the power of PagerDuty and OpenSearch Service, users can ensure that their data is stored and analyzed correctly, and that any issues are quickly addressed. This enables businesses to make informed decisions quickly and confidently.

Automating and Accelerating Your Amazon QuickSight Asset Deployments with New APIs

Business intelligence (BI) and IT operations (BIOps) teams often need to automate and accelerate the deployment of BI assets to ensure business continuity. To address this, AWS now offers new APIs that allow BIOps teams to deploy, back up, and replicate Amazon QuickSight assets at scale.

The new APIs allow BIOps teams to quickly and reliably manage and deploy QuickSight assets. For example, they can use the APIs to quickly provision new data sources, create new dashboards and datasets, and update existing dashboards. Additionally, they can use the APIs to copy and replicate assets across multiple accounts, ensuring that data is available in the right place at the right time.
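
As a hedged sketch, the new asset bundle export APIs can be called like this from Python (account ID, job ID, and ARNs are illustrative):

```python
# Export a dashboard and its dependencies as an asset bundle so it can
# be backed up or imported into another account. IDs are illustrative.
import boto3

qs = boto3.client("quicksight")

qs.start_asset_bundle_export_job(
    AwsAccountId="111122223333",
    AssetBundleExportJobId="weekly-backup-001",
    ResourceArns=[
        "arn:aws:quicksight:eu-west-1:111122223333:dashboard/sales-dashboard"
    ],
    IncludeAllDependencies=True,   # pull in datasets, data sources, themes
    ExportFormat="QUICKSIGHT_JSON",
)

# Poll the job; when finished, the response includes a download URL
job = qs.describe_asset_bundle_export_job(
    AwsAccountId="111122223333",
    AssetBundleExportJobId="weekly-backup-001",
)
print(job["JobStatus"])
```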

By leveraging the new APIs for Amazon QuickSight, BIOps teams can automate and accelerate the deployment of BI assets. This ensures that businesses can access the data they need quickly and reliably, allowing them to make informed decisions and drive better business outcomes.

How Cargotec Uses Metadata Replication to Enable Cross-Account Data Sharing

This is a guest blog post co-written by Sumesh M R from Cargotec and Tero Karttunen from Knowit Finland. Cargotec is a Finnish company that specializes in cargo handling solutions and services, with operations in over 100 countries worldwide. Cargotec needed a way to easily replicate their metadata between different accounts while ensuring that the data remained secure and compliant. To address this, they implemented metadata replication using AWS Glue and AWS Lake Formation.

With AWS Glue and Lake Formation, Cargotec was able to replicate their metadata while ensuring that any changes to the source were automatically propagated to the target, without the need to manually reconcile the data. This allowed Cargotec to easily share data across accounts while ensuring that their data remained secure and compliant. Additionally, Cargotec was able to take advantage of the scalability and cost savings offered by AWS.

Metadata replication is a powerful tool for organizations that need to share data across accounts. With the help of AWS Glue and AWS Lake Formation, organizations can replicate their metadata while ensuring that any changes are automatically propagated, allowing them to easily share data across accounts while remaining secure and compliant.

Introducing Amazon EMR on EKS Job Submission with Spark Operator and spark-submit

Amazon EMR on EKS provides users with a deployment option for Amazon EMR, allowing organizations to run open-source big data frameworks on Amazon Elastic Kubernetes Service (Amazon EKS). With EMR on EKS, users can take advantage of the performance-optimized EMR runtime for Apache Spark. This runtime allows users to run Spark jobs quickly and efficiently.

Additionally, users can now submit Spark jobs to their Amazon EMR on EKS clusters using the new spark-submit and Spark Operator. This allows users to easily manage and submit Spark jobs to their Amazon EMR on EKS clusters. With these tools, users can easily manage their Amazon EMR on EKS clusters, ensuring that they are running efficiently and cost effectively.

By combining the Spark Operator and spark-submit with the performance-optimized EMR runtime, users get a flexible, Kubernetes-native way to run Spark on Amazon EMR on EKS while keeping their clusters running efficiently and cost effectively.

AWS Glue Data Quality is Now Generally Available

AWS has announced the general availability of AWS Glue Data Quality, a capability that enables customers to measure and monitor the quality of data in their data repositories. With Data Quality, users can ensure that their data is accurate and up to date, allowing them to make confident business decisions and maintain data integrity.

Data Quality provides a range of features that allow customers to measure, monitor, and manage data quality. For example, customers can validate data against a range of criteria, generate real-time quality scores, and set up monitored alerts and notifications. Additionally, users can leverage the power of Data Quality to automate data quality checks, which helps ensure that their data is accurate and up to date.
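
As a hedged sketch, a ruleset written in Glue’s Data Quality Definition Language (DQDL) can be created and evaluated like this (database, table, rule thresholds, and the role ARN are illustrative):

```python
# Define a DQDL ruleset for a Data Catalog table and run an evaluation
# against it. Names, thresholds, and the role ARN are illustrative.
import boto3

glue = boto3.client("glue")

glue.create_data_quality_ruleset(
    Name="orders-basic-checks",
    Ruleset=(
        'Rules = ['
        ' IsComplete "order_id",'
        ' Uniqueness "order_id" > 0.99,'
        ' ColumnValues "amount" > 0'
        ']'
    ),
    TargetTable={"DatabaseName": "sales", "TableName": "orders"},
)

run = glue.start_data_quality_ruleset_evaluation_run(
    DataSource={"GlueTable": {"DatabaseName": "sales", "TableName": "orders"}},
    Role="arn:aws:iam::111122223333:role/GlueDataQualityRole",
    RulesetNames=["orders-basic-checks"],
)
print(run["RunId"])
```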

By leveraging the power of AWS Glue Data Quality, customers can ensure that their data is accurate and up to date. This helps customers make better decisions and drive better business outcomes.

Visualizing Data Quality Scores and Metrics Generated by AWS Glue Data Quality

AWS Glue Data Quality enables customers to measure and monitor the quality of their data. To help customers gain insight into the data quality, Data Quality generates a range of operational runtime information. This includes data quality scores, metrics, and other data quality-related information.

To ensure that customers can easily access and analyze this information, Data Quality provides a range of visualization tools. These tools allow customers to visualize their data quality scores and metrics, helping them gain insight into the quality of their data. Additionally, customers can use the visualization tools to debug any data quality issues that may arise.

By leveraging the visualization tools provided by AWS Glue Data Quality, customers can easily access and analyze their data quality scores and metrics. This helps customers gain insight into their data quality and debug any issues that may arise.

Read the full blog posts from AWS

Networking & Content Delivery

Automating NLB Target Groups Scaling for Networking Connections Performance

When workload performance depends on the number of networking connections, traditional load balancing metrics like CPU load or memory utilization might not provide you with the information you need in order to make scaling decisions. In this post, we explore a solution that automatically scales backend connections of a Network Load Balancer (NLB) target group based on the number of connections available.

Using CloudWatch Metrics to Get Connection Data

The solution leverages CloudWatch metrics that measure the number of connections available to, and used by, the target group. These metrics are available for each of the targets in the target group. To track connections available, the solution uses the “Networking/ELB/Targets/EstablishedConnections” metric, and to track connections used, it uses the “Networking/ELB/Targets/TargetConnections” metric.

Using Lambda to Automate Scaling of the Target Group

The solution uses AWS Lambda functions to process the CloudWatch metrics and assess the number of connections available to the target group. If the number of connections available is below a threshold, the Lambda function scales out the target group by adding more targets. If the number of connections available is above the threshold, the Lambda function removes targets from the target group.
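
A hedged sketch of that Lambda logic, assuming the metric namespace and names quoted above and a pool of pre-provisioned spare targets (the target group ARN, instance ID, and thresholds are illustrative):

```python
# Simplified scaling logic: read the "connections available" metric and
# register or deregister a spare target. The metric namespace and names
# follow the post and are assumptions; ARNs, IDs, and thresholds are
# illustrative.
import datetime
import boto3

cloudwatch = boto3.client("cloudwatch")
elbv2 = boto3.client("elbv2")

TARGET_GROUP_ARN = "arn:aws:elasticloadbalancing:eu-west-1:111122223333:targetgroup/app/0123456789abcdef"
SPARE_TARGET_ID = "i-0spare1234"
MIN_AVAILABLE, MAX_AVAILABLE = 500, 5000

def handler(event, context):
    now = datetime.datetime.now(datetime.timezone.utc)
    stats = cloudwatch.get_metric_statistics(
        Namespace="Networking/ELB/Targets",
        MetricName="EstablishedConnections",
        Dimensions=[{"Name": "TargetGroup", "Value": TARGET_GROUP_ARN}],
        StartTime=now - datetime.timedelta(minutes=5),
        EndTime=now,
        Period=300,
        Statistics=["Average"],
    )
    # Use the most recent datapoint (the API does not guarantee order)
    datapoints = sorted(stats["Datapoints"], key=lambda d: d["Timestamp"])
    available = datapoints[-1]["Average"] if datapoints else 0

    if available < MIN_AVAILABLE:
        # Scale out: add a pre-provisioned target to the group
        elbv2.register_targets(
            TargetGroupArn=TARGET_GROUP_ARN,
            Targets=[{"Id": SPARE_TARGET_ID}],
        )
    elif available > MAX_AVAILABLE:
        # Scale in: remove the spare target again
        elbv2.deregister_targets(
            TargetGroupArn=TARGET_GROUP_ARN,
            Targets=[{"Id": SPARE_TARGET_ID}],
        )
```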

Conclusion

This solution provides a way to automate the scaling of NLB target groups based on connection performance. With this solution, you can ensure that the number of connections available to the target group is always within an acceptable threshold, without manual intervention.

At KeyCore, we have vast knowledge and experience with NLBs and managing workloads in the cloud. We can help you design a suitable solution for your workloads and guide you in its implementation. Contact us today to learn more about our AWS-based consulting services.

Read the full blog posts from AWS

AWS Compute Blog

Cost-Effective Capacity Reservations for Business-Critical Workloads with Amazon EC2

“Everything fails, all the time” is a famous saying from Amazon CTO Werner Vogels. Building resilient systems with scalability, fault tolerance, and business continuity in mind is a key factor in system design, and Amazon EC2 offers Capacity Reservations to help customers manage their business-critical workloads cost-effectively.

What are Capacity Reservations?

Capacity Reservations are a way for an AWS customer to reserve Amazon EC2 compute capacity for their applications. This gives customers increased control over capacity and the ability to provision resources to meet their business demands. Capacity Reservations are billed at the On-Demand rate, but they can be combined with Savings Plans or Regional Reserved Instances so that the reserved capacity receives those discounted rates.

Benefits of Capacity Reservations

Capacity Reservations can offer a number of benefits to AWS customers. First, they provide customers with the ability to reserve capacity for a particular instance type, operating system, and Availability Zone for a longer period of time, ensuring that applications can run on the same instance type and OS for that duration. Secondly, when combined with Savings Plans or Regional Reserved Instances, the reserved capacity is billed at the discounted rate rather than the full On-Demand rate. Lastly, Capacity Reservations give customers more flexibility in provisioning resources, since capacity can be reserved ahead of time and used when needed.

How to Utilize Capacity Reservations for Business-Critical Workloads

Capacity Reservations can be used to manage business-critical workloads more effectively. When utilizing Capacity Reservations, customers should consider the following (a minimal API sketch follows the list):

  • Understand the business requirements: Customers should understand their business requirements and the capacity that is required to meet these requirements. This will help them determine the duration and the type of Capacity Reservations to purchase.
  • Choose the right instance type: Customers should choose the instance type that is optimal for their workloads. This will help them ensure that the Capacity Reservations are cost-effective and that their workloads are running with the right performance.
  • Monitor usage: Customers should monitor their usage and adjust their Capacity Reservations accordingly. This will help them utilize their Capacity Reservations more effectively.
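
A minimal sketch of creating a reservation with the AWS SDK for Python (instance type, platform, AZ, and count are illustrative):

```python
# Reserve capacity for ten c5.xlarge Linux instances in one
# Availability Zone. All values are illustrative.
import boto3

ec2 = boto3.client("ec2")

reservation = ec2.create_capacity_reservation(
    InstanceType="c5.xlarge",
    InstancePlatform="Linux/UNIX",
    AvailabilityZone="eu-west-1a",
    InstanceCount=10,
    InstanceMatchCriteria="open",   # running instances with matching
                                    # attributes use the reservation
    EndDateType="unlimited",        # keep until explicitly cancelled
)
print(reservation["CapacityReservation"]["CapacityReservationId"])
```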

Ruby 3.2 Runtime Now Available in AWS Lambda

Developers can now take advantage of the Ruby 3.2 runtime when building applications with AWS Lambda. Ruby 3.2 is now available in Lambda, allowing developers to create and manage their Lambda functions with this runtime. To use Ruby 3.2, developers should make necessary changes for compatibility and specify a runtime parameter value of ruby3.2 when creating or updating their Lambda functions.
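
As a minimal sketch, an existing function can be moved to the new runtime by updating its configuration (the function name is illustrative, and the function code must of course be compatible with Ruby 3.2):

```python
# Switch an existing Lambda function to the Ruby 3.2 runtime.
# The function name is illustrative.
import boto3

lambda_client = boto3.client("lambda")

lambda_client.update_function_configuration(
    FunctionName="orders-processor",
    Runtime="ruby3.2",
)
```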

Implementing Custom Domain Names for Amazon API Gateway Private Endpoints

Customers who need to utilize private endpoints securely with Amazon API Gateway across AWS accounts and within a VPC network can use a reverse proxy with a custom domain name. This solution simplifies the mapping between private endpoints with API Gateway and custom domain names, ensuring secure connectivity. To implement this solution, customers should set up a reverse proxy with a custom domain name that is pointed to the API Gateway endpoint in the customer’s VPC.

Why KeyCore?

At KeyCore, we understand the complexities of AWS and can help you design and implement systems for your business-critical workloads. Our team of AWS experts have the knowledge and experience to help you design your systems for failure and ensure availability, scalability, fault tolerance, and business continuity. We can help you with Capacity Reservations and with implementing custom domain names for your Amazon API Gateway private endpoints using a reverse proxy. Contact us today to learn more about how we can help with your AWS needs.

Read the full blog posts from AWS

AWS for M&E Blog

How AWS Helps Improve Quality of Service in Virtual Linear OTT Channels with Dynamic Ad Insertion

Organizations such as media companies, broadcasters and streaming services are increasingly using virtual linear OTT channels to deliver both video on demand (VOD) and live streaming video to their audiences. To ensure a good quality of service, they need to carefully manage these channels. AWS Elemental MediaTailor helps them do just that, by providing the ability to define a channel schedule and ensure the highest quality of service for viewers.

Creating Virtual Linear OTT Channels with AWS Elemental MediaTailor

AWS Elemental MediaTailor enables the creation of virtual linear OTT channels with a clear and simple channel scheduling process. Channels can be created using VOD assets, live streaming assets, or a mix of both. Once the schedule is defined, MediaTailor automatically applies the appropriate settings for each asset, ensuring a high quality of service at low latency.

In addition, MediaTailor makes it possible to insert dynamic ads into each asset in the channel, without the need for manual intervention. This ensures that viewers get an engaging viewing experience, while the channel owner maximizes the revenue generated from the ads.

Improving Quality of Service with Custom Metrics

To ensure that viewers have the best possible experience, MediaTailor also provides a variety of custom metrics that can be used to monitor the network for quality of service (QoS). These metrics measure things like session start time, playback time, and latency, as well as any errors or data loss that might occur. Using these metrics, organizations can identify and address any issues that might be impacting the quality of their video streams.

Sustainability Through Remote Production Using AWS

Sustainability is becoming increasingly important in the media and entertainment industry. To help organizations reduce their carbon footprint, AWS provides a range of services that can be used to facilitate remote production. This includes services such as AWS Elemental MediaLive, which can be used to securely deliver video streams to remote locations, and AWS Elemental MediaStore, which provides a secure storage solution for media assets.

Using AWS for remote production allows organizations to reduce their carbon emissions while still delivering high quality content to their viewers.

Opportunity Analysis Allows for New Insight into the Quality of Shots

The NHL EDGE IQ powered by Amazon Web Services (AWS) has introduced a new analytic, called Opportunity Analysis, which uses machine learning to analyze millions of historical and real-time data points to show fans how difficult a shot was at the moment of release. By analyzing various parameters, such as speed, angle, and location, the Opportunity Analysis can provide viewers with an estimate of the difficulty of a shot, calculated within seconds of the play.

KeyCore Can Help Improve Quality of Service with AWS

KeyCore is the leading Danish AWS consultancy, offering both professional services and managed services to help organizations improve the quality of service for their virtual linear OTT channels with dynamic ad insertion. Our advanced knowledge of AWS enables us to provide organizations with tailored solutions that address their specific needs, while also helping them reduce their carbon emissions with our remote production services.

Read the full blog posts from AWS

AWS Storage Blog

Making the Right Choices for Cloud Native CI/CD on Amazon EKS

Building and testing software can be a resource-intensive task that involves powerful servers waiting for build jobs. With the adoption of cloud native CI/CD (Continuous Integration/Continuous Delivery) systems on Kubernetes, there is now a shift away from the large, and often over-provisioned, static fleets of build servers.

Choosing the Right Storage Solution

Choosing the right storage solution for running CI/CD on Amazon EKS is key for successful deployments. AWS offers a variety of storage solutions, each with its own set of features and benefits:

  • Amazon Elastic Block Store (EBS) – block-level storage that provides persistent volumes for use with Amazon EC2 instances. EBS delivers consistent, low-latency performance, but is often more expensive than other storage solutions.
  • Amazon S3 – highly available, low-cost object storage for objects such as files, images, videos, and application data.
  • Amazon EFS – a simple, scalable, fully managed file system for use with AWS resources.
  • Amazon FSx for NetApp ONTAP – an enterprise-grade file storage system that is highly available, scalable, and durable.

Disabling Access Control Lists (ACLs)

Access control lists (ACLs) are used to define user access, and the operations users can take on specific resources. Amazon S3 was launched in 2006 using ACLs as the primary authorization mechanism. While ACLs are still supported, Amazon S3 now recommends using AWS Identity and Access Management (IAM) policies for managing access to S3 buckets. Disabling ACLs can help reduce the attack surface and improve the security of Amazon S3 workloads.
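
As a minimal sketch, ACLs can be disabled on a bucket by enforcing bucket-owner object ownership, after which IAM and bucket policies alone govern access (the bucket name is illustrative):

```python
# Disable ACLs on a bucket by enforcing bucket-owner ownership so that
# IAM and bucket policies control access. Bucket name is illustrative.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_ownership_controls(
    Bucket="my-data-lake-bucket",
    OwnershipControls={
        "Rules": [{"ObjectOwnership": "BucketOwnerEnforced"}]
    },
)
```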

Why S&P Global Chose Amazon FSx for NetApp ONTAP

For organizations looking to build high availability and disaster recovery (HADR) solutions for their SQL Server infrastructure, Amazon FSx for NetApp ONTAP is an excellent choice. This enterprise-grade file storage system is highly available, scalable, and durable, making it perfect for mission-critical workloads. It is also backed by AWS’ secure, cloud-native infrastructure and provides the agility and flexibility needed for modern analytics approaches.

Data Lake Protection Best Practices

Data lakes, powered by Amazon S3, provide organizations with the availability, agility, and flexibility needed for modern analytics. Protecting sensitive or business-critical data stored in these S3 buckets is a top priority. To help simplify this process, AWS Backup for Amazon S3 can be used to centrally automate the creation and management of backup policies.

Enhancing Upstream Workloads with Amazon FSx for NetApp ONTAP

Organizations in the Upstream Energy industry often face an operational burden of copying data to multiple solutions for different protocols. With Amazon FSx for NetApp ONTAP, this is no longer necessary, as it provides a single, unified file storage system with high performance and low latency. This makes it well suited for geoscience (G&G) workloads such as Reservoir Simulation, Subsurface Interpretation, and Drilling and Completions.

Massive Cross-Region Data Migration with Amazon S3 Batch Replication

Kurtosys provides secure, cloud-based solutions designed to make the lives of their clients easier. To help them quickly complete a massive cross-region data migration, they used Amazon S3 Batch Replication. This allowed them to replicate data across multiple Regions, while maintaining the security, durability, and availability of their data.

Simplifying Amazon EBS Volume Migration and Modifications on Kubernetes

When running critical applications in containers, access to a persistent storage layer is often needed. Amazon EBS provides high performance, low latency, and a persistent storage layer which ensures that data can be re-attached to different container instances. To help simplify the process, the Amazon EBS Container Storage Interface (CSI) driver can be used to easily migrate and modify Amazon EBS volumes on Kubernetes.

KeyCore Can Help

At KeyCore, we understand the importance of choosing the right storage solution for CI/CD on Amazon EKS. Our team of experienced AWS professionals can help you choose the best storage solution for your deployment, and help you implement it. We also offer managed services for Amazon S3 and Amazon EBS, so we can help you ensure your data is protected and available. Contact us today to learn more.

Read the full blog posts from AWS

AWS Developer Tools Blog

Introducing Amazon S3 Checksums in the AWS SDK for Kotlin

AWS has announced support for Amazon S3 checksums in the AWS SDK for Kotlin. Checksums play an important role in maintaining data integrity during data transfers, and the new SDK release allows developers to quickly and easily configure these checksums.

What is a Checksum?

Checksums are fingerprints calculated from a set of data. They are used to check whether the data has been changed or corrupted during a transfer. Checksums are an invaluable precaution for developers to have in place to ensure the accuracy of the data they are dealing with.

How Does the SDK Help?

The AWS SDK for Kotlin simplifies the process of configuring checksums in S3. In the past, developers had to manually calculate and configure checksums. This process was time-consuming and could be prone to errors. With the SDK, developers can now take advantage of automated checksums for S3 transfers.
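
Since the examples in this summary use Python, here is a hedged sketch of the equivalent flow in boto3; the AWS SDK for Kotlin exposes an analogous checksum configuration (bucket, key, and contents are illustrative):

```python
# Ask the SDK/S3 to compute and store a SHA-256 checksum on upload, and
# retrieve it later for integrity checks. Names are illustrative.
import boto3

s3 = boto3.client("s3")

s3.put_object(
    Bucket="my-bucket",
    Key="reports/q2.csv",
    Body=b"order_id,amount\n1,42\n",
    ChecksumAlgorithm="SHA256",
)

# Read back the stored checksum
attrs = s3.get_object_attributes(
    Bucket="my-bucket",
    Key="reports/q2.csv",
    ObjectAttributes=["Checksum"],
)
print(attrs["Checksum"]["ChecksumSHA256"])
```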

What are the Benefits of Checksums?

Checksums offer several benefits for developers. First, they help to ensure data accuracy and integrity by allowing developers to detect errors such as data corruption. Furthermore, they reduce the amount of time spent manually verifying data accuracy. Finally, checksums provide an extra layer of protection, since any change to the data produces a mismatched checksum that can be detected.

How Can KeyCore Help?

KeyCore can help developers make the most of the new AWS SDK for Kotlin. Our team of experienced AWS professionals can provide guidance on the best ways to configure checksums and ensure data accuracy and security. We can also help developers deploy their applications in the cloud and ensure that their systems are running smoothly. Contact us today to learn more about how we can help.

Read the full blog posts from AWS

AWS Architecture Blog

How to Simulate and Design Architectures for Multi-Tenancy on AWS

Simulating Kubernetes-Workload AZ Failures with AWS Fault Injection Simulator

Ensuring applications function correctly, even during infrastructure failures such as an entire Availability Zone (AZ) becoming unavailable, is essential for cloud environments like AWS. Kubernetes workloads running on AWS often use multiple AZs for high availability and fault tolerance. To simulate failures and make sure applications can handle them gracefully, AWS provides a Fault Injection Simulator (FIS).

FIS is a tool that helps customers test their applications’ resiliency in a controlled way. It lets customers inject faults at the AZ level, simulating the complete removal of certain AZs from the system, which helps customers understand how their applications behave during an AZ failure. FIS can be used with Kubernetes-based workloads via the AWS Cloud Development Kit (CDK).

With the CDK, customers can define FIS experiments against an existing Kubernetes cluster running on AWS. This allows them to inject faults into the cluster and verify that their application handles the failures gracefully. The CDK also lets customers define the experiment’s fault actions and targets, as well as the period of time for which the fault is injected, making it easy to model a range of failure scenarios. A minimal sketch of kicking off such an experiment follows.
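
The blog wires this up with the CDK; as a hedged alternative sketch, a previously created experiment template (hypothetical ID) can also be started programmatically with the AWS SDK for Kotlin's FIS client:

```kotlin
import aws.sdk.kotlin.services.fis.FisClient

// Starts a previously defined FIS experiment, e.g. one that disrupts
// network connectivity for the subnets in a single Availability Zone.
suspend fun runAzFailureExperiment() {
    FisClient.fromEnvironment().use { fis ->
        val response = fis.startExperiment {
            experimentTemplateId = "EXT1a2b3c4d5" // hypothetical template ID
        }
        println("Started experiment: ${response.experiment?.id}")
    }
}
```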

Designing Architectures For Multi-Tenancy

Multi-tenancy is a critical factor for software-as-a-service (SaaS) providers, as they are responsible for safely isolating tenant data. Architects and developers must understand architectural patterns for multi-tenancy to deliver scalable, secure, and cost-effective solutions.

When designing a multi-tenant architecture for AWS, there are three main approaches: single-tenant (often called the silo model), shared multi-tenant (the pool model), and hybrid (the bridge model). The single-tenant approach involves creating a completely isolated environment for each customer. This provides the highest level of security and isolation, but may result in higher costs.

The multi-tenant approach involves sharing resources among multiple tenants in a single environment. This reduces costs, but also increases complexity and the risk of data leakage. The hybrid approach combines the single tenant and multi-tenant approaches, creating a multi-tenant environment while also offering isolated environments for specific customers.

To keep each tenant’s data protected, AWS recommends enforcing isolation boundaries at every layer of the architecture and encrypting tenant data at rest and in transit. For resilience, data can additionally be replicated across multiple Regions, allowing tenants to reach their data from another Region in the event of a service disruption. One common isolation technique is sketched below.
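
A minimal sketch of run-time tenant isolation, assuming a shared role and an S3 prefix-per-tenant layout (role ARN, bucket, and layout are hypothetical): each request is served with credentials scoped down to a single tenant via an STS session policy.

```kotlin
import aws.sdk.kotlin.services.sts.StsClient

// Returns temporary credentials that can only touch this tenant's prefix.
suspend fun tenantScopedCredentials(tenantId: String) =
    StsClient.fromEnvironment().use { sts ->
        sts.assumeRole {
            roleArn = "arn:aws:iam::123456789012:role/TenantAccessRole" // hypothetical
            roleSessionName = "tenant-$tenantId"
            // The session policy can only narrow the role's permissions.
            policy = """
                {
                  "Version": "2012-10-17",
                  "Statement": [{
                    "Effect": "Allow",
                    "Action": "s3:GetObject",
                    "Resource": "arn:aws:s3:::saas-data/$tenantId/*"
                  }]
                }
            """.trimIndent()
        }.credentials
    }
```

Because a session policy can only reduce what the role allows, even a bug in application code cannot reach another tenant's prefix with these credentials.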

KeyCore’s Expertise

At KeyCore, we have extensive experience in helping our clients design and deploy multi-tenant architectures on AWS. Our team of AWS experts can help you identify the best approach for your business and develop a reliable solution that meets your security and scalability needs.

Our managed services can help you keep your multi-tenant environment running smoothly and securely. We offer a full range of services, including monitoring and logging, deployment automation, and incident response. We also offer professional services to help you set up your multi-tenant environment, configure security controls, and optimize your costs.

For more information about our services, please visit https://www.keycore.dk.

Read the full blog posts from AWS

AWS Partner Network (APN) Blog

APN Blog – How to Leverage AWS to Enhance Financial Control, Cybersecurity Insights, Virtual Care, Database Migration, Private Connectivity, Sensor Data Ingestion, Continuous Compliance Solutions, Identity and Permissions System, Data Management, Image Manipulation Workflows, and Drug Analyzer

Optimizing Financial Control with AWS

Ganit helps clients reduce time spent on reports and instead drive action for improvement in top- and bottom-line numbers. As an example, Ganit helped a financial control division of an Indian subsidiary of a large apparel manufacturer to optimize expenses across divisions and ensure functions operated efficiently with a data lake and data warehouse deployed on AWS.

Delivering Comprehensive Cybersecurity Insights on AWS

Tenable One Exposure Management Platform helps organizations view their attack surface and vulnerabilities to prevent likely attacks, accurately communicate cyber risk, and gain business insights from an exposure management platform. Tenable uses AWS to ingest data from multiple sources and transform it into a single standard structure.

Harnessing the Power of AI/ML and Cloud Services for Virtual Care

Virtual care is a key tool in shifting healthcare toward desired patient outcomes. The HARMAN Intelligent Healthcare Platform leverages AI/ML, cloud services, and data to unlock value across data, analytics, intelligence, and governance functions in a secure, cost-effective, and privacy-preserving way. It improves customer experience and engagement through predictive analytics and actionable insights.

Migrating to CockroachDB with AWS DMS

CockroachDB is a cloud-native, distributed SQL database designed for applications with data-intensive workloads. AWS Database Migration Service (AWS DMS) can help migrate data to CockroachDB without downtime or data loss, and the post walks through an example migration in detail.

Establishing Private Connectivity with AWS PrivateLink for TiDB Cloud

AWS PrivateLink can connect a customer’s VPC to TiDB Cloud services on AWS as if they were running in the customer’s own VPC, without requiring VPC peering. This post provides guidance on using AWS PrivateLink to build trusted, secure private connectivity between customer workloads and TiDB Cloud.

Ingesting Sensor Data with AWS IoT Core

AWS IoT Core enables customers to ingest sensor data for a wide variety of use cases. Data can be streamed into a data lake, where insights can be gathered using AWS machine learning services. Noser Engineering used this approach to build a general digital twin platform capable of adapting to rapid changes by integrating many different sensor types.

Accelerating SaaS Providers’ Journey to Compliance with Drata

Drata’s continuous, automated compliance solutions can help accelerate SaaS providers’ journey to compliance. They make it possible to monitor diverse workloads across multiple accounts and customize the controls needed for the chosen compliance framework.

Katanemo Simplifies Onboarding and Builds Safety Features for SaaS on AWS

Katanemo’s identity and fine-grained permissions system enables developers to effortlessly onboard customers and build modern safety features for their SaaS applications on AWS. Additional posts and guides by Katanemo are available on how tenants can set up SSO, and how developers can build ABAC authorization and meet compliance requirements.

CloudCover Data Pipes Simplify Data Management on AWS

CloudCover Data Pipes provide a cloud-native data management platform for organizations to gain insights from their data. Leveraging AWS services, it can help collect, transform, and make data reusable.

Scaling Image Manipulation Workflows with Adobe Photoshop API

Adobe Photoshop APIs allow for powerful integrations and scalability of content workflows. Organizations can now get Photoshop technology in the cloud and use different AWS services to build scalable image manipulation workflows.

Drug Analyzer on AWS Informs Treatment Decisions and Supports Therapies

Drug Analyzer, a commercial launch application powered by AWS, provides life sciences organizations with data insights that inform treatment decisions and support the development of new therapies. It is built on Change Healthcare’s security-enabled analytic cloud service for persistent compliance monitoring and highly customizable analytics capabilities.

KeyCore’s Role in Leveraging AWS

At KeyCore, we provide professional and managed services to help customers get the most out of their data and AWS services. Our experienced consultants, cloud engineers, and data scientists can help with every scenario covered above, from deploying business intelligence systems and migrating data and workloads, to establishing private connectivity, ingesting sensor data, accelerating compliance journeys, and building secure multi-tenant SaaS features. Contact us today to learn more about how KeyCore can help your organization.

Read the full blog posts from AWS

AWS HPC Blog

Unlock Policy Design with Agent-Based Computational Economics (ACE)

Economists are constantly striving to provide insights into how economic systems work. Agent-Based Computational Economics (ACE) is a powerful tool that provides economists with the flexibility to customize and adjust the behavior of agents in a range of economic systems. With extreme scale computing, ACE can be used to better understand economic behavior and inform policy decisions.

What is Agent-Based Computational Economics?

Agent-Based Computational Economics (ACE) is a simulation tool used by economists to model the behavior of agents in a wide variety of economic systems. ACE is based on the principle of agent-based modeling (ABM), which is a simulation approach used to study the behavior of entities within a system. With ACE, economists set parameters to define the behavior of agents in an economic system and then observe how the economic system responds to changes in those parameters.
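
As a toy sketch of the idea (real ACE models run millions of heterogeneous agents on HPC clusters, and all names and numbers below are invented for illustration), the simulation sweeps a single policy parameter, a tax rate, and observes how aggregate demand from simple price-sensitive agents responds:

```kotlin
import kotlin.random.Random

// Each agent has a private willingness to pay; it buys when that exceeds
// the perceived (taxed) price. Aggregate demand emerges from the agents.
data class Agent(val willingnessToPay: Double)

fun simulateDemand(agents: List<Agent>, basePrice: Double, tax: Double): Int {
    val perceivedPrice = basePrice * (1 + tax)
    return agents.count { it.willingnessToPay > perceivedPrice }
}

fun main() {
    val rng = Random(42)
    val agents = List(100_000) { Agent(willingnessToPay = rng.nextDouble(0.0, 2.0)) }
    // Sweep the policy parameter and observe the system's response.
    for (tax in listOf(0.0, 0.05, 0.10, 0.20)) {
        println("tax=$tax demand=${simulateDemand(agents, basePrice = 1.0, tax = tax)}")
    }
}
```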

The Benefits of Extreme Scale Computing

Extreme scale computing is a powerful tool for evaluating the behavior of agents in an ACE simulation. With extreme scale computing, large numbers of agents can be simulated in a fraction of the time required by traditional computing models. This allows economists to analyze the impact of changes to system parameters with greater accuracy in shorter amounts of time.

Using ACE for Policy Design

By combining extreme scale computing with ACE simulations, economists can gain valuable insights into how economic systems work and how policy changes might impact them. This can help governments and policymakers make more informed decisions on economic policy and ensure that policies remain effective over the long term.

KeyCore: Your AWS Solutions Provider

At KeyCore, we are experienced in designing and developing cloud-based solutions for a wide range of businesses. Our team of AWS experts can help you develop and deploy ACE simulations using extreme scale computing, so you can gain valuable insights into your economic system. With our knowledge and expertise, we can help you make informed decisions and ensure that your policy design is optimized for success. To learn more about how we can help you, contact us today.

Read the full blog posts from AWS

AWS Cloud Operations & Migrations Blog

Maintaining Resilience with AWS Elastic Disaster Recovery

Organizations must maintain application and data resilience against an ever-evolving risk landscape that includes ransomware attacks, natural disasters, user error, and hardware faults. To be able to recover workloads with minimal data loss, organizations must invest in an effective disaster recovery (DR) strategy. AWS Elastic Disaster Recovery (AWS DRS) is a service that helps organizations achieve cost-effective DR.

Understanding AWS Elastic Disaster Recovery

AWS DRS is a cost-effective and efficient DR solution that allows organizations to protect and recover their applications and data. With AWS DRS, organizations continuously replicate workloads into a low-cost staging area in the cloud and can recover them quickly in the event of a disaster.

AWS DRS provides a range of features aimed at helping organizations build a robust and reliable DR strategy, including continuous block-level data replication, point-in-time recovery, failover into another AWS Region, and non-disruptive recovery drills.

Testing AWS Elastic Disaster Recovery

Organizations must test their AWS DRS setup to ensure that it works as intended. The best way to do this is to simulate a disaster and observe how the system responds. This can be done by running a series of tests that involve shutting down infrastructure, simulating a network failure, or simulating a system-wide outage.

When testing AWS DRS, organizations should confirm that replication is healthy, that data is up to date in the staging area, and that workloads can successfully fail over to the recovery Region. Organizations should also rehearse the recovery process itself with regular drills, as this helps ensure that workloads can be recovered quickly and with minimal data loss; a sketch of launching a drill follows.
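
A minimal sketch of launching such a drill programmatically, assuming the AWS SDK for Kotlin's DRS client (the property names below mirror the service's StartRecovery API and are illustrative rather than verified):

```kotlin
import aws.sdk.kotlin.services.drs.DrsClient
import aws.sdk.kotlin.services.drs.model.StartRecoveryRequestSourceServer

// Launches a non-disruptive recovery drill: drill instances are created
// in the recovery Region while production keeps running untouched.
suspend fun startRecoveryDrill(serverId: String) {
    DrsClient.fromEnvironment().use { drs ->
        val response = drs.startRecovery {
            isDrill = true
            sourceServers = listOf(
                // Field name per the StartRecovery API; exact SDK property
                // naming is an assumption.
                StartRecoveryRequestSourceServer { sourceServerId = serverId }
            )
        }
        println("Recovery drill job: ${response.job}")
    }
}
```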

Provisioning Products with ServiceNow

Organizations can use ServiceNow to manage incidents, track scheduled and planned infrastructure changes, manage new service requests, and track configuration items across IT systems. ServiceNow can also be used to provision new instances in AWS, as well as to raise patch change requests.

To provision products in AWS using ServiceNow, organizations must first configure the ServiceNow platform to communicate with their AWS account. This is done by creating a ServiceNow integration user and providing ServiceNow with the required security credentials. Once this is done, organizations can use ServiceNow to provision products, such as Amazon EC2 instances, and to raise patch change requests.

How KeyCore Can Help

At KeyCore, we are experts in AWS and can help organizations develop an effective AWS DRS strategy. We can help organizations develop a robust testing plan for their AWS DRS setup, as well as assist with the provisioning of products and raising of patch change requests through ServiceNow. Our team of AWS certified professionals has extensive experience developing and implementing cloud-based solutions and can help organizations ensure their DR strategy is effective and reliable.

Read the full blog posts from AWS

AWS for Industries

Achieve Sustainability in Manufacturing with AWS Databases

Manufacturing organizations face growing expectations to pursue profitability and sustainability at the same time, as consumers and corporate purchasers increasingly take carbon footprints into account in their buying and investment decisions. Industry 4.0 solutions built on AWS databases can help organizations reach their sustainability goals by improving asset performance, streamlining processes, and reducing energy consumption.

AWS databases help manufacturers to optimize their assets and processes, and create detailed visualizations of their data. This allows the organization to identify potential issues and make real-time decisions to improve efficiency. Furthermore, it enables them to store and access data from multiple locations, allowing them to quickly react to changes in production and customer orders.

AWS databases also allow manufacturing organizations to reduce their energy costs. Combining analytics with machine learning makes it possible to predict energy usage and optimize energy consumption. Organizations can also use AWS to deploy energy-efficient solutions, create a connected factory, and reduce operational costs.

In addition, AWS databases enable manufacturers to analyze production data in a secure and compliant environment. This allows them to understand the performance of their products, identify areas for improvement, and ensure compliance with industry standards, which in turn increases customer satisfaction and reduces the risk of product recalls.

Executive Conversations: Building the Brain Knowledge Platform, with Shoaib Mufti, Data and Technology Lead at the Allen Institute for Brain Science

Shoaib Mufti, Head of Data and Technology at the Allen Institute for Brain Science, joined Lisa McFerrin, Worldwide Lead for Research, Discovery, and Translational Medicine at Amazon Web Services (AWS), to discuss how the Allen Institute is using the cloud to build the Brain Knowledge Platform (BKP) for the U.S. National Institutes of Health (NIH). BKP is an integrated platform for discovering, exploring, and sharing knowledge about the brain. It combines high-throughput data and machine learning to support research and discovery about the brain.

AWS enables the Allen Institute to store and process the large amounts of data they are collecting from the NIH, and to quickly develop and deploy machine learning algorithms. This enables the Allen Institute to identify patterns in the data and generate insights into brain processes. Additionally, AWS helps the institute to develop data visualizations and share their data with the scientific community. This helps researchers to understand the biological processes in the brain and advance the development of treatments for neurological diseases.

New FHIR API capabilities on Amazon HealthLake help customers accelerate data exchange and meet ONC and CMS interoperability and patient access rules

Healthcare and life sciences organizations generate large amounts of structured and unstructured health data on an hourly basis, and secure exchange and use of this data is essential. The Fast Healthcare Interoperability Resources (FHIR) standard enables the secure exchange and use of this data, and can lead to better clinical decisions, clinical trials, and operational efficiency.

Amazon HealthLake provides organizations with the tools to process and store FHIR data, and is compliant with the standards set by the ONC and CMS. It also supports the development of FHIR APIs, which makes it easier for organizations to share data with external systems, such as payers, providers, and government entities. Additionally, Amazon HealthLake enables organizations to comply with patient access rules, and provide patients with access to their own health data.

How We Build This on AWS: Zafiro by Entertainment Solutions

Entertainment Solutions has been providing interactive connectivity solutions to hotels, hospitals, stadiums, and airports since 2006. They use AWS to provide their services and have a presence in over 100,000 rooms in 60 countries.

AWS enables Entertainment Solutions to quickly and securely deploy their solutions on a global scale. AWS’s comprehensive suite of services also provides the company with the tools to process payments, manage customer accounts, and create customer experiences. Furthermore, AWS’s secure storage solutions enable Entertainment Solutions to store and protect customer data.

How AWS is helping thredUP revolutionize the resale model for brands

Retailers are struggling to address the issue of product returns, which are estimated to cost companies $50 billion in the United States. thredUP, an online resale platform that powers resale for brands and retailers such as Macy’s, is using AWS to revolutionize the resale model for brands.

AWS provides thredUP with the tools to securely store and process data, and develop predictive analytics models. This enables thredUP to identify trends in customer behavior and better manage inventory. Furthermore, AWS enables thredUP to quickly develop and deploy new features, such as their resale marketplace.

AWS also provides thredUP with the scalability to expand their services to more customers, and the tools to quickly respond to changing customer demands. This enables thredUP to reduce waste and increase profits for their customers.

The Splunk Immersive Commerce Experience powered by AWS launches in London

The Splunk Immersive Experience center (SIE) in London gives retailers and consumer brands an inspirational, discovery-driven environment for creating memorable customer experiences. Powered by AWS, the SIE brings retail data together in an immersive setting, providing customers with real-time views into customer behavior, demand forecasting, supply chain optimization, and more.

AWS provides Splunk with the tools to process and store large amounts of data, as well as the scalability to expand their services as needed. Additionally, AWS’s suite of services enables Splunk to develop interactive experiences and deploy new features quickly.

Treasure Data’s Customer Data Platform helps Danone Indonesia nourish the future

Danone Indonesia has been committed to sustainable and equitable nutrition since 1954, and is using Treasure Data’s Customer Data Platform (CDP), powered by AWS, to support their mission.

Treasure Data’s CDP allows Danone to collect, store, analyze, and act on customer data from multiple sources. This helps Danone to identify customer trends and quickly develop personalized marketing strategies. Furthermore, AWS provides Treasure Data with the scalability to process large amounts of data, and the security to protect customer data.

Transform supply chain planning into your competitive advantage with Anaplan and Amazon Forecast

Consumer goods companies and retailers are facing increasing pressure to optimize their supply chain and reduce waste. Anaplan and Amazon Forecast provide organizations with the tools to transform their supply chain planning process and gain a competitive advantage.

Anaplan enables organizations to develop predictive models and quickly respond to changing customer demands. Amazon Forecast allows organizations to develop accurate forecasts based on historical data, machine learning, and deep learning. Additionally, Amazon Forecast enables organizations to reduce forecasting errors and improve accuracy.

AWS provides Anaplan and Amazon Forecast with the scalability to process large amounts of data, and the security to protect customer data. This allows organizations to gain insights into their supply chain and make informed decisions quickly.

VTEX Industry Research: Top 3 Investments for Digital Commerce 2023

The past three years have been some of the most tumultuous in the history of retail, and a recent survey by Publicis Sapient revealed the top three investments for digital commerce in 2023.

The survey revealed that the top three investments for digital commerce in 2023 are AI and machine learning, advanced analytics, and cloud infrastructure. AI and machine learning will be used to develop personalized customer experiences and improve customer service. Advanced analytics will be used to gain insights into customer behavior and develop predictive models. Finally, cloud infrastructure will be used to process and store large amounts of data and quickly deploy new features.

AWS provides organizations with the tools to develop AI and machine learning models, advanced analytics, and cloud infrastructure. Additionally, AWS enables organizations to quickly deploy new features and securely store and process customer data.

Harnessing the Power of Generative AI for Automotive Technologies on AWS

Generative AI is enabling software-defined vehicles and autonomous mobility across the automotive industry. With the ready availability of scalable compute capacity, the massive proliferation of data, and the rapid advancement of ML technologies, customers across the automotive and manufacturing industries are transforming their businesses with AWS machine learning (ML) capabilities.

AWS provides organizations with the tools to process and store large amounts of data, and develop predictive models. Additionally, AWS’s suite of services enables organizations to develop generative AI models and quickly deploy new features. This helps organizations to gain insights into customer behavior and develop personalized experiences.

At KeyCore, our team provides professional services and managed services to help customers harness the power of generative AI and machine learning. Our experienced team of AWS Consultants has the expertise to support customers in developing and deploying AI and machine learning models. We can also help with the development of cloud infrastructure solutions, such as data lakes and data warehouses, and provide guidance on how to securely store and process customer data.

Read the full blog posts from AWS

AWS Messaging & Targeting Blog

Maintaining a Healthy Email Database

Having a well-maintained email database is essential for businesses that want to communicate with their customers. Emails are used for marketing campaigns, customer service updates, and important announcements. However, managing an email database is more than just storing email addresses. Companies should also ensure that all emails sent are in compliance with the CAN-SPAM Act, as well as any other applicable laws, such as GDPR.

Keeping Your Email Database Compliant

The CAN-SPAM Act defines rules that companies must follow when sending commercial email. These include requirements such as including a valid physical address in the email, providing a clear way for recipients to unsubscribe, and honoring opt-out requests promptly. The GDPR goes further and requires companies to obtain explicit consent from customers before emailing them. Companies should also regularly clean their email database and remove any invalid or unsubscribed addresses; a sketch of automating that last step follows.
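
A minimal sketch, assuming the Amazon SES v2 API's account-level suppression list is in use: when a recipient unsubscribes or hard-bounces, the address is suppressed so no future mail is sent to it.

```kotlin
import aws.sdk.kotlin.services.sesv2.SesV2Client
import aws.sdk.kotlin.services.sesv2.model.SuppressionListReason

// Adds an address to the SES account-level suppression list.
suspend fun suppressAddress(email: String) {
    SesV2Client.fromEnvironment().use { ses ->
        ses.putSuppressedDestination {
            emailAddress = email
            reason = SuppressionListReason.Bounce // or Complaint for opt-outs
        }
    }
}
```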

Accelerated Mobile Pages with SES

Accelerated Mobile Pages (AMP) can be used with Amazon Simple Email Service (SES) to increase email engagement. AMP emails support interactive elements and dynamic content, and can be customized with colors, logos, and images. Using the SES API, companies can also send personalized emails based on customer preferences and interests. KeyCore can help businesses configure AMP with SES; a minimal example of sending an AMP email is sketched below.
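
A hedged sketch with the SES v2 API: an AMP email is a raw MIME message whose text/x-amp-html part carries the interactive markup. Addresses and markup here are placeholders, the sending identity must be verified in SES, and production MIME should use CRLF line endings.

```kotlin
import aws.sdk.kotlin.services.sesv2.SesV2Client

suspend fun sendAmpEmail() {
    val boundary = "ampBoundary42"
    val mime = """
        From: sender@example.com
        To: recipient@example.com
        Subject: AMP email demo
        MIME-Version: 1.0
        Content-Type: multipart/alternative; boundary="$boundary"

        --$boundary
        Content-Type: text/plain; charset=UTF-8

        Fallback text for clients that do not render AMP.
        --$boundary
        Content-Type: text/x-amp-html; charset=UTF-8

        <!doctype html><html amp4email><head><meta charset="utf-8">
        <script async src="https://cdn.ampproject.org/v0.js"></script>
        <style amp4email-boilerplate>body{visibility:hidden}</style></head>
        <body>Hello from an interactive AMP email!</body></html>
        --$boundary--
    """.trimIndent()

    SesV2Client.fromEnvironment().use { ses ->
        ses.sendEmail {
            fromEmailAddress = "sender@example.com"
            destination { toAddresses = listOf("recipient@example.com") }
            content { raw { data = mime.toByteArray() } }
        }
    }
}
```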

Read the full blog posts from AWS

The latest AWS security, identity, and compliance launches, announcements, and how-to posts.

The Latest AWS Security, Identity, and Compliance Launches and Announcements

Using the Hosted UI or Creating a Custom UI in Amazon Cognito

Amazon Cognito is an authentication, authorization, and user management service for web and mobile applications. AWS provides a hosted UI, or users can create a custom UI to authenticate their users with native accounts within Amazon Cognito or through third-party social logins such as Facebook, Amazon, or Google. Federation can also be configured through a third-party OpenID Connect provider. Once authenticated, Amazon Cognito securely stores user profiles and access tokens. A custom UI typically calls the Cognito APIs directly, as sketched below.
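
A minimal sketch of custom-UI sign-in against a user pool, assuming an app client with the USER_PASSWORD_AUTH flow enabled (IDs and credentials are placeholders):

```kotlin
import aws.sdk.kotlin.services.cognitoidentityprovider.CognitoIdentityProviderClient
import aws.sdk.kotlin.services.cognitoidentityprovider.model.AuthFlowType

// Signs a user in and returns the access token on success.
suspend fun signIn(clientId: String, username: String, password: String): String? =
    CognitoIdentityProviderClient.fromEnvironment().use { cognito ->
        val response = cognito.initiateAuth {
            this.clientId = clientId // hypothetical app client ID
            authFlow = AuthFlowType.UserPasswordAuth
            authParameters = mapOf("USERNAME" to username, "PASSWORD" to password)
        }
        response.authenticationResult?.accessToken
    }
```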

Temporary Elevated Access Management with IAM Identity Center

AWS recommends using automation to keep people away from systems, but some operations might require access by human users. For these operations, special treatment such as temporary elevated access, also known as just-in-time access, might be necessary. To enable this, AWS Identity and Access Management (IAM) Identity Center allows customers to easily manage, control, and audit access to AWS resources.

The AWS Security Profile Series

The AWS Security Profile Series interviews AWS thought leaders about cloud security. In one such interview, Matt Campagna, Senior Principal, Security Engineering at AWS Cryptography, shared his thoughts on data protection, cloud security, post-quantum cryptography, and more. Valerie Lambert, Senior Software Development Engineer on the AWS Crypto Tools team, was also interviewed and shared her insights on data protection, cloud security, and cryptography tooling.

2023 ISO and CSA STAR Certificates Now Available

To help customers secure their AWS environment, AWS offers digital training, whitepapers, blog posts, videos, workshops, and documentation. AWS recently completed a special onboarding audit with no findings for ISO 9001, 27001, 27017, 27018, 27701, and 22301, and Cloud Security Alliance (CSA) STAR CCM v4.0. Certificates were issued on May 23, 2023. This audit assessed AWS’s risk management program, information security program, and security operations with respect to the AWS Shared Responsibility Model.

Our Commitment to Shared Cybersecurity Goals

AWS is committed to working with the United States Government to achieve their National Cybersecurity Strategy goals. The Strategy outlines an ambitious vision for building a secure future in the United States and around the world. AWS is part of the solution, providing secure cloud computing that enables organizations to take advantage of the benefits of the cloud.

Updated AWS Ramp-Up Guide Available for Security, Identity, and Compliance

The AWS Ramp-Up Guide: Security is designed to help customers learn about security in the cloud. It provides useful information about how to configure and secure AWS services, secure identities, meet compliance requirements, and identify and respond to security incidents. It also helps customers identify the best practices for tracking and monitoring their resources.

KeyCore Can Help

KeyCore offers both professional services and managed services to help customers secure their AWS environment. Our team of AWS certified consultants have expertise in identity and access management, security operations, compliance, and more. We can help customers plan, deploy, and maintain secure and compliant solutions. We can also provide monitoring, alerting, and incident response services to help protect customer data and applications.

Read the full blog posts from AWS

Front-End Web & Mobile

Next.js and AWS AppSync: Using Merged APIs to Simplify Full-Stack React Apps

Next.js is a popular React framework that makes building full-stack React apps incredibly simple. It automates the difficult configuration required for server-side rendering and static site generation, and provides built-in support for styling, routing, and data fetching. Additionally, teams can use Next.js API Routes with AWS Amplify to help manage authentication, analytics, storage, and other feature requirements.

Merging Resources with AWS AppSync

AWS AppSync is a serverless GraphQL service which makes it easy to create, manage, monitor, and secure your GraphQL APIs. Recently, it launched Merged APIs, which enable developers to merge resources, including types, data sources, functions, and resolvers, from multiple source AppSync APIs into a single API. This simplifies the process of managing multiple APIs, and makes it easier to share resources between them. Additionally, teams can use Cross Account Merged APIs with AWS Resource Access Manager (RAM) to allow users with different AWS accounts to access the same API.

How KeyCore Can Help

At KeyCore, we specialize in helping companies build and deploy their applications using the latest AWS technologies. Our experts are highly experienced in helping teams get the most out of Next.js and AppSync, and can provide assistance with setting up Merged APIs that make the most of both systems. We also provide professional and managed services to help design, deploy, and maintain your application. To learn more about how KeyCore can help you make the most of your application, please visit our website at www.keycore.dk.

Read the full blog posts from AWS

Innovating in the Public Sector

At the AWS Public Sector Partner Forum, Jeff Kratz highlighted how AWS is inventing and simplifying for AWS Partners, and innovating on behalf of customers. More than 350 AWS Partners attended the forum, with thousands more located around the world. On the same day, the NYUMets team, led by Dr. Eric Oermann at NYU Langone Medical Center, collaborated with AWS Open Data, NVIDIA, and Medical Open Network for Artificial Intelligence (MONAI) to make the largest metastatic cancer dataset available at no cost to researchers worldwide. This was made possible by the AWS Open Data Sponsorship Program.

Professors Monica Chiarini Tremblay and Rajiv Kohli of William & Mary’s Raymond A. Mason School of Business detail how Carlos Rivero, the former chief data officer of the Commonwealth of Virginia, created a foundation for data sharing in Virginia powered by multiple AWS solutions. At the AWS Summit Washington DC keynote, Max Peterson, vice president of worldwide public sector at AWS, shared how public sector customers are using cloud technology to make the world a better place. To help government technology (GovTech) startups build solutions for government agencies, AWS launched its first AWS GovTech Accelerator.

AWS also expanded its Partner Transformation Program (PTP) with Targeted Transformation Modules (TTMs), providing topic-specific workshops to help AWS Partners. Deloitte’s Smart Factory Believers Program was expanded to the District of Columbia Public Schools (DCPS), impacting over 1,600 students in the DC metro area. With the help of AWS and Halcyon, the Climate Resilience in Latin America and the Caribbean Fellowship was launched to accelerate solutions that address climate change. Finally, AWS launched new skills and education programs to inspire cloud-curious students and empower girls and women across Virginia and beyond.

At KeyCore, we are your go-to AWS consultancy. Our professional and managed services give you access to a team of experienced AWS certified architects and developers. As a certified AWS Partner, we can help you with your own Partner Transformation Program. Additionally, our experts can help you utilize the same cloud technology used by public sector customers to make the world a better place. Our technical experience and knowledge, combined with our customer-focused approach, will ensure you get the best solutions for your business. Contact us today to learn more about how we can help you with your AWS needs.

Read the full blog posts from AWS

The Internet of Things on AWS – Official Blog

How NVIDIA Isaac Sim and ROS 2 Navigation Simplifies High-Fidelity Simulation on AWS RoboMaker

High-fidelity simulations are becoming increasingly important for developing and testing robots that perform sophisticated tasks in the physical world. Virtual environments with realistic objects, as well as robot models with accurate physics, are crucial for reliable robots. But setting up a simulation, training, and testing environment poses several challenges. Installing and configuring various components such as NVIDIA Isaac Sim and Robot Operating System (ROS) 2 can be time-consuming and complex.

AWS RoboMaker Simplifies Installation and Configuration

AWS RoboMaker simplifies the installation of these components and the configuration of the simulation environment. With RoboMaker, users can start from public container images with NVIDIA Isaac Sim, ROS 2, and other components pre-installed and configured, and can deploy and manage robots and robot applications in the cloud using the same development environment as on-premises.

Using NVIDIA Isaac Sim and ROS 2 Navigation on AWS RoboMaker

With NVIDIA Isaac Sim and ROS 2 Navigation on AWS RoboMaker, users can create virtual environments with photo-realistic objects and robot models that accurately simulate robot behavior. This allows users to test their robotics applications in the cloud without needing to install and configure the components.

Benefits of Using NVIDIA Isaac Sim and ROS 2 Navigation on AWS RoboMaker

Using NVIDIA Isaac Sim and ROS 2 Navigation on AWS RoboMaker has several benefits. Users can focus on developing and testing their robotics applications rather than on installing and configuring components, they get the same development environment in the cloud as on-premises, and the photo-realistic environments and accurate robot models help ensure reliable behavior in the real world.

Get Started with NVIDIA Isaac Sim and ROS 2 Navigation on AWS RoboMaker

At KeyCore, we specialize in providing professional and managed services for NVIDIA Isaac Sim and ROS 2 Navigation on AWS RoboMaker. Our team of experienced engineers can help you set up your simulation environment quickly and efficiently, while ensuring high-fidelity performance. Contact us today to learn more about how we can help you develop and test your robotics applications.

Read the full blog posts from AWS

AWS Open Source Blog

Creating RESTful APIs with OpenAPI Specification and AWS Serverless Technologies

AWS provides powerful serverless technologies, allowing developers to create scalable and secure applications with minimal effort. By using the open source OpenAPI Specification, developers can define APIs rapidly and direct their focus towards development of the endpoints.

The OpenAPI Specification (formerly known as Swagger) is a standard format for describing RESTful APIs. It allows developers to define an API and generate the necessary documentation and code for the API. This makes it easy for developers to collaborate on designing and implementing a comprehensive API.

The OpenAPI Specification also facilitates the creation of an API that is compatible with AWS serverless technologies, such as Amazon API Gateway and AWS Lambda. With these technologies, developers can implement a serverless API that is secure, reliable, and fully integrated with the AWS ecosystem.

For example, developers can use Amazon API Gateway to create a REST API with an HTTPS endpoint, and then configure AWS Lambda functions to be invoked whenever an API request is made. This allows developers to create an API without having to manage servers or write complicated plumbing code; a minimal handler is sketched below.
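
As a brief sketch (the route and names are illustrative), the Lambda side of such an endpoint can be as small as this handler, which API Gateway invokes for a GET /hello operation defined in the OpenAPI document:

```kotlin
import com.amazonaws.services.lambda.runtime.Context
import com.amazonaws.services.lambda.runtime.RequestHandler
import com.amazonaws.services.lambda.runtime.events.APIGatewayProxyRequestEvent
import com.amazonaws.services.lambda.runtime.events.APIGatewayProxyResponseEvent

// Backs a hypothetical GET /hello operation behind API Gateway.
class HelloHandler : RequestHandler<APIGatewayProxyRequestEvent, APIGatewayProxyResponseEvent> {
    override fun handleRequest(
        request: APIGatewayProxyRequestEvent,
        context: Context
    ): APIGatewayProxyResponseEvent {
        val name = request.queryStringParameters?.get("name") ?: "world"
        return APIGatewayProxyResponseEvent()
            .withStatusCode(200)
            .withHeaders(mapOf("Content-Type" to "application/json"))
            .withBody("""{"message": "hello, $name"}""")
    }
}
```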

Additionally, developers can use the OpenAPI Specification to create a simple API with minimal effort. With the Swagger editor, developers can easily define an API, generate the necessary documentation, and generate code for the API.

The OpenAPI Specification and AWS serverless technologies provide developers with the tools to quickly create secure, reliable, and scalable APIs. By leveraging these technologies, developers can spend less time managing servers and more time focusing on the development of their API.

At KeyCore, we specialize in helping companies with their AWS solutions. Our team of experienced AWS developers can help you create and manage a serverless API with the OpenAPI Specification and AWS serverless technologies. Contact us today to learn more about our services and how we can help your business.

Read the full blog posts from AWS
