Summary of AWS blogs for the week of Monday, June 26, 2023

In the week of Monday, June 26, 2023, AWS published 105 blog posts. Here is an overview of what happened.

Topics Covered

Desktop and Application Streaming

Streamline Your App Onboarding with AppStream 2.0 Applications Manager

AppStream 2.0 Applications Manager is a powerful feature that simplifies the process of setting up a new application. With this feature, you can create and connect to an app block builder instance and use the new application builder assistant to monitor your app installation. This feature increases the compatibility of apps with elastic fleets and shortens the setup process.

Process Automation

AppStream 2.0 Applications Manager allows you to automate the setup process. The assistant takes care of the application installation, connections, and configurations in one go. This removes the need to go back and forth between settings and makes sure everything is configured correctly. You can also control access to certain resources, so that only authorized personnel can access the system.

Compatibility with Elastic Fleets

The feature also ensures compatibility with elastic fleets. This means applications that are built on AppStream 2.0 can easily run on a variety of platforms. This ensures that applications can be deployed quickly and easily, without any compatibility issues.

Simplified Administration

The Applications Manager also simplifies the administration process. You can easily manage your applications and track their performance. This helps to ensure that applications are running efficiently and that any performance issues can be addressed quickly.

How KeyCore Can Help

At KeyCore, we provide professional and managed services for AppStream 2.0. Our team of experienced AWS professionals can help you implement the Applications Manager feature with ease. We can also help you troubleshoot any issues and ensure that your applications are running optimally. Contact us today to learn more about how we can help you streamline your application onboarding process.

Read the full blog posts from AWS

AWS for SAP

Automate invoice processing with the AWS SDK for SAP ABAP

The traditional method of manually processing large volumes of structured and unstructured data, such as invoices, contracts, and financial reports, can be time-consuming and prone to errors. To reduce this burden, customers are turning to Intelligent Document Processing (IDP), an automated process that leverages machine learning and computer vision to extract information from documents. The AWS SDK for SAP ABAP provides an easy-to-use interface for customers to integrate IDP into their SAP solutions.
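
For readers who want to see what the underlying document-extraction call looks like, below is a minimal Python sketch using Amazon Textract through boto3. The AWS SDK for SAP ABAP wraps the same Textract operations so the equivalent call can be made from ABAP; the bucket, key, and region here are placeholders.

```python
import boto3

# Minimal sketch: extract key-value pairs from an invoice stored in S3 with
# Amazon Textract. The AWS SDK for SAP ABAP exposes the same AnalyzeExpense
# operation from within SAP. Bucket and object key are placeholders.
textract = boto3.client("textract", region_name="eu-west-1")

response = textract.analyze_expense(
    Document={"S3Object": {"Bucket": "example-invoice-bucket", "Name": "invoices/inv-001.pdf"}}
)

# Print the summary fields (e.g. vendor name, invoice total) that Textract detected.
for doc in response["ExpenseDocuments"]:
    for field in doc["SummaryFields"]:
        label = field.get("Type", {}).get("Text", "UNKNOWN")
        value = field.get("ValueDetection", {}).get("Text", "")
        print(f"{label}: {value}")
```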

Getting Started with the AWS SDK for SAP ABAP

SAP customers running workloads on AWS can develop and improve their business processes using ABAP. Custom ABAP code and teams of ABAP developers are often used to build, maintain, and innovate for the company. The AWS SDK for SAP ABAP makes it easy for customers to integrate AWS services into their ABAP applications, allowing them to take advantage of AWS’s powerful cloud computing capabilities.

For customers looking to leverage the AWS SDK for SAP ABAP, KeyCore provides the expertise to help get started. Our team of experienced consultants are well-versed in the latest AWS technologies and best practices, and can help you develop the cloud infrastructure and applications that meet your needs. In addition, our managed services provide ongoing support and maintenance for your applications, helping you maintain compliance and stay up-to-date with the latest technology changes. With our help, you can quickly and easily automate your processes with the AWS SDK for SAP ABAP.

Read the full blog posts from AWS

Official Machine Learning Blog of Amazon Web Services

Democratize Computer Vision Defect Detection for Manufacturing Quality Using No-Code Machine Learning with Amazon SageMaker Canvas

Manufacturing quality is on the minds of many. Quality defects cause scrap and rework costs, decrease throughput, and can negatively impact customers and companies’ reputation. Quality inspection on the production line is essential to maintain quality standards. In many cases, human visual inspection is used to assess the quality and detect defects, which can be labor-intensive and time-consuming.

Amazon SageMaker Canvas provides a no-code experience for manufacturers to quickly and confidently deploy computer vision models that help automate defect detection. It is a visual workbench in Amazon SageMaker that helps users quickly develop, debug, and deploy models: users drag and drop components to construct a machine learning pipeline, and can add custom logic and workflows to integrate seamlessly with existing systems.

Using Amazon SageMaker Canvas, users have the tools and components they need to build, modify, and deploy computer vision models for defect detection without having to write any code.

At KeyCore, we provide professional and managed services to help you maximize the value of Amazon SageMaker Canvas. Our team of experts can help ensure the entire process runs smoothly, from creating models to deploying them for defect detection. Contact us today to learn more.

Recommend and Dynamically Filter Items Based on User Context in Amazon Personalize

Organizations are investing time and resources to develop intelligent recommendation systems to provide tailored and relevant content to their users. This can help transform the user experience, generate meaningful interaction, and drive content consumption. Some of these solutions take advantage of common machine learning models built on historical interaction patterns, user demographic attributes, and item related features.

Amazon Personalize provides a fully managed machine learning service that simplifies building, deploying, and maintaining recommendation models. It uses an algorithm portfolio consisting of the most popular models used for recommendation. Customers store user profiles, item features, and interaction data in Personalize datasets, which are then used for training and serving models.

Amazon Personalize enables users to recommend relevant content with high accuracy and at scale. It can also filter recommendations based on user context: applications apply a filter at request time and pass dynamic filter values derived from the user’s current context, and Amazon Personalize returns only the items that match those criteria.
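
As a rough illustration of request-time filtering, the sketch below calls the Amazon Personalize GetRecommendations API through boto3 with a filter and a dynamic filter value. The campaign and filter ARNs, the DEVICE attribute, and the filter expression referenced in the comments are illustrative assumptions.

```python
import boto3

# Minimal sketch: request recommendations and apply a metadata filter whose
# value is supplied at request time from the user's current context.
# ARNs and the DEVICE attribute are hypothetical.
personalize_runtime = boto3.client("personalize-runtime", region_name="eu-west-1")

response = personalize_runtime.get_recommendations(
    campaignArn="arn:aws:personalize:eu-west-1:111122223333:campaign/example-campaign",
    userId="user-42",
    numResults=10,
    # Contextual metadata (e.g. current device) can also influence the model itself.
    context={"DEVICE": "mobile"},
    # Assumes a filter defined in Personalize such as:
    # INCLUDE ItemID WHERE Items.DEVICE IN ($DEVICE)
    filterArn="arn:aws:personalize:eu-west-1:111122223333:filter/device-filter",
    filterValues={"DEVICE": '"mobile"'},
)

for item in response["itemList"]:
    print(item["itemId"], item.get("score"))
```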

At KeyCore, we offer professional and managed services to help you leverage Amazon Personalize to its fullest potential. Our team of experts can help you with building, deploying, and maintaining recommendation models. Contact us today to learn more.

Interactively Fine-Tune Falcon-40B and Other LLMs on Amazon SageMaker Studio Notebooks Using QLoRA

Fine-tuning large language models (LLMs) can help adjust open-source foundational models to improve performance on domain-specific tasks. Amazon SageMaker notebooks provide a great way to interactively fine-tune state-of-the-art open-source models. They can be used to leverage Hugging Face’s parameter-efficient fine-tuning (PEFT) library and quantization techniques through bitsandbytes to support interactive fine-tuning of LLMs like Falcon-40B.

Amazon SageMaker Studio is an integrated development environment (IDE) that provides a single, web-based interface to quickly build, debug, and deploy ML models. It also provides tools to help users interactively explore their data and quickly iterate on algorithms. With Amazon SageMaker Studio, developers and data scientists can use PEFT and bitsandbytes to fine-tune LLMs in just a few clicks.
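
To make the QLoRA setup above more concrete, here is a minimal sketch assuming the transformers, peft, and bitsandbytes packages are installed in the Studio notebook. Falcon-40B still needs a GPU instance with substantial memory even in 4-bit, so the model ID and hyperparameters are only illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

model_id = "tiiuae/falcon-40b"  # illustrative; a smaller model works for quick experiments

# Load the base model in 4-bit NF4 quantization (the "Q" in QLoRA).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto", trust_remote_code=True
)

# Attach small trainable LoRA adapters instead of updating all of the base weights.
lora_config = LoraConfig(
    r=8,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["query_key_value"],  # attention projection module used by Falcon
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of the parameters
```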

At KeyCore, we provide professional and managed services to help you maximize the value of Amazon SageMaker Studio. Our team of experts can help you with building, deploying, and maintaining models. Contact us today to learn more.

Capture Public Health Insights More Quickly With No-Code Machine Learning Using Amazon SageMaker Canvas

Public health organizations often have significant amounts of data about different types of diseases, health trends, and risk factors. Until recently, staff used statistical models and regression analyses to make decisions such as targeting populations with the highest risk factors for a disease with therapeutics, or forecasting the progression of concerning outbreaks.

Amazon SageMaker Canvas provides a no-code experience for public health organizations to quickly and confidently deploy machine learning models that capture and analyze data. It is a visual workbench in Amazon SageMaker that helps users quickly develop, debug, and deploy models by dragging and dropping components to construct a machine learning pipeline.

Using Amazon SageMaker Canvas, users have the tools and components they need to build, modify, and deploy machine learning models for public health insights without having to write code.

At KeyCore, we provide professional and managed services to help you maximize the value of Amazon SageMaker Canvas. Our team of experts can help ensure the entire process runs smoothly, from creating models to deploying them for public health insights. Contact us today to learn more.

Safe Image Generation and Diffusion Models with Amazon AI Content Moderation Services

Generative AI technology is advancing quickly, and it is now possible to generate text and images based on text input. Stable Diffusion is a text-to-image model that lets you build applications that produce photorealistic images, and Stable Diffusion models can be deployed through Amazon SageMaker JumpStart.

Amazon AI Content Moderation Services provide developers with a way to validate content generated by Stable Diffusion. These services include machine learning models that detect explicit content and determine whether an image is safe for publication, so developers can build applications without having to worry about unsafe generated content being published.
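
As a hedged sketch of the moderation step, the code below sends a generated image to the Amazon Rekognition image moderation API and flags it if any moderation label is returned above a confidence threshold. The file name and threshold are placeholders.

```python
import boto3

# Minimal sketch: check a Stable Diffusion output image with Amazon Rekognition
# content moderation before publishing it. File name and threshold are placeholders.
rekognition = boto3.client("rekognition", region_name="eu-west-1")

with open("generated_image.png", "rb") as f:
    image_bytes = f.read()

response = rekognition.detect_moderation_labels(
    Image={"Bytes": image_bytes},
    MinConfidence=60,
)

labels = response["ModerationLabels"]
if labels:
    print("Image flagged, do not publish:")
    for label in labels:
        print(f"  {label['Name']} ({label['Confidence']:.1f}%)")
else:
    print("No moderation labels detected; image is safe to publish.")
```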

At KeyCore, we offer professional and managed services to help you leverage Amazon AI Content Moderation Services to its fullest potential. Our team of experts can help you with building, deploying, and maintaining content moderation models. Contact us today to learn more.

Use Proprietary Foundation Models from Amazon SageMaker JumpStart in Amazon SageMaker Studio

Amazon SageMaker JumpStart is an ML hub that can help you accelerate your ML journey. With SageMaker JumpStart, you can discover and deploy publicly available and proprietary foundation models to dedicated Amazon SageMaker instances for your generative AI applications.

SageMaker JumpStart allows you to deploy foundation models in a network-isolated environment, and users can use Amazon SageMaker Studio to run experiments and fine-tune the models. Amazon SageMaker Studio is an integrated development environment (IDE) that provides a single, web-based interface to quickly build, debug, and deploy ML models.
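
A minimal sketch of deploying a JumpStart foundation model to a dedicated endpoint with the SageMaker Python SDK is shown below. The model ID and instance type are illustrative assumptions, the notebook's execution role is used implicitly, and proprietary models additionally require an accepted subscription.

```python
from sagemaker.jumpstart.model import JumpStartModel

# Minimal sketch: deploy a JumpStart foundation model to a dedicated SageMaker
# endpoint from a Studio notebook. Model ID and instance type are illustrative.
model = JumpStartModel(model_id="huggingface-llm-falcon-7b-instruct-bf16")

predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.g5.2xlarge",
)

# Invoke the endpoint with a simple prompt.
response = predictor.predict({"inputs": "Summarize what Amazon SageMaker JumpStart does."})
print(response)

# Clean up when done to stop incurring costs.
predictor.delete_endpoint()
```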

At KeyCore, we provide professional and managed services to help you maximize the value of Amazon SageMaker Studio and Amazon SageMaker JumpStart. Our team of experts can help you with building, deploying, and maintaining models. Contact us today to learn more.

How Earth.com and Provectus Implemented Their MLOps Infrastructure with Amazon SageMaker

When ML models are deployed to production to drive business decisions, the challenge often lies in the operation and management of multiple models. MLOps provides the technical solution to this issue, helping organizations manage, monitor, and control models.

Earth.com and Provectus implemented an MLOps infrastructure with Amazon SageMaker. With this approach, they automated the infrastructure for building, training, and deploying ML models. They also built an interactive platform for visualization and monitoring of model performance.

At KeyCore, we offer professional and managed services to help you leverage Amazon SageMaker for MLOps. Our team of experts can help you with building, deploying, and maintaining ML models. Contact us today to learn more.

Define Customized Permissions in Minutes with Amazon SageMaker Role Manager via the AWS CDK

ML administrators have an important role in maintaining the security and integrity of ML workloads. Their primary focus is ensuring users adhere to the principle of least privilege. However, creating appropriate permission policies to accommodate different user needs can sometimes slow down agility.

Amazon SageMaker Role Manager makes it easy to define customized permissions in minutes. It allows administrators to define multiple roles with different levels and types of access, and assign them to users in one step. Additionally, it helps ensure that users have the minimum set of permissions they need to access Amazon SageMaker resources.
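
The blog post uses the AWS CDK to codify these roles. As a rough Python CDK sketch under stated assumptions, an administrator might define a scoped data-scientist role like the one below; the construct names and the attached managed policy are placeholders, and SageMaker Role Manager generates persona-based policies that can be attached the same way.

```python
from aws_cdk import Stack
from aws_cdk import aws_iam as iam
from constructs import Construct


class SageMakerRolesStack(Stack):
    """Illustrative sketch: codify a SageMaker persona role with the AWS CDK.

    The managed policy below is a placeholder; in practice you would attach the
    narrower policies produced by SageMaker Role Manager for the chosen persona.
    """

    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        data_scientist_role = iam.Role(
            self,
            "DataScientistRole",
            assumed_by=iam.ServicePrincipal("sagemaker.amazonaws.com"),
            description="Role for data scientists working in SageMaker Studio",
        )

        # Attach the permissions the persona needs (placeholder policy).
        data_scientist_role.add_managed_policy(
            iam.ManagedPolicy.from_aws_managed_policy_name("AmazonSageMakerFullAccess")
        )
```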

At KeyCore, we provide professional and managed services to help you maximize the value of Amazon SageMaker Role Manager. Our team of experts can help you with building, deploying, and maintaining role-based permissions. Contact us today to learn more.

Read the full blog posts from AWS

Announcements, Updates, and Launches

Generative AI & AWS AppFabric Launches

Generative AI is taking the world by storm, and this week we saw the release of two related products: a new hands-on course from DeepLearning.AI and AWS, and AWS AppFabric, a new service for connecting and securing SaaS applications. We also got updates from Amazon Web Services in the form of Step Functions versions and aliases, EC2 instances with Graviton3E processors, and more.

Generative AI Course by DeepLearning.AI and AWS

Generative AI allows us to create novel content and ideas, including conversations, stories, images, videos, and music. This new online course from DeepLearning.AI and AWS will let us take a deeper dive into the technology and discover how to make the most of it for business applications. We’ll learn how to design deep learning models and create art, music, and text with a variety of generative techniques.

AWS AppFabric

Many companies are turning to Software-as-a-Service (SaaS) applications to optimize their workflows and enhance employee productivity. AWS AppFabric connects these SaaS applications without custom integration work, aggregating and normalizing their audit logs into a standard schema so that IT and security teams can more quickly detect and investigate issues across applications.

AWS Week in Review – June 26, 2023

This week we got updates from Amazon Web Services, including the release of Step Functions versions and aliases, EC2 instances with Graviton3E processors, and more. Step Functions versions and aliases let developers create multiple versions of a workflow and switch between them at runtime. The new Graviton3E processor can achieve up to 40% savings for memory-bound workloads and up to 45% savings for compute-bound workloads.
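
For readers curious what versions and aliases look like in practice, here is a hedged boto3 sketch that publishes a new version of a state machine and shifts a small share of traffic to it through an alias. The ARNs, version numbers, and weights are placeholders.

```python
import boto3

sfn = boto3.client("stepfunctions", region_name="eu-west-1")

state_machine_arn = "arn:aws:states:eu-west-1:111122223333:stateMachine:order-workflow"

# Publish the current definition as an immutable version.
new_version = sfn.publish_state_machine_version(
    stateMachineArn=state_machine_arn,
    description="Adds a retry policy to the payment step",
)["stateMachineVersionArn"]

# Previously published version that currently serves production traffic (placeholder).
stable_version = f"{state_machine_arn}:1"

# Create an alias that sends 10% of new executions to the new version.
sfn.create_state_machine_alias(
    name="prod",
    description="Weighted rollout of the new workflow version",
    routingConfiguration=[
        {"stateMachineVersionArn": stable_version, "weight": 90},
        {"stateMachineVersionArn": new_version, "weight": 10},
    ],
)
```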

KeyCore and Generative AI & AWS AppFabric

KeyCore is the leading Danish AWS consultancy. We provide both professional services and managed services to help our customers get the most out of AWS offerings. Our team of AWS-certified experts can help you deploy the latest AWS technologies, such as the AWS AppFabric and Generative AI course, and ensure that your applications are taking full advantage of the benefits these technologies offer.

Read the full blog posts from AWS

Containers

The Use of Containers on AWS for Data Science Workflows

Apache Airflow has become a popular open-source tool for building data pipelines in data science and engineering, thanks to its active community, Python-based development model, and library of pre-built integrations. Amazon Managed Workflows for Apache Airflow (MWAA) is a managed service that removes the operational overhead of running Airflow, allowing developers and data scientists to quickly set up and run Apache Airflow workflows on AWS.
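
To make the Airflow side concrete, below is a minimal sketch of a DAG that MWAA could schedule. The task body is a placeholder; a real pipeline would typically call out to services such as Amazon ECS, AWS Glue, or Amazon SageMaker.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_and_load() -> None:
    # Placeholder task body; a real pipeline might trigger an ECS task,
    # a Glue job, or a SageMaker processing job here.
    print("Running the daily extract-and-load step")


with DAG(
    dag_id="daily_data_pipeline",
    start_date=datetime(2023, 6, 26),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    PythonOperator(
        task_id="extract_and_load",
        python_callable=extract_and_load,
    )
```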

Amazon ECS Task Launch Behavior

Amazon Elastic Container Service (Amazon ECS) is a container orchestrator that helps launch and track application containers. With the recent improvement to Amazon ECS task launch behavior, tasks can now launch faster on container instances that are running tasks with prolonged shutdown periods. This provides customers with faster workload scaling and better infrastructure utilization.

CoStar Uses Karpenter to Optimize Their Amazon EKS Resources

CoStar is well known for its leading commercial real estate data, but it also operates major home, rental, and apartment websites such as apartments.com. Its traditional customers are well informed and use complex data to make business decisions, so CoStar needed a platform that could handle ever-increasing workloads. This led to the adoption of Karpenter, an open-source node provisioning and autoscaling tool for Kubernetes, on Amazon Elastic Kubernetes Service (Amazon EKS). Karpenter allowed CoStar to right-size its resources and reduce costs by up to 15% while maintaining service performance.

Life360’s Journey to a Multi-Cluster Amazon EKS Architecture

Life360 offers advanced driving, digital, and location safety features, as well as location sharing for the entire family. To improve reliability and scalability, they moved from a single-cluster to a multi-cluster Amazon EKS architecture. This allowed them to move to a more resilient architecture by adding more clusters, all while reducing the time it takes to deploy applications and improving platform performance.

Using containers on AWS is a great way to streamline data science workflows and increase infrastructure utilization and performance. With services such as Amazon ECS, MWAA, and Amazon EKS, customers can create and manage container applications with ease. For businesses with complex workloads, node provisioning tools such as Karpenter can help right-size resources and reduce costs while still maintaining service performance.

At KeyCore, we strive to help our customers utilize the best technologies and services that AWS has to offer. Our team of AWS certified professionals have extensive experience deploying and managing applications on AWS. Our managed services team can help you design and deploy a solution that meets your business needs, as well as provide ongoing support and maintenance. Contact us today to learn more about how KeyCore can help you with containers on AWS.

Read the full blog posts from AWS

AWS Quantum Technologies Blog

The Winning Team of the First Ever Neutral-Atom Computer Hackathon

Earlier this year, QuEra and AWS sponsored the first-ever hackathon on a neutral-atom computer. In a 24-hour marathon, the teams of participants from around the globe solved hard problems using real quantum computers. It was an intense, stimulating, and exciting experience. We asked the winning team to tell us their story.

The Team:

The winning team, dubbed the “Quantum Knights,” consisted of four members from different countries: Luca De Feo (France), Matthieu Dumont (Belgium), Piotr Migdał (Poland), and Donatas Tamosauskas (Lithuania). Each member has expertise in various scientific, technical, and mathematical fields.

The Challenge:

The challenge required teams to program a real neutral-atom quantum computer. The task was to create a quantum circuit that could solve a problem, such as a game or a puzzle, without being able to “look” at the solution. Each team had to design its own algorithms and write code to control the hardware.

The Solution:

The Quantum Knights faced a steep learning curve and initially struggled to work with the quantum computer. But with the help of the QuEra community and AWS cloud services, they quickly got up to speed and created their own quantum circuit. Using QuEra’s software and hardware, they solved the challenge in less than 24 hours.

The Impact of the Hackathon:

The results of the first-ever neutral-atom computer hackathon show that it is possible to program a quantum computer in a short period of time. It is also clear that working with quantum computers is challenging and requires a high level of technical skill. With the help of the QuEra community and AWS cloud services, however, the Quantum Knights were able to overcome those obstacles and complete the challenge.

How KeyCore can Help:

At KeyCore, we understand the power of quantum computing and are passionate about helping our clients make the most of its capabilities. We are experts in AWS and can help you with every step of your quantum computing journey, from the initial setup to the development of your own quantum applications. Our team of quantum computing experts will work with you to design and develop an effective quantum computing strategy that will help you unlock the full potential of this emerging technology.

Read the full blog posts from AWS

AWS Smart Business Blog

Navigating IT Challenges with AWS

Small and medium businesses (SMBs) often experience rapid growth, and with that, come many operational and cultural challenges. Integrating new acquisitions can be especially tricky when it comes to IT, as many are tempted to rely on legacy, on-premises technology, rather than exploring cloud migration opportunities.

The Story of a Medium-Sized Insurance Brokerage

One SMB that took the leap and embraced cloud-based solutions is a medium-sized insurance brokerage. Without in-house tech talent to understand the challenges and opportunities offered by the cloud, they put their trust in an AWS Partner. This partner provided the expertise to create an IT system that could support their rapid growth.

The Benefits of Migrating to AWS

The insurance brokerage realized the potential of AWS to help them save costs and time, while also increasing their service quality and scalability. With AWS, they were able to quickly set up a high-performance IT system that allowed them to scale up quickly and cost-effectively to meet their customers’ demands. They were also able to reduce their IT costs, as they no longer needed to purchase and maintain expensive on-premises hardware.

How KeyCore Can Help

At KeyCore, we understand the challenges and opportunities that come with rapid growth for SMBs. We can help your business take advantage of the cost and scalability benefits of migrating to AWS. With our experience in cloud-based solutions, we can provide the expertise you need to create an IT system that can support your rapid growth and provide a thorough evaluation of the economic implications. With our professional services and managed services, we can help you achieve the scalability, cost savings, and service quality you need.

Read the full blog posts from AWS

Official Database Blog of Amazon Web Services

Optimizing Performance and Cost for Amazon Neptune Serverless and using AWR Reports for Amazon RDS for Oracle Read Replicas

Amazon Neptune Serverless is a fully managed database service that makes it easier to build and run graph applications. It offers support for RDF and Property Graph models, allowing developers to easily create relationships between the data. Additionally, Amazon Relational Database Service (Amazon RDS) for Oracle provides read replicas to offload read-only workloads. In this article, we explore use cases and best practices for Amazon Neptune Serverless, as well as how to generate AWR reports for Amazon RDS for Oracle read replicas.

Using Amazon Neptune Serverless

Amazon Neptune Serverless makes it easier for users to build and run graph applications by automatically scaling capacity with demand. Because there are no upfront setup costs, users pay only for the resources they use.

When using Neptune Serverless, users should be aware of best practices in order to optimize cost and performance. For cost optimization, users should aim to minimize the amount of storage and compute used. Additionally, users should employ fault-tolerance techniques to minimize downtime. For performance optimization, users should be aware of query execution times and latency, and optimize queries to reduce the amount of time required to execute.
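
One concrete lever on the cost side is the serverless capacity range itself. The hedged boto3 sketch below narrows the minimum and maximum Neptune capacity units (NCUs) for an existing cluster, assuming the Neptune API accepts the same ServerlessV2ScalingConfiguration parameter it exposes today; the cluster identifier and capacity values are placeholders.

```python
import boto3

# Minimal sketch: bound how far a Neptune Serverless cluster can scale by
# setting the minimum and maximum Neptune capacity units (NCUs).
# Cluster identifier and capacity values are placeholders.
neptune = boto3.client("neptune", region_name="eu-west-1")

neptune.modify_db_cluster(
    DBClusterIdentifier="example-neptune-serverless-cluster",
    ServerlessV2ScalingConfiguration={
        "MinCapacity": 1.0,   # keep the floor low to minimize idle cost
        "MaxCapacity": 16.0,  # cap the ceiling so spikes cannot run away on cost
    },
    ApplyImmediately=True,
)
```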

Generating AWR Reports for Amazon RDS for Oracle Read Replicas

Oracle database administrators use tools such as Oracle’s Automatic Workload Repository (AWR) reports to identify and resolve issues occurring in a database. However, because Amazon RDS for Oracle read replicas are read-only, AWR reports cannot be generated directly on a replica to monitor its performance. The full blog post walks through an approach for producing AWR data for read replicas so that replica performance can still be analyzed.

Running SQL Server Applications on Babelfish for Aurora PostgreSQL

Separately, Babelfish for Aurora PostgreSQL supports the SQL Server wire protocol and T-SQL, the query language used in Microsoft SQL Server. This means developers can use Babelfish to run their existing SQL Server applications on Amazon Aurora PostgreSQL-Compatible Edition without switching database drivers or completely rewriting their queries.

Migrating On-Premises SQL Server Workloads to Amazon RDS Custom for SQL Server

To reduce migration downtime, Amazon RDS Custom for SQL Server provides a solution using distributed availability groups. This solution provides continuous data synchronization combined with a failover process, and it can also be used for high availability and disaster recovery as needed.

How KeyCore Can Help

At KeyCore, we provide professional services and managed services to help customers optimize their AWS deployments. Our team of experienced engineers can help you identify and address any issues you may be having with your Amazon Neptune Serverless or Amazon RDS for Oracle read replicas. Additionally, our experts can help you migrate your on-premises SQL Server workloads to Amazon RDS Custom for SQL Server, and can provide best-practices recommendations for both cost and performance optimization.

Reach out to us today to get started with optimizing your AWS deployments for performance and cost!

Read the full blog posts from AWS

AWS for Games Blog

Unlock New Possibilities with Graviton Cores on Unreal Engine-Based Games

Developers of Unreal Engine-based games can now take advantage of Graviton processors to unlock new possibilities and greater performance. This post from Yahav Biran, Principal Solutions Architect, and Matt Trescot, Games SA Leader – Americas, outlines the advantages of this development. Below, we also cover how KeyCore can help you take advantage of them.

Historical Challenges

Historically, creating and running complex game servers has locked developers into a single CPU architecture, typically Intel/AMD. Developers have found it difficult to introduce different CPU architectures into their existing game servers.

Graviton Cores for Unreal Engine-Based Games

Graviton cores now offer the potential to overcome this challenge. Amazon EC2 instances powered by AWS Graviton processors are designed for scale-out workloads and are supported by the Unreal Engine. With these instances, you can compile Unreal Engine applications that take advantage of the latest optimizations for Graviton processors.

Graviton Processor Advantages

Graviton processors provide cost savings, improved performance, and greater efficiency for many workloads. Since Graviton processors are designed for scale-out workloads, they can handle high levels of concurrent requests and scale effortlessly with your workload demands. This makes Graviton ideal for games built on the Unreal Engine, enabling developers to access more compute power at lower costs.

Get Started with KeyCore

At KeyCore, we have the expertise to help you get started with Graviton processor-powered instances. We can help you build the right system architecture for your game, optimize costs and performance, and ensure that your server is running smoothly. Our team of experienced AWS consultants can help you unleash the power of Graviton processors and maximize the performance of your Unreal Engine-based game.

Read the full blog posts from AWS

AWS Training and Certification Blog

Anna Malanchuk was a dentist who had almost completed her training when the war broke out in her native Ukraine in February 2022. She fled first to Norway and then to Portugal, where her husband found a job. Anna came across ITSkills4U, a free training program that helps Ukrainians transition to a career in IT, and decided to retrain.

New Courses and Updates from AWS Training and Certification

In May and June 2023, AWS released five digital training products on AWS Skill Builder, to help grow cloud expertise. AWS created a sustainability course, a new AWS Builder Lab to configure VPC Traffic Monitoring, an updated AWS Certified Security – Specialty Official Practice Exam, and AWS Cloud Quest Tournaments. Additionally, AWS created a guide to help plan a certification path, and a new AWS Migration Essentials classroom course.

New Training Series – Starting your Career with AWS Cloud

AWS released their new training series, Starting your Career with AWS Cloud, as a way to learn more about in-demand cloud careers and the skills employers are looking for. This training series is available on Coursera and edX, providing an overview of cloud fundamentals, introduction to cloud computing roles, and the skills needed. Learners will hear from real AWS experts in varied cloud roles as they share their journey to IT, what they do daily, and the questions, challenges, and opportunities they experience.

When Cloud Transformation Creates Personal Transformation

In 2018, Ian Butler, a Cloud Capability Lead for ANZ, never would have thought he would earn all 12 AWS Certifications. But, with his company’s accelerating cloud journey, the COVID-19 lockdown, his ambition, and newfound passion for training others, Butler successfully passed all 12 AWS Certifications over a two-year period.

KeyCore – Leading the Way in AWS Consulting Services

At KeyCore, we are the leading AWS Consultancy in Denmark. We provide professional services and managed services, helping our clients with their cloud transformation journey. We have access to a wide range of AWS services, tools, and specialists, allowing us to customize our solutions to fit the needs of our clients. Whether you are new to AWS or are already on your cloud journey, we can help you get where you want to go.

Read the full blog posts from AWS

Microsoft Workloads on AWS

AWS offers a broad range of tools and services for running Microsoft workloads. This post explains how to upgrade and modernize Windows Server 2012 and 2012 R2 using Windows containers on AWS, as well as how to migrate a Microsoft SQL Server database from an Azure SQL Managed Instance to SQL Server on Amazon Web Services (AWS).

Upgrading and Modernizing Windows Server 2012

Deploying Windows Server 2012 or 2012 R2 on AWS gives an organization the ability to quickly and easily scale its infrastructure. The following four-part series provides options for handling the end-of-support event in October 2023. Part one of the series covers the end-of-support dilemma and how to perform an in-place, manual upgrade. It also provides insight into the new features introduced in Windows Server 2016 and how Windows containers can help modernize existing applications.

Part two of the series focuses on how to prepare for a Windows Server 2016 migration. This includes setting up the AWS environment, configuring the EC2 instances, and connecting to the Windows Server 2016 instance. Part three explains how to upgrade and modernize Windows Server 2012 with Windows containers on AWS. It covers how to install, configure, and deploy Windows containers and how to migrate existing applications to them. Lastly, part four reviews how to migrate legacy applications to Windows Server 2016.

Migrating SQL Server Databases

It is possible to migrate a Microsoft SQL Server database from an Azure SQL Managed Instance to SQL Server on AWS using a COPY_ONLY backup taken from the Azure SQL Managed Instance. This method copies all objects in a database and supports all editions. Keep in mind, however, that the Azure SQL Managed Instance must be up and running to perform the migration.

Migrating an Azure SQL Managed Instance to AWS requires provisioning the target environment first, for example SQL Server running on an Amazon EC2 instance or an Amazon RDS for SQL Server instance. Administrators then connect to the Azure SQL Managed Instance to create the COPY_ONLY backup, restore the backup to the target instance on AWS, and verify the data.

KeyCore Can Help

At KeyCore, we provide expert advice and managed services to help customers prepare for their migration to AWS and upgrade their Windows Server 2012 and 2012 R2. Our team of experienced engineers can help review customer requirements, plan the migration, and ensure that all applications are running correctly in the new environment. Our team can also provide ongoing support and maintenance for your AWS-based Microsoft workloads.

For additional information on how KeyCore’s professional services can help with Microsoft workloads on AWS, contact us today.

Read the full blog posts from AWS

Official Big Data Blog of Amazon Web Services

How Position2, iostudio, and AWS Are Utilizing Amazon Web Services for Data Driven Solutions

Position2’s Arena Calibrate Helps Customers Drive Marketing Efficiency with Amazon QuickSight Embedded

Position2, a leading US-based growth marketing services provider, has established a clientele that includes American Express, Lenovo, Fujitsu, and Thales. Position2 developed Arena Calibrate to enhance customer marketing efficiency with Amazon QuickSight Embedded. Arena Calibrate integrates Position2’s data science and technology with Amazon QuickSight to deliver insights to customers and make data-driven decisions. For example, it helps marketers identify high-performing digital channels and optimize campaigns to drive more revenue.

Migrating from Amazon Kinesis Data Analytics for SQL Applications to Amazon Kinesis Data Analytics for Apache Flink

AWS recommends that customers move from Amazon Kinesis Data Analytics for SQL Applications to Amazon Kinesis Data Analytics for Apache Flink in order to take advantage of Apache Flink’s advanced streaming capabilities. Data Analytics for Apache Flink is a fully managed, streaming data analytics service that can analyze streaming data in real time. The service also helps customers to quickly create streaming data pipelines without having to manage any infrastructure or write complex code.

Centralizing Near-Real-Time Governance Through Alerts on Amazon Redshift Data Warehouses for Sensitive Queries

Amazon Redshift is a fully managed, petabyte-scale data warehouse service in the cloud that can analyze all your data and deliver insights with the best price-performance. Amazon Redshift also gives users the ability to centralize near-real-time governance through alerts on data warehouses for sensitive queries. This includes setting up alerts and notifications for the types of queries that could potentially lead to data leakage or unauthorized access.
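
As one hedged way to implement such an alert, the sketch below uses the Redshift Data API to look for recent queries that touched a sensitive schema and publishes a notification through Amazon SNS. The cluster, database, user, schema name, and topic ARN are all placeholders, and the system view columns should be checked against your Redshift version.

```python
import time
import boto3

# Hedged sketch: flag recent queries that touched a sensitive schema and alert on them.
# Cluster, database, user, schema, and SNS topic are placeholders.
redshift_data = boto3.client("redshift-data", region_name="eu-west-1")
sns = boto3.client("sns", region_name="eu-west-1")

SQL = """
SELECT user_id, query_id, query_text
FROM sys_query_history
WHERE query_text ILIKE '%sensitive_schema.%'
  AND start_time > DATEADD(minute, -15, GETDATE());
"""

statement = redshift_data.execute_statement(
    ClusterIdentifier="example-cluster",
    Database="analytics",
    DbUser="governance_bot",
    Sql=SQL,
)

# Wait for the statement to finish (simple polling for illustration).
while True:
    status = redshift_data.describe_statement(Id=statement["Id"])
    if status["Status"] in ("FINISHED", "FAILED", "ABORTED"):
        break
    time.sleep(2)

if status["Status"] == "FINISHED" and status.get("HasResultSet"):
    result = redshift_data.get_statement_result(Id=statement["Id"])
    if result["Records"]:
        sns.publish(
            TopicArn="arn:aws:sns:eu-west-1:111122223333:sensitive-query-alerts",
            Subject="Sensitive queries detected on Redshift",
            Message=f"{len(result['Records'])} queries touched sensitive_schema in the last 15 minutes.",
        )
```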

Getting Started with Near-Real Time Operational Analytics Using Amazon Aurora Zero-ETL Integration with Amazon Redshift

Amazon Aurora zero-ETL integration with Amazon Redshift was announced at AWS re:Invent 2022 and is now available in public preview. This integration enables users to quickly and easily transfer data from Amazon Aurora databases to Amazon Redshift data warehouses with zero-ETL. This helps users to quickly build near-real-time operational analytics pipelines and eliminate latency and costs associated with traditional ETL jobs.

Taking Advantage of the Zero-ETL Approach from AWS

Data is at the center of every application, process, and business decision. AWS’ zero-ETL approach enables users to quickly build near-real-time analytics pipelines and easily derive insights from data. This helps users to innovate and drive business growth. Advanced insights-driven businesses are 8.5 times more likely than beginners to report at least 20% revenue growth.

iostudio Delivers Key Metrics to Public Sector Recruiters with Amazon QuickSight

iostudio, an award-winning marketing agency, wrote this guest post in collaboration with Sumitha AP from AWS. iostudio is using Amazon QuickSight to deliver key metrics to public sector recruiters. Amazon QuickSight provides users with the ability to quickly and easily visualize data from multiple sources. This helps users to quickly and easily derive insights from their data and make more informed decisions.

Harmonizing Data Using AWS Glue and AWS Lake Formation FindMatches ML to Build a Customer 360 View

In order to provide outstanding customer experiences, companies need to ingest data from multiple sources and cleanse it to provide insights. AWS Glue and AWS Lake Formation FindMatches ML can be used to harmonize data to build a customer 360 view. AWS Glue is a fully managed extract, transform, and load (ETL) service that makes it easy for customers to prepare and load their data for analytics. AWS Lake Formation FindMatches ML is a machine learning (ML) powered categorization service that helps automate the process of matching and merging data from multiple sources.

How KeyCore Can Help With These Solutions

At KeyCore, we help customers take full advantage of the AWS services and solutions discussed in this post. Our team of experienced AWS professionals can help customers set up and manage their data pipelines, as well as provide insights and advice on how to get the most out of the AWS services. We also provide managed services that help customers to quickly and easily set up their data pipelines and streamline their processes. Contact us today to learn more about how we can help you achieve your data-driven goals.

Read the full blog posts from AWS

Networking & Content Delivery

Improving Availability & Resiliency for Events with AWS Private 5G & Elastic Load Balancers

Introduction

Recently, AWS has added several new features to their Elastic Load Balancers (ELB) which give users control over when traffic is shifted between targets. These new capabilities can be used to improve the availability and resiliency of applications. Two types of Elastic Load Balancer health thresholds are available: target group health checks and static health check thresholds.

Target group health checks allow users to configure a target group threshold, which is the number of healthy targets the group should maintain. If the number of healthy targets falls below this threshold, traffic is automatically shifted to other healthy targets. This helps ensure that the application is always running at its best.

Static health check thresholds provide an additional layer of control. When enabled, they allow users to configure a static threshold, which is the number of healthy targets that must be maintained for traffic to be routed to them. This is useful for applications that require a higher level of availability.
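
In practice these thresholds are configured as target group attributes. The hedged boto3 sketch below sets minimum healthy target counts and percentages on an existing target group; the target group ARN, the attribute keys, and the threshold values are assumptions to adjust for your setup.

```python
import boto3

# Hedged sketch: require a minimum number/share of healthy targets before the
# load balancer treats the target group as healthy. Target group ARN and
# threshold values are placeholders.
elbv2 = boto3.client("elbv2", region_name="eu-west-1")

elbv2.modify_target_group_attributes(
    TargetGroupArn="arn:aws:elasticloadbalancing:eu-west-1:111122223333:targetgroup/web/abc123",
    Attributes=[
        # Fail over DNS away from a zone when healthy targets drop below the threshold.
        {"Key": "target_group_health.dns_failover.minimum_healthy_targets.count", "Value": "2"},
        {"Key": "target_group_health.dns_failover.minimum_healthy_targets.percentage", "Value": "50"},
        # Keep routing only to healthy targets until the threshold is breached.
        {"Key": "target_group_health.unhealthy_state_routing.minimum_healthy_targets.count", "Value": "1"},
    ],
)
```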

The Lightning in a Bottle Festival 2023: DDR.Live Deploys AWS Private 5G

DDR.Live, a digital events platform, recently deployed AWS Private 5G to power the 2023 edition of the Lightning in a Bottle Festival. With this solution, DDR.Live was able to provide guests with a variety of technologies, such as Point of Sale (PoS), access control, ticketing, and check-in.

The AWS Private 5G solution allowed DDR.Live to easily manage the large amount of traffic that the event was generating. It also provided the guests with a reliable connection, even in remote locations. Furthermore, the solution integrated seamlessly with the ELB health check thresholds, ensuring that only healthy targets were receiving traffic.

The Benefits of AWS Private 5G & ELB Health Check Thresholds

Using AWS Private 5G and ELB health check thresholds together, DDR.Live was able to provide a seamless and reliable experience for their guests. The solution ensured that the event was always running at its best, even in remote locations. Additionally, the integration of ELB health check thresholds allowed DDR.Live to ensure that only healthy targets were receiving traffic. This helped improve the overall availability and resiliency of the event.

How KeyCore Can Help

At KeyCore, we understand the importance of providing a reliable and seamless experience for event guests. Our team of AWS certified professionals can help you design, deploy, and maintain a reliable infrastructure using AWS Private 5G and ELB health check thresholds. Whether you are looking to build an application from the ground up or optimize an existing one, we can provide you with the expertise and guidance you need to make sure your event runs smoothly and securely. Contact us today to learn more.

Read the full blog posts from AWS

AWS Compute Blog

Hybrid Cloud Storage on AWS Local Zones

AWS Local Zones are a type of infrastructure deployment that places compute, storage, database, and other select AWS services close to large population and industry centers. With Local Zones close to large population centers in metro areas, customers can provide low-latency access to services such as Amazon S3 and Amazon DynamoDB. This post will walk through how to set up hybrid storage on AWS using AWS Storage Gateway and Local Zones.

Retrieving Parameters and Secrets with AWS Powertools

When building serverless applications using AWS Lambda, developers often need to retrieve parameters, such as database connection details, API secrets, or global configuration values at runtime. AWS Powertools is an open source library for serverless developers that provides a consistent API across multiple services and platforms, simplifying the process of retrieving such parameters or secrets securely and reliably.
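
A minimal sketch of this retrieval pattern with Powertools for AWS Lambda (Python) is shown below. The parameter path and secret name are placeholders, and retrieved values are cached between invocations by default.

```python
from aws_lambda_powertools.utilities import parameters

# Minimal sketch: fetch configuration at runtime inside a Lambda handler.
# Parameter path and secret name are placeholders.

def handler(event, context):
    # Plain configuration value from SSM Parameter Store (cached briefly by default).
    api_endpoint = parameters.get_parameter("/my-app/prod/api-endpoint")

    # Encrypted value, decrypted transparently, with a longer cache to limit API calls.
    api_key = parameters.get_parameter("/my-app/prod/api-key", decrypt=True, max_age=300)

    # A secret from AWS Secrets Manager, e.g. database credentials stored as JSON.
    db_credentials = parameters.get_secret("my-app/prod/database", transform="json")

    return {
        "endpoint": api_endpoint,
        "db_user": db_credentials["username"],
        "key_present": bool(api_key),
    }
```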

Implementing AWS Well-Architected Best Practices for Amazon SQS

Amazon Simple Queue Service (Amazon SQS) is a fully managed message queuing service that makes it easy to decouple and scale microservices, distributed systems, and serverless applications. This blog post series demonstrates best practices for Amazon SQS using the AWS Well-Architected Framework. Part 1 covers Reliability, Part 2 covers Security, and Part 3 covers Performance Efficiency.

By following best practices for Amazon SQS, customers can create more reliable, secure, and efficient applications utilizing this service. This blog post series covers topics such as the importance of designing for failure, using encryption for message payloads, and monitoring queues for performance and cost efficiency.
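
To ground a couple of these recommendations, here is a hedged boto3 sketch that creates an encrypted queue with a dead-letter queue and a redrive policy. Queue names, the visibility timeout, and the receive limit are placeholders.

```python
import json
import boto3

# Hedged sketch: an encrypted queue with a dead-letter queue, reflecting the
# reliability and security recommendations above. Names and limits are placeholders.
sqs = boto3.client("sqs", region_name="eu-west-1")

# Dead-letter queue that captures messages that repeatedly fail processing.
dlq_url = sqs.create_queue(QueueName="orders-dlq")["QueueUrl"]
dlq_arn = sqs.get_queue_attributes(
    QueueUrl=dlq_url, AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]

# Main queue: SQS-managed encryption at rest plus a redrive policy to the DLQ.
sqs.create_queue(
    QueueName="orders",
    Attributes={
        "SqsManagedSseEnabled": "true",
        "VisibilityTimeout": "60",  # should exceed the consumer's processing time
        "RedrivePolicy": json.dumps({"deadLetterTargetArn": dlq_arn, "maxReceiveCount": "5"}),
    },
)
```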

KeyCore can help with building and deploying serverless applications that utilize Amazon SQS. Our AWS certified experts have experience working with the AWS Well-Architected Framework, and can help you ensure that your applications are optimized for cost, security, performance, and reliability.

Read the full blog posts from AWS

AWS for M&E Blog

How NASCAR and Sky Italia Leverage AWS for Media and Entertainment

The National Association for Stock Car Auto Racing (NASCAR)

The National Association for Stock Car Auto Racing (NASCAR) has a long and illustrious history that dates back to the late 1940s. It has become one of the most renowned motorsport organizations in the world, giving fans memorable and exciting experiences. To make sure these experiences are delivered in the best possible way, NASCAR leverages Amazon Web Services (AWS) to deliver real-time racing data to broadcasters, racing teams, and fans.

Through the use of AWS, NASCAR has improved its data delivery capabilities, while also providing broadcasters and racing teams with insights from the data they gather. This includes data on track conditions, driver performance, and car details. By leveraging the power of AWS, NASCAR has been able to analyze data from multiple sources in real-time, helping teams make more informed decisions during the race.

The data collected from AWS has also been used to create interactive experiences for fans. This includes providing insights into car performance and driver behavior in real-time. These experiences have created an immersive and engaging fan experience that keeps them coming back for more.

Sky Italia

Sky Italia is one of Europe’s leading media and entertainment companies and the largest pay TV provider in Italy. A key component of its success is its award-winning TV music competition show, “X-Factor”. During the show, the home viewing audience is able to vote for their favorite performers. To make this possible, Sky Italia needed a voting platform that could handle millions of requests from viewers.

Sky Italia was able to insource its voting platform with AWS, resulting in a significant increase in performance and an 84% reduction in costs. By leveraging the scalability and cost-effectiveness of AWS, Sky Italia was able to provide viewers with an instant and secure voting experience. This allowed the show to reach more viewers and gave them a more engaging experience.

Interactive Advertising with Amazon IVS

Media and entertainment companies often face the challenge of monetizing their live streaming content. With platforms such as social media, it can be difficult to advertise precisely the way you want within your show or event.

Amazon IVS is a managed live streaming solution that enables media and entertainment companies to deliver a high quality and low latency live streaming experience. With Amazon IVS, companies are also able to create interactive advertising experiences for their viewers. This can be done by leveraging the low latency streaming capabilities of IVS and recognizing events in the stream.

By recognizing the events within the stream, companies are able to display targeted ads in real-time. This helps to create an engaging experience for the viewer and makes it easier for media and entertainment companies to monetize their live streaming content.
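
One common way to signal such in-stream events is Amazon IVS timed metadata. The hedged sketch below pushes a small ad cue into a live stream with boto3; the channel ARN and the payload are placeholders, and player-side code would listen for the cue to render the interactive ad overlay.

```python
import json
import boto3

# Hedged sketch: inject an ad cue into a live Amazon IVS stream as timed metadata.
# Player-side code can listen for this cue and render an interactive ad overlay.
# Channel ARN and payload are placeholders.
ivs = boto3.client("ivs", region_name="eu-west-1")

ad_cue = {
    "type": "ad-break",
    "campaign": "summer-sale",
    "duration_seconds": 30,
}

ivs.put_metadata(
    channelArn="arn:aws:ivs:eu-west-1:111122223333:channel/AbCdEfGhIjKl",
    metadata=json.dumps(ad_cue),  # timed metadata payloads must stay small (about 1 KB)
)
```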

How KeyCore Can Help

KeyCore is the leading Danish AWS Consultancy and provides both professional and managed services. Our team of experts can help media and entertainment companies take full advantage of the power of AWS. We can help to set up and manage AWS services, optimize performance, and develop custom solutions that are tailored to your needs.

Whether you need help setting up Amazon IVS or want to take advantage of the scalability and cost-effectiveness of AWS, our team can help. Contact us today to learn more about how we can help you get the most out of AWS.

Read the full blog posts from AWS

AWS Storage Blog

Disaster Recovery and Chaos Engineering with AWS Systems Manager and Amazon EBS

In the digital era, ensuring business continuity through effective disaster recovery measures is crucial for organizations of all sizes. Setting up disaster recovery solutions manually, such as installing recovery agents on multiple servers, can be a significant and time-consuming task. Therefore, many customers are increasingly seeking automation to streamline common administrative tasks and ensure their systems are prepared for any unforeseen events. AWS Systems Manager has long been a tool used by customers to save time and simplify deployment, patching and resource configuration management. More recently, AWS Systems Manager has been extended to support automated disaster recovery at scale.

Deploying AWS Elastic Disaster Recovery at Scale

Using AWS Elastic Disaster Recovery, customers can build orchestrated workflows to automate disaster recovery and minimize failover time between AWS Regions. By leveraging AWS Systems Manager, they can deploy a Disaster Recovery Orchestrator (DRO) solution on a fleet of EC2 instances. The DRO is an agent installed on the instances that will wait for a defined disaster recovery event before initiating a failover to an alternate AWS Region. It will coordinate between the instances, ensuring that they are all running in the correct order and connecting to the correct resources in the target Region. As the DRO works in the background, customers can focus on their core business, knowing that their services will be available in case of a disaster.

In order to ensure that DRO is ready to correctly handle disaster recovery events, customers can use the AWS Fault Injection Simulator to conduct chaos engineering experiments. The Fault Injection Simulator is an AWS-managed service for customers to simulate real-world component failures, such as an Amazon Elastic Block Store (EBS) volume becoming unavailable, a network disruption between two Availability Zones, or a system reboot. By simulating a variety of scenarios, customers can determine if their DRO solution is ready to handle any kind of incident.

Conducting Chaos Engineering Experiments on Amazon EBS

Chaos engineering experiments help customers to test the resilience of their applications by deliberately injecting faults into them. AWS Fault Injection Simulator helps customers to conduct these experiments in a controlled environment, allowing them to quickly understand if their applications can handle issues that may arise from components failing. By simulating the failure of an Amazon EBS volume, for instance, customers can evaluate how their applications and DRO solutions handle the situation.

Using AWS Fault Injection Simulator, customers can define a fault injection, including the type and the scope of the fault. Then, they can specify the environment in which the experiment should run, with options such as Amazon Virtual Private Cloud, Amazon EC2 instances, or application resources. After validating the fault injection, customers can monitor the results on Amazon CloudWatch Logs. The Fault Injection Simulator will then help to correlate the fault injection to the actions taken by the DRO, including starting and stopping EC2 instances, and the impact on the application.
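
As a hedged illustration of that flow, the sketch below defines and starts a Fault Injection Simulator experiment that pauses I/O on a tagged EBS volume. The IAM role ARN, tag values, and the CloudWatch alarm used as a stop condition are placeholders.

```python
import boto3

# Hedged sketch: pause I/O on a tagged EBS volume for one minute and stop the
# experiment early if an application alarm fires. ARNs and tags are placeholders.
fis = boto3.client("fis", region_name="eu-west-1")

template = fis.create_experiment_template(
    clientToken="dr-chaos-001",
    description="Pause EBS volume I/O to validate disaster recovery behaviour",
    roleArn="arn:aws:iam::111122223333:role/fis-experiment-role",
    targets={
        "app-volumes": {
            "resourceType": "aws:ec2:ebs-volume",
            "resourceTags": {"chaos-ready": "true"},
            "selectionMode": "COUNT(1)",
        }
    },
    actions={
        "pause-io": {
            "actionId": "aws:ebs:pause-volume-io",
            "parameters": {"duration": "PT1M"},
            "targets": {"Volumes": "app-volumes"},
        }
    },
    stopConditions=[
        {
            "source": "aws:cloudwatch:alarm",
            "value": "arn:aws:cloudwatch:eu-west-1:111122223333:alarm:app-latency-high",
        }
    ],
    tags={"Name": "ebs-pause-io-experiment"},
)

fis.start_experiment(experimentTemplateId=template["experimentTemplate"]["id"])
```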

KeyCore and Disaster Recovery

At KeyCore we understand the importance of robust disaster recovery solutions for businesses of all sizes. We believe in utilizing the power of AWS to build automated and orchestrated systems that enable our customers to focus on their core business, while being prepared for any circumstances. Our team of expert AWS consultants are always on hand to help design, deploy, and manage the optimal disaster recovery solution for your architecture. Contact us today to learn more.

Read the full blog posts from AWS

AWS Architecture Blog

Discovering Microservices with Amazon EC2 and HashiCorp Consul

Organizations of all sizes have been investing in microservices architectures to meet the demands of distributed, resilient, and scalable applications. The challenge of efficient service discovery and configuration management is particularly complex due to the need to span multiple cloud platforms, on-premises data centers, and colocation facilities.

How Service Discovery is Challenging

Service discovery and configuration management are critical to any microservices environment. By dynamically mapping service components and the relationships between them, services can be reliably located without hard-coding IP addresses and ports in application code. This is the key to making microservices applications resilient to changes in their environment, such as scaling and deployment of new services.

The challenge is that service discovery must be reliable and efficient. Services need to be able to locate each other quickly, and service changes must be propagated quickly and accurately. There are several popular solutions for service discovery, but one of the most popular is HashiCorp Consul.

Using Amazon EC2 and HashiCorp Consul for Service Discovery

HashiCorp Consul is an open source service discovery solution that is available on the Amazon EC2 platform. It is lightweight and easy to install and configure, and it provides a range of features to help with service discovery and configuration management.

One of the most important features of Consul is its ability to automatically register and deregister services as they are deployed and undeployed. This helps to ensure that services are always available and their locations always up to date. Additionally, Consul can be used to store and retrieve service configuration data, and to monitor the health of services.
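
To show what registration looks like in code, here is a hedged sketch using the python-consul client against a local Consul agent running on an EC2 instance. The service name, addresses, port, and health check URL are placeholders.

```python
import consul

# Hedged sketch: register a microservice with the local Consul agent and then
# look it up by name. Service name, port, and health check URL are placeholders.
c = consul.Consul(host="127.0.0.1", port=8500)

# Register this instance of the "orders" service with an HTTP health check.
c.agent.service.register(
    name="orders",
    service_id="orders-1",
    address="10.0.1.25",
    port=8080,
    check=consul.Check.http("http://10.0.1.25:8080/health", interval="10s"),
)

# Consumers resolve the service by name instead of hard-coding IPs and ports.
_, services = c.health.service("orders", passing=True)
for entry in services:
    svc = entry["Service"]
    print(f"orders available at {svc['Address']}:{svc['Port']}")
```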

KeyCore Can Help

At KeyCore, we understand how important service discovery is for microservices architectures. We have extensive experience with HashiCorp Consul and Amazon EC2, and can help you to get the most out of these technologies. We can help you to set up Consul on EC2, configure it correctly, and ensure that it is running reliably. Contact us today to learn more.

Read the full blog posts from AWS

AWS Partner Network (APN) Blog

The AWS Partner Network (APN) Blog: The Latest AWS Solutions from our Partners

HCLTech Metafinity and AWS:
HCLTech is driving the evolution of two-dimensional applications into three-dimensional interactive experiences using its flagship offering, Metafinity. Metafinity creates custom 3D avatars and integrates with HCLTech's OBOL Tokenization framework, enabling customers to deliver immersive customer experiences within the metaverse.

Live Troubleshooting of Amazon EKS Applications with Dynamic Instrumentation and Lightrun:
Using the Lightrun developer observability platform together with Amazon EKS, organizations can reduce their mean time to resolution (MTTR) for defects, enhance developer productivity, and lower overall logging costs. Together, Lightrun and Amazon EKS provide full-cycle developer observability.

Scale Your Software and Services with AWS Marketplace:
AWS Marketplace provides customers with added efficiencies, cost savings, and flexible payment and terms. To help partners scale their offerings in AWS Marketplace, AWS has made operational and self-service enhancements. These enhancements help partners drive customer value, lower operating costs, and accelerate time-to-close.

Building a Scalable Machine Learning Model Monitoring System with DataRobot:
DataRobot's machine learning platform can help customers monitor and manage multiple machine learning models, reducing operational overhead and improving efficiency. With the help of Amazon SageMaker, customers can monitor both DataRobot-originated models and SageMaker-originated models in a single place.

Start Your Learning Journey with the Right Roadmap from AWS Training and Certification:
AWS has created several methods, including learning plans, courses with different training styles, and a curated list of popular courses to help partners begin their cloud skills journey. AWS Partner Training and Certification helps partners break down barriers and develop their cloud skills.

Ganit Transforms Fast Fashion Apparel Retail with Intelligent Demand Forecasting on AWS:
Ganit has deployed inventory management systems with intelligent demand forecasting at their core. This has allowed clients to optimize their inventory, leading to efficient working capital deployment and improvement in topline and bottom-line numbers.

Simplify Activity Tracking with TCS Cloud Exponence and AWS CloudTrail Lake:
TCS Cloud Exponence provides resource maintenance, patching, perimeter monitoring, vulnerability protection, observability, and compliance auditing. To support the monitoring and compliance auditing capabilities, TCS leveraged AWS CloudTrail Lake.

3 Simple Ways to Use FactSet’s Financial Data in AWS Workflows:
FactSet provides financial services market and alternative data to help consumers access their data in simple and scalable ways. FactSet has innovative products and capabilities that make it easier to migrate data and workloads to the cloud, as well as enhancing search, collaboration, business process automation, and analytics.

Accelerate Your Analytics Journey on AWS with DXC Analytics and AI Platform:
DXC Technology’s Analytics and AI Platform helps customers develop and deploy analytics applications faster. This platform helps customers look further and deeper, gaining business insights from data they could not previously access or manage.

Subscribe and Ingest AWS Data Exchange Data into Databricks and Visualize it with Amazon QuickSight:
AWS Data Exchange datasets can be ingested, transformed, and stored in the Databricks Lakehouse Platform using Delta Live Tables (DLT), a framework for building reliable, maintainable, and testable data processing pipelines. Visualizing this data in Amazon QuickSight then helps customers create deeper, more meaningful insights.

Automate Data Sharing with Collibra and AWS Lake Formation:
Collibra and AWS have developed products and capabilities to assist customers with data access, data governance, data quality, and observability in the cloud. Innovations include new capabilities that make it easier to migrate data and workloads to the cloud, as well as enhancements to search, collaboration, business process automation, and analytics.

Simplifying Industry 4.0 Advancements for Legacy Manufacturers with Tech Mahindra Factory Information System:
Tech Mahindra FIS helps manufacturers embrace Industry 4.0 technologies, optimizing their operations and unlocking new opportunities. By using Tech Mahindra FIS, manufacturers can benefit from the transformative potential of Industry 4.0, improving productivity, efficiency, and competitiveness.

How Snowflake Optimized its Virtual Warehouses for Sustainability Using AWS Graviton:
Snowflake reduced its carbon emissions footprint and improved performance efficiency by transitioning virtual warehouses to AWS Graviton-based instance types. This enabled Snowflake to reduce the carbon intensity of workloads, while meeting customer demand sustainably as compute requirements increase.

At KeyCore, we help customers create meaningful and deeper insights to improve their customer experience. Our expert cloud consultants can help customers migrate their services and workloads to the cloud, and take full advantage of AWS services — from analytics and AI platforms, to inventory management systems and data sharing solutions. Contact us today to learn more.

Read the full blog posts from AWS

AWS Cloud Enterprise Strategy Blog

What Generative AI Means for Your Business

Generative AI has been a topic of much excitement and speculation in conversations with AWS customer executives. They’re wondering how this technology will affect their businesses. It’s not about what generative AI can do, but what it means for customers.

Generative AI automates the tedious process of creating a digital twin of a product or service and allows users to simulate it in a virtual environment. AI can quickly generate and customize designs, and can also detect anomalies and identify failures. With the help of generative AI, customers can test and analyze multiple solutions and rapidly design a new product or service.

Generative AI has potential to revolutionize certain sectors, from manufacturing to healthcare. For example, in the automotive industry, generative AI can customize the design of a car while maintaining the same safety standards. It can also help reduce time-to-market and lower costs associated with creating a new product.

Generative AI can also aid in the customer experience by helping companies understand customer preferences and tailor their products or services. Also, it can help businesses identify new markets, build predictive models, and forecast demand.

At KeyCore, our team of experienced AWS professionals is well-versed in generative AI and can help you make the most of this technology. We provide professional services such as cloud architecting, migration, and DevOps, as well as managed services like hosting, monitoring, and support. Our team can help you design and deploy a solution that leverages generative AI, benefiting your business.

Continuous Engagement and Innovation

In the digital world, formerly discrete activities have become continuous. In traditional IT, software deliveries were considered singular events. Projects were set, requirements met, code deployed, and then the team moved on to the next project.

However, this is no longer the case. As IT systems become more interconnected and complex, companies need to continuously evaluate their strategies and systems. This requires a shift in mindset from a discrete approach to continuous engagement and innovation.

At KeyCore, our AWS experts can help you with this process. We can help you migrate your legacy systems to the cloud and automate the continuous delivery of software. We can also help you design a secure and cost-effective infrastructure. With our managed services, we can provide visibility, monitoring, and support for your infrastructure.

We understand that cloud infrastructure presents unique security challenges, but our team can help. Using our DevOps services, our team can help you automate security processes, ensuring your systems remain secure from external threats. We can also help you identify and mitigate risks associated with your architecture.

To ensure continuous innovation, our team can help you design a CI/CD pipeline and leverage automated testing to ensure the quality of your applications. With our services, you can quickly deploy your products and services, allowing you to stay competitive in the digital world.

At KeyCore, we understand how important it is to stay engaged and keep innovating. We can help you develop a strategy that allows you to embrace continuous engagement and innovation. Our experienced AWS professionals will work with you to ensure your cloud infrastructure is secure, cost-effective, and optimized to keep you competitive in the digital world.

Read the full blog posts from AWS

AWS HPC Blog

Discover the Benefits of AWS HPC Instances

AWS provides a variety of HPC instance families and sizes to help customers accelerate their most demanding workloads. In this blog post, we will take a deep dive into the benefits of AWS HPC and the Hpc7g instance family, powered by AWS Graviton3E. Additionally, we will explore how SeatGeek is leveraging AWS Batch to simulate massive load and how HPC on AWS is helping to transition to a more sustainable economy.

Instance Sizes in the Amazon EC2 Hpc7 family

The Hpc7 family is the first Amazon EC2 HPC offering with multiple instance sizes, a setup that differs from simply choosing smaller instances in non-HPC instance families. These instance sizes help customers optimize cost, performance, and memory-to-core ratios for their applications and workloads. Hpc7g instances are powered by AWS Graviton3E, an Arm-based processor purpose-built for workloads such as HPC.

Application Performance and Scaling with the Hpc7g Instance

In order to gain a better understanding of how the Hpc7g instance performs with various HPC workloads and disciplines, AWS conducted a series of experiments. These experiments tested the instance’s performance and scaling capabilities. The results showed that the Hpc7g instance was able to handle various applications and workloads and produced significant cost savings over other instances.

How HPC on AWS Is Helping to Transition to a Sustainable Economy

Organizations around the world are aiming to transition to a more sustainable economy, and HPC plays a major role in meeting these goals. AWS HPC services give customers the ability to rapidly scale their HPC workloads to achieve them. AWS has also created an HPC benchmarking suite to provide customers with the information they need to compare their workloads against the best in the industry.

SeatGeek Leverages AWS Batch for Load Testing

SeatGeek needed a system to simulate massive load in order to properly prepare for large event traffic spikes. To achieve this goal, they leveraged AWS Batch to build a load testing system that can simulate 50k simultaneous users. This system now runs weekly to help SeatGeek harden their code. By leveraging AWS services, SeatGeek was able to create an easy-to-use and cost-effective solution to their problem.
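
For illustration, here is a minimal sketch of how such a load test could be submitted with the AWS SDK for JavaScript v3 using an AWS Batch array job; the queue name, job definition, and per-task sizing are placeholders, not SeatGeek's actual configuration.

```typescript
import { BatchClient, SubmitJobCommand } from "@aws-sdk/client-batch";

const batch = new BatchClient({ region: "us-east-1" });

async function submitLoadTest() {
  // Submit an array job: 500 parallel child jobs, each simulating 100 virtual users,
  // for roughly 50,000 simultaneous users in total.
  const response = await batch.send(
    new SubmitJobCommand({
      jobName: "weekly-load-test",
      jobQueue: "load-test-queue",          // placeholder queue name
      jobDefinition: "load-test-worker:1",  // placeholder job definition
      arrayProperties: { size: 500 },
      containerOverrides: {
        environment: [{ name: "USERS_PER_TASK", value: "100" }],
      },
    })
  );
  console.log("Submitted load test job:", response.jobId);
}

submitLoadTest().catch(console.error);
```

Scheduling this script weekly (for example from a pipeline or a cron-style trigger) mirrors the recurring test cadence described above.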

KeyCore Can Help

KeyCore is the leading Danish AWS consultancy and provides professional and managed services. We are highly advanced in AWS and can assist customers in leveraging HPC services to meet their goals. Our experienced team of experts can help you evaluate your HPC workloads, select the best instance size, and optimize performance for cost. Contact us today to learn more.

Read the full blog posts from AWS

AWS Cloud Operations & Migrations Blog

Observing On-Premises Kubernetes Environments with AWS Managed Services

Using Curated Packages and AWS Managed Open Source Services

Customers running containerized workloads on their own hardware use Amazon EKS Anywhere (EKS-A) to manage their Kubernetes clusters. To observe these modern applications, they look for prescriptive guidance. Using AWS-managed open-source services, such as AWS Distro for OpenTelemetry (ADOT), Amazon Managed Service for Prometheus, and Amazon Managed Grafana, helps customers to take advantage of the latest technology advancements in observability and offload the associated operational overhead.

ADOT is an AWS-supported distribution of the OpenTelemetry project, with components that are tuned and tested for optimal performance and compatibility with AWS. With ADOT, customers can easily observe their Kubernetes workloads running on EKS-A and get a comprehensive view of their applications. ADOT integrates directly with Amazon Managed Service for Prometheus, which gives customers an out-of-the-box monitoring experience without needing to install and manage a Prometheus server.

Using its cloud-native data visualization and dashboarding capabilities, Amazon Managed Grafana helps customers observe their environment and view the metrics collected by Prometheus. The solution also provides pre-configured dashboards for EKS Anywhere, which helps customers get started quickly. Customers can also customize the dashboards with data from the other services and applications running on their cluster.
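
As a small illustration of wiring this up programmatically, the sketch below uses the AWS SDK for JavaScript v3 to create an Amazon Managed Service for Prometheus workspace and look up the endpoint an ADOT collector would remote-write to; the alias, region, and exact remote-write path are assumptions to confirm against the AWS documentation.

```typescript
import { AmpClient, CreateWorkspaceCommand, DescribeWorkspaceCommand } from "@aws-sdk/client-amp";

const amp = new AmpClient({ region: "eu-west-1" });

async function createMonitoringWorkspace() {
  // Create a workspace to receive metrics from the ADOT collector running on EKS Anywhere.
  const { workspaceId } = await amp.send(
    new CreateWorkspaceCommand({ alias: "eks-anywhere-observability" })
  );

  // Look up the workspace endpoint; the collector's remote-write URL typically
  // appends api/v1/remote_write to this endpoint.
  const { workspace } = await amp.send(
    new DescribeWorkspaceCommand({ workspaceId: workspaceId! })
  );
  console.log("Remote write endpoint:", `${workspace?.prometheusEndpoint}api/v1/remote_write`);
}

createMonitoringWorkspace().catch(console.error);
```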

KeyCore Can Help

KeyCore can help customers observe their applications and workloads running on their Kubernetes clusters using AWS managed services. We have extensive experience in Kubernetes implementation and optimization, so we can help customers take full advantage of the benefits of AWS managed services to quickly set up the observability of their Kubernetes clusters. Contact us today to learn more!

Migrating Mainframe Systems to the AWS Cloud

A Comprehensive Mapping Guide

Mainframe systems have been utilized by companies worldwide since the 1950s to operate their core business applications. In the digital transformation era, many businesses are transferring their mainframe data and migrating their workloads to AWS. The COVID-19 pandemic has had a major effect on mainframe modernization due to issues such as remote access and scalability.

To help customers migrate their mainframes to AWS, Amazon offers comprehensive mapping guides. These guides assist customers in understanding the dependencies and complexities of their mainframe systems and identifying the AWS services that best fit their requirements. For example, if a customer’s current mainframe architecture includes IBM Db2, they can leverage Amazon Aurora for their relational databases on AWS.

AWS Migration Hub also provides customers with a comprehensive view of the entire migration process. This includes the migration status, the associated resource types, and the associated migration tools. This helps customers to make informed decisions about their migrations and track progress. Additionally, AWS Database Migration Service (DMS) helps customers move their data securely and with minimal downtime. DMS can also replicate data changes in near real-time, allowing customers to keep up with their current workloads.
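
As a hedged example of the automation involved, the sketch below starts an existing DMS replication task with the AWS SDK for JavaScript v3 so a full load is followed by ongoing change data capture; the task ARN and region are placeholders.

```typescript
import {
  DatabaseMigrationServiceClient,
  StartReplicationTaskCommand,
} from "@aws-sdk/client-database-migration-service";

const dms = new DatabaseMigrationServiceClient({ region: "eu-west-1" });

async function startMigration(replicationTaskArn: string) {
  // Kick off a full load followed by ongoing (CDC) replication so the target
  // stays in sync with the source database during cutover.
  const response = await dms.send(
    new StartReplicationTaskCommand({
      ReplicationTaskArn: replicationTaskArn,
      StartReplicationTaskType: "start-replication",
    })
  );
  console.log("Task status:", response.ReplicationTask?.Status);
}

startMigration("arn:aws:dms:eu-west-1:123456789012:task:EXAMPLE").catch(console.error);
```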

KeyCore Can Help

KeyCore can help customers migrate their mainframe systems to AWS. Our consulting services help customers identify the best options for their mainframe architecture and provide the support they need to migrate to the cloud. Furthermore, our managed services help customers leverage the full potential of the AWS cloud for their business requirements, all with minimal disruption and downtime. Contact us today to learn more!

Business Continuity in the AWS Cloud

Exploring the Flexibility of AWS

The impact of technology on our day-to-day lives is more evident than ever before. Reliability and always-on systems have become the norm, and customer expectations are high when things go wrong. To meet these expectations, IT practitioners must explore the flexibility of AWS to open new doors for business continuity.

AWS provides a number of services to help customers maintain business continuity. This includes Amazon EC2 Auto Scaling, which gives customers the ability to scale capacity up or down with the demand for their applications. Furthermore, AWS offers a range of disaster recovery options to help customers keep their systems and data available, including Amazon DynamoDB, Amazon Elastic Block Store (EBS), and Disaster Recovery as a Service (DRaaS) offerings.

When it comes to cost optimization, AWS provides customers with a number of options, including Amazon EC2 Reserved Instances and Spot Instances, which give customers access to compute capacity at discounted rates. Additionally, AWS Trusted Advisor can help customers identify cost optimization opportunities and ensure that their resources are always used efficiently.
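
To make the Spot option concrete, here is a minimal sketch that launches a single Spot Instance with the AWS SDK for JavaScript v3; the AMI ID, instance type, and region are placeholders.

```typescript
import { EC2Client, RunInstancesCommand } from "@aws-sdk/client-ec2";

const ec2 = new EC2Client({ region: "eu-west-1" });

async function launchSpotWorker() {
  // Launch one Spot Instance for interruptible work at a discount
  // compared to On-Demand pricing.
  const response = await ec2.send(
    new RunInstancesCommand({
      ImageId: "ami-0123456789abcdef0", // placeholder AMI ID
      InstanceType: "m6g.large",
      MinCount: 1,
      MaxCount: 1,
      InstanceMarketOptions: {
        MarketType: "spot",
        SpotOptions: { SpotInstanceType: "one-time" },
      },
    })
  );
  console.log("Launched:", response.Instances?.[0]?.InstanceId);
}

launchSpotWorker().catch(console.error);
```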

KeyCore Can Help

KeyCore can help customers optimize their IT infrastructure and ensure business continuity in the AWS cloud. Our experienced experts can help customers design and build an IT architecture that is tailored to their specific business requirements. Additionally, our managed services provide customers with the resources they need to quickly identify and address any issues they may encounter. Contact us today to learn more!

Read the full blog posts from AWS

AWS for Industries

AWS for Industries

The world is facing an urgent need to rapidly reduce carbon emissions in order to avoid catastrophic climate change. To meet this goal, companies must use carbon value modeling to achieve net-zero emissions and keep the rise in the Earth’s mean temperature below 2°C above preindustrial levels. AWS is helping the hospitality industry migrate legacy systems and applications to the cloud, allowing businesses to take advantage of digital transformation. Additionally, AWS helps industrial companies securely and efficiently leverage their operational and information technology data for insights, optimization, and enhanced business decisions.

Coriell Life Sciences is using AWS to provide patients, physicians, and pharmacists with insight into the safest and most effective medications based on an individual’s DNA and other factors. For the retail industry, AWS sponsored the Mach Two conference in Amsterdam. It was a thought leadership forum focused on understanding the benefits of composable technologies for brands and retailers. Moreover, AWS has published guidance on RFID store inventory management. This technology can assist retailers with out-of-stock situations, misplaced items, and shrinkage.

Finally, AWS is partnering with CPGs to help them address major challenges, such as supply chain disruptions and labor shortages. Moreover, AWS is also helping communication service providers take advantage of 5G core networks on the cloud.

At KeyCore, we are highly advanced in AWS and can assist companies in leveraging these technologies to drive business value. Our expertise includes professional services, managed services, and providing technical recommendations. We can also provide guidance on digital transformation and migrating to the cloud. Contact us to learn more about how we can help.

Read the full blog posts from AWS

AWS Messaging & Targeting Blog

What You Need to Know About Amazon Simple Email Service

Amazon Simple Email Service (SES) is a bulk and transactional email sending service for businesses and developers. To make the most of SES, users should be aware of the various features and options available. In this post, we’ll cover how to grant another SES account or user permission to send emails, how to build an email service on SES, how to list over 1000 email addresses from an account-level suppression list, how to verify an email address in SES which does not have an inbox, how to manage global sending of SMS with Amazon Pinpoint, how to investigate what happened to the email that was sent via SES but was never received in the recipient’s inbox, and how to manage SMS opt-outs with Amazon Pinpoint. We’ll also discuss what a spam trap is and why you should care.

Granting Another SES Account or User Permission to Send Emails

To send emails from a particular email address through SES, users have to verify ownership of the email address, the domain used by the email address, or a parent domain of the domain used by the email address. This process can be simplified by granting another user or SES account permission to send emails. To do this, users have to create an identity policy and then add the desired user or SES account to the policy.
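
A minimal sketch of such a sending authorization policy, applied with the AWS SDK for JavaScript v3, might look like the following; the account IDs, domain, and policy name are placeholders.

```typescript
import { SESClient, PutIdentityPolicyCommand } from "@aws-sdk/client-ses";

const ses = new SESClient({ region: "eu-west-1" });

// Allow another AWS account (placeholder account ID) to send email
// from a verified identity that this account owns.
const policy = {
  Version: "2012-10-17",
  Statement: [
    {
      Sid: "AllowCrossAccountSending",
      Effect: "Allow",
      Principal: { AWS: "arn:aws:iam::111122223333:root" },
      Action: ["ses:SendEmail", "ses:SendRawEmail"],
      Resource: "arn:aws:ses:eu-west-1:999999999999:identity/example.com",
    },
  ],
};

async function grantSendingPermission() {
  await ses.send(
    new PutIdentityPolicyCommand({
      Identity: "example.com",
      PolicyName: "CrossAccountSendingPolicy",
      Policy: JSON.stringify(policy),
    })
  );
  console.log("Sending authorization policy attached");
}

grantSendingPermission().catch(console.error);
```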

Building an Email Service on SES

Customers can send and receive email with SES through a combination of its public SMTP interface and the SES SDK. To build an email service on SES, customers need to set up an SMTP server, use an email sending library, and set up incoming and outgoing email processing. Setting up an SMTP server involves setting up an internet domain name, creating a DNS record, and configuring a TLS certificate. Setting up an email sending library involves setting up a credentials provider and configuring the library itself. Finally, incoming and outgoing email processing involves configuring a message processor, a message validator, an email address processor, and an email validator.
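
As a small example of the sending side, the sketch below sends a simple message through the SES v2 API with the AWS SDK for JavaScript v3; the sender and recipient addresses are placeholders, and the sender must already be a verified identity.

```typescript
import { SESv2Client, SendEmailCommand } from "@aws-sdk/client-sesv2";

const ses = new SESv2Client({ region: "eu-west-1" });

async function sendWelcomeEmail(recipient: string) {
  // Send a simple transactional email from a verified identity.
  const response = await ses.send(
    new SendEmailCommand({
      FromEmailAddress: "no-reply@example.com", // must be a verified identity
      Destination: { ToAddresses: [recipient] },
      Content: {
        Simple: {
          Subject: { Data: "Welcome!" },
          Body: { Text: { Data: "Thanks for signing up." } },
        },
      },
    })
  );
  console.log("MessageId:", response.MessageId);
}

sendWelcomeEmail("customer@example.org").catch(console.error);
```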

Listing over 1000 Email Addresses from an Account-Level Suppression List

SES offers an account-level suppression list, which helps customers avoid sending emails to addresses that have previously resulted in bounce or complaint events. Because the ListSuppressedDestinations API action returns results in pages, listing more than 1000 email addresses requires following the returned pagination token across successive calls. The list can also be reviewed in the Amazon SES console.
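
A minimal sketch of that pagination with the AWS SDK for JavaScript v3 could look like this; the region is a placeholder.

```typescript
import { SESv2Client, ListSuppressedDestinationsCommand } from "@aws-sdk/client-sesv2";

const ses = new SESv2Client({ region: "eu-west-1" });

async function listAllSuppressedAddresses(): Promise<string[]> {
  const addresses: string[] = [];
  let nextToken: string | undefined;

  // The API returns results one page at a time, so keep following NextToken
  // until the whole account-level suppression list (including >1000 entries)
  // has been fetched.
  do {
    const page = await ses.send(
      new ListSuppressedDestinationsCommand({ NextToken: nextToken })
    );
    for (const entry of page.SuppressedDestinationSummaries ?? []) {
      if (entry.EmailAddress) addresses.push(entry.EmailAddress);
    }
    nextToken = page.NextToken;
  } while (nextToken);

  return addresses;
}

listAllSuppressedAddresses().then((list) => console.log(`Found ${list.length} suppressed addresses`));
```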

Verifying an Email Address in SES which Does Not Have an Inbox

Amazon SES allows users to send emails from their own email addresses and domains. To verify an email address in SES that does not have an inbox, users set up an AWS user account, set up an SES identity, set up an SMTP user account, and then complete verification. To set up an AWS user account, users log in to AWS and create a new user in the IAM console. To set up an SES identity, users access the SES console or call the VerifyEmailIdentity API action. To set up an SMTP user account, users create a new SMTP user in the SES console. Finally, because the address has no inbox, the verification email must be captured another way, for example with an SES receipt rule that stores incoming mail in Amazon S3, so the verification link can be retrieved and followed from there.
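
For the API route, a minimal sketch with the AWS SDK for JavaScript v3 might look like the following; the email address and region are placeholders, and capturing the verification email via a receipt rule is one possible approach rather than the only one.

```typescript
import { SESClient, VerifyEmailIdentityCommand } from "@aws-sdk/client-ses";

const ses = new SESClient({ region: "eu-west-1" });

async function startVerification(emailAddress: string) {
  // SES sends a verification email to this address; for an address without an
  // inbox, capture that email (for example with an SES receipt rule that writes
  // to S3) and follow the verification link from there.
  await ses.send(new VerifyEmailIdentityCommand({ EmailAddress: emailAddress }));
  console.log(`Verification email sent to ${emailAddress}`);
}

startVerification("alerts@example.com").catch(console.error);
```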

Managing Global Sending of SMS with Amazon Pinpoint

Amazon Pinpoint has a global SMS reach of 240 countries and regions. To use this feature, customers need to set up an Amazon Pinpoint project, configure message channels, set up message templates, and send messages. Setting up an Amazon Pinpoint project involves creating a project in the Amazon Pinpoint console and connecting it to an AWS account. Configuring message channels involves integrating different messaging services with Amazon Pinpoint and setting up message templates. Finally, sending messages involves making an API call to the SendMessages API action.
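
As an illustration of the final step, the sketch below sends one transactional SMS with the AWS SDK for JavaScript v3; the project ID and phone number are placeholders.

```typescript
import { PinpointClient, SendMessagesCommand } from "@aws-sdk/client-pinpoint";

const pinpoint = new PinpointClient({ region: "eu-west-1" });

async function sendSms(phoneNumber: string, body: string) {
  // Send a single transactional SMS through an existing Pinpoint project.
  const response = await pinpoint.send(
    new SendMessagesCommand({
      ApplicationId: "exampleProjectId", // placeholder Pinpoint project ID
      MessageRequest: {
        Addresses: { [phoneNumber]: { ChannelType: "SMS" } },
        MessageConfiguration: {
          SMSMessage: { Body: body, MessageType: "TRANSACTIONAL" },
        },
      },
    })
  );
  console.log(response.MessageResponse?.Result?.[phoneNumber]?.DeliveryStatus);
}

sendSms("+4512345678", "Your verification code is 123456").catch(console.error);
```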

Investigating What Happened to an Email Sent via SES

At times, emails sent via SES may never reach the recipient’s inbox. To investigate what happened to the email, users have to check the message headers, look up the recipient’s email address, and check the message delivery metrics. To check the message headers, users have to open the message in their email client and view the original message headers. To look up the recipient’s email address, users have to use the SES console or the Amazon Pinpoint console. Finally, to check the message delivery metrics, users have to use the SES console or the Amazon Pinpoint console.

Managing SMS Opt-Outs with Amazon Pinpoint

To meet compliance regulations, companies need to give customers the ability to opt out of receiving SMS communications. Amazon Pinpoint can automatically opt out customers who request it. To set this up, customers configure a message template, an opt-out handler, and an opt-out response, all from the Amazon Pinpoint console.

What is a Spam Trap and Why You Should Care

A spam trap is an email address that should not be receiving mail. Spam traps are used by anti-spam companies to identify sources of spam. If a company is sending emails to a spam trap, it can damage the company’s reputation and adversely affect its message delivery rate. Companies should therefore take steps to ensure that no emails they send are sent to spam traps.

How KeyCore Can Help

At KeyCore, our team of experienced AWS consultants can help you make the most of your Amazon Simple Email Service (SES) account. We can assist you with setting up and configuring your SES account, as well as helping you build an email service on SES. We can also help you create an account-level suppression list, verify an email address without an inbox, manage global sending of SMS with Amazon Pinpoint, and investigate what happened to an email sent via SES. Our team can also help you manage SMS opt-outs with Amazon Pinpoint and ensure you are aware of and understand what a spam trap is and why you should care. Contact us today to see how we can help.

Read the full blog posts from AWS

AWS Marketplace

Managing your AWS Marketplace Spend with Purchase Order Features

In this blog post, we will show you how to use the Billing and Cost Management console, and AWS Marketplace purchase order features, to help ensure your invoices for AWS Marketplace purchases reflect the proper purchase order (PO). By enabling the transaction purchase order feature in the AWS Marketplace console, AWS accounts in your AWS organization with permission to subscribe can add specific purchase orders for AWS Marketplace transactions during procurement.

Self-Service Updates to Container Products in AWS Marketplace

AWS Marketplace now enables sellers, Independent Software Vendors (ISVs), and Consulting Partners (CPs) to make self-service updates to their container-based product listings. With this feature, they will be Consistent Authorization Experience (CAE) compliant. We’ll show how to use the self-service feature to update the different parts of a container-based product listing.
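
Behind the console, listing changes are submitted through the AWS Marketplace Catalog API. The sketch below shows the general shape of a change set submitted with the AWS SDK for JavaScript v3; the change type, entity type string, and details payload are illustrative assumptions that should be checked against the Catalog API reference for container products.

```typescript
import {
  MarketplaceCatalogClient,
  StartChangeSetCommand,
} from "@aws-sdk/client-marketplace-catalog";

const catalog = new MarketplaceCatalogClient({ region: "us-east-1" });

async function updateContainerListing(productId: string) {
  // Submit a change set against a container product. The ChangeType and Details
  // payload below are illustrative placeholders; the exact values depend on the
  // kind of update being made.
  const response = await catalog.send(
    new StartChangeSetCommand({
      Catalog: "AWSMarketplace",
      ChangeSet: [
        {
          ChangeType: "UpdateInformation",
          Entity: { Type: "ContainerProduct@1.0", Identifier: productId },
          Details: JSON.stringify({ Description: { ShortDescription: "Updated summary" } }),
        },
      ],
    })
  );
  console.log("Change set started:", response.ChangeSetId);
}

updateContainerListing("example-product-id").catch(console.error);
```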

How KeyCore Can Help Your Business with AWS Marketplace

At KeyCore, we understand the value of a marketplace that allows customers to purchase products and services from third-party sellers. Our AWS consulting services provide businesses with the support they need to maximize their presence on AWS Marketplace. Our team of experts will help your business implement the purchase order features for AWS Marketplace and self-service updates to container-based products.

We will also work with you to ensure compliance with the CAE and create a secure and effective purchasing process. Our experienced AWS consultants will guide you every step of the way and provide the necessary tools and resources to help you succeed.

Contact us today to learn more about our AWS Consulting Services and how we can help your business with AWS Marketplace.

Read the full blog posts from AWS

The latest AWS security, identity, and compliance launches, announcements, and how-to posts.

AWS re:Inforce: Accelerating Incident Response in the Cloud

AWS re:Inforce 2023 focused on the latest security-focused solutions and best practices to help protect customer workloads. Hundreds of technical and non-technical sessions were held across six tracks, giving customers, partners, and industry peers the opportunity to learn about the latest security trends.

Threat Detection & Incident Response Track

The Threat Detection & Incident Response Track at AWS re:Inforce provided an in-depth look at how AWS customers can use AWS security services and capabilities to detect and respond to threats quickly. Highlights of the track included a session with Michelle McManus, Chief Security Officer at Autodesk, and John Biggs, VP of Security at AWS, on best practices for accelerating incident response.

Best Practices for Accelerating Incident Response

During the session, McManus and Biggs highlighted several best practices for accelerating incident response in the cloud. These include establishing secure cloud foundations, building an effective incident response program, and leveraging automation to speed up the detection and response time.

Establish Secure Cloud Foundations

The first best practice McManus and Biggs suggested was to establish secure cloud foundations. This includes selecting the right services to support secure operations, configuring and deploying them correctly, and monitoring for any misconfigurations or malicious activity.

Build an Effective Incident Response Program

The next best practice is to build an effective incident response program. This involves developing a response plan, conducting regular drills, and creating an incident response team. Additionally, McManus and Biggs suggested leveraging the AWS Security Hub to quickly detect and respond to incidents.

Leverage Automation to Speed Up Detection and Response Time

The last best practice McManus and Biggs discussed was leveraging automation to speed up the detection and response time. Automation can be used to streamline tasks, reduce human error, and quickly detect and respond to threats.

KeyCore Can Help

At KeyCore, we have experience helping customers build effective incident response programs utilizing the latest AWS security services and capabilities. Our team can help you develop a comprehensive incident response plan, create an incident response team, and leverage automation to speed up the detection and response time. Contact us today to learn more.

Read the full blog posts from AWS

AWS Startups Blog

The AWS Global Fintech Accelerator and the AWS Impact Accelerator

The AWS Global Fintech Accelerator

Amazon Web Services (AWS) has recently launched its first Global Fintech Accelerator. This program provides fintech founders the support and mentorship necessary to develop financial services solutions that leverage the power of AI/ML and the cloud.

The AWS Impact Accelerator

Reports show that only 1% of venture-backed founders are Black, 1.8% Latino, and 9% women. To help contribute to this change, AWS launched the AWS Impact Accelerator for startups led by underrepresented founders. This program gives pre-seed startups the tools and knowledge to reach key milestones, such as raising funds or being accepted to a seed-stage accelerator program, while creating powerful solutions in the cloud.

This program has had three cohorts so far, and each cohort has made significant progress. Cohort 1 founders have raised over $1.3 million in seed funding, Cohort 2 founders have raised over $1.6 million, and Cohort 3 founders have raised over $2.9 million.

How KeyCore Can Help

At KeyCore, we specialize in providing advanced AWS services to help fintech startups and other businesses leverage the power of the cloud. Our services include professional and managed services, as well as advice and guidance for AWS users. We also provide highly technical assistance to those using AWS, such as help with CloudFormation YAML, AWS API Calls using Typescript, and AWS SDK for JavaScript v3. Contact us today to see how we can help you make the most of AWS.

Read the full blog posts from AWS

Business Productivity

Improve SaaS Application Security and Observability with AWS AppFabric

The adoption of software-as-a-service (SaaS) applications continues to grow, with many organizations now utilizing hundreds of SaaS apps across their business. While SaaS apps can help improve employee productivity, the data silos they create have the potential to lead to security issues and inefficiencies. IT and security teams often struggle to quickly identify and respond to security threats.

AWS AppFabric offers a solution that allows organizations to centralize visibility and control over SaaS applications, while maintaining data privacy and compliance. AppFabric provides a single point of control that makes it easy to monitor and log activity across SaaS apps. It also provides an audit trail of user actions, enabling security teams to quickly identify and respond to threats.

AppFabric also ensures compliance with applicable regulations. It helps organizations to identify sensitive data in their SaaS applications, and ensures that this data is handled in accordance with industry and regulatory standards. It also provides a way to automate the process of discovering and remediating any compliance issues.

By using AppFabric, organizations can improve the security and observability of their SaaS applications, while ensuring compliance with applicable regulations. KeyCore provides a range of professional services and managed services to help customers implement and optimize AppFabric. Our team of expert AWS consultants can provide expertise and guidance to ensure that AppFabric is deployed and configured correctly, and can provide ongoing support and management to ensure that organizations are getting the most out of AppFabric.

Effortlessly Summarize Phone Conversations with Amazon Chime SDK Call Analytics: Step-by-Step Guide

The Amazon Chime SDK Call Analytics Real-Time Summarizer is a solution that provides real-time summarization of phone conversations held through the Amazon Chime SDK Voice Connector, using Amazon Chime SDK call analytics. This Real-Time Summarizer uses natural language processing (NLP) to generate real-time summaries of phone conversations.

The Real-Time Summarizer simplifies the process of identifying key topics and decisions from phone conversations. It provides users with a summary of the conversation in text format, along with the time that each topic was discussed. This makes it easy for users to quickly review and understand the key points from each conversation. It also provides a transcript of the conversation, making it easy to look up and review any particular point from the conversation.

Using the Amazon Chime SDK Call Analytics Real-Time Summarizer, organizations can easily and effortlessly summarize and review their phone conversations. KeyCore’s team of expert AWS consultants can provide assistance to customers who wish to implement and use this solution. Our team of AWS experts can help customers to ensure that the solution is deployed and configured correctly, and provide ongoing support and optimization to ensure that organizations get the most out of the solution.

Read the full blog posts from AWS

Front-End Web & Mobile

AWS Amplify Hosting & Library Support for WatchOS and tvOS with a New Badge Program

AWS Amplify is a complete solution that helps front-end web and mobile developers easily build, ship, and host full-stack applications on AWS. Front-end developers can leverage more than 175 AWS services as their use cases evolve. The Amplify Library for Swift now also supports the watchOS and tvOS platforms.

Share Code between Next.js Apps with Nx on AWS Amplify Hosting

AWS Amplify Hosting offers capabilities to work with monorepos, specifically Nx. Monorepos are especially useful for deploying multiple applications that all use the same components, such as a mortgage calculator. By using a monorepo with Nx, it is possible to share code between multiple applications, so developers only need to make changes to shared code in one place.
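
As a small illustration of the idea, the sketch below shows a shared TypeScript library in an Nx workspace that two hypothetical Next.js apps can both import; the workspace alias and file paths are assumptions.

```typescript
// libs/finance/src/lib/mortgage.ts — a shared library in the Nx workspace
export function monthlyPayment(principal: number, annualRate: number, years: number): number {
  const r = annualRate / 12; // monthly interest rate
  const n = years * 12;      // number of monthly payments
  return (principal * r) / (1 - Math.pow(1 + r, -n));
}

// apps/customer-portal/pages/calculator.tsx and apps/broker-portal/pages/quote.tsx
// can both import the same implementation through the workspace alias:
// import { monthlyPayment } from "@acme/finance";
```

A change to monthlyPayment is then picked up by every app that imports it on the next Amplify Hosting build, which is exactly the time saving described above.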

Introducing the AWS Amplify Badge Program

To recognize the contributions of the AWS Amplify community, AWS is introducing the AWS Amplify Badge Program. The program rewards customers and partners who have adopted Amplify, and encourages contributions to the open source ecosystem.

At KeyCore, we provide both professional services and managed services to help our clients leverage the power of AWS Amplify. From setting up authentication, storage, and maps to developing custom components, our team of experts can provide you with a comprehensive and customizable solution that meets your needs.

Read the full blog posts from AWS

AWS Contact Center

Managing Prompts Programmatically with Amazon Connect

Contact centers use prompts to interact with customers, obtain information from customers, and provide updates to customers. Prompts are recorded audio files that are played in call flows. Contact center administrators must quickly react to business needs by adding new prompts or changing existing prompts. Tracking and managing large numbers of prompts can be challenging, so Amazon Connect provides developers the ability to manage prompts programmatically.

The Challenge of Managing Prompts

In contact centers, prompts are used to provide customer support. It is important to manage these prompts in a way that is cost-effective and that allows contact center administrators to quickly update and change them as business needs change. Creating a large number of prompts manually is time-consuming and difficult to track. This makes it difficult for businesses to manage their contact centers in a cost-effective and efficient manner.

Managing Prompts Programmatically with Amazon Connect

Amazon Connect provides developers the ability to manage prompts programmatically. This enables contact center administrators to quickly create, update, and delete prompts without having to manually update each prompt. Additionally, Amazon Connect provides developers with the ability to track and manage large numbers of prompts in an efficient manner. This makes it easier for businesses to manage their contact centers in a cost-effective and efficient manner.
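
A minimal sketch of managing prompts this way with the AWS SDK for JavaScript v3 might look like the following; the instance ID and S3 location are placeholders.

```typescript
import {
  ConnectClient,
  CreatePromptCommand,
  ListPromptsCommand,
} from "@aws-sdk/client-connect";

const connect = new ConnectClient({ region: "eu-west-1" });
const instanceId = "11111111-2222-3333-4444-555555555555"; // placeholder Connect instance ID

async function managePrompts() {
  // Create a new prompt from an audio file previously uploaded to S3.
  const created = await connect.send(
    new CreatePromptCommand({
      InstanceId: instanceId,
      Name: "opening-hours-update",
      Description: "Played while we update our opening hours",
      S3Uri: "s3://example-prompt-bucket/opening-hours-update.wav", // placeholder bucket/key
    })
  );
  console.log("Created prompt:", created.PromptId);

  // List existing prompts so administrators can track what is deployed.
  const prompts = await connect.send(
    new ListPromptsCommand({ InstanceId: instanceId, MaxResults: 50 })
  );
  prompts.PromptSummaryList?.forEach((p) => console.log(p.Name, p.Id));
}

managePrompts().catch(console.error);
```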

KeyCore’s Role

At KeyCore, our team of AWS certified consultants can help businesses with their AWS Contact Centers and leveraging Amazon Connect to manage prompts programmatically. Our team is experienced in utilizing the latest AWS technologies to provide solutions tailored to each business’s needs. We can help set up your contact center, or work with you to optimize your existing setup. If you are looking for a team of experienced AWS certified consultants to help with your contact center, contact us today!

Read the full blog posts from AWS

Innovating in the Public Sector

Innovating in the Public Sector with the Cloud and AI

Using data-driven solutions to end homelessness

Community Solutions is a US-based nonprofit that works with cities across the country through its Built for Zero program to reduce homelessness. By utilizing AWS to unlock data that measures and monitors progress, the Community Solutions team has been able to make a real difference. The AWS Fix This podcast recently discussed how Community Solutions and the nonprofit Coming Home of Middlesex County in New Jersey are working together to build a better future.

Using AWS, Community Solutions can measure its progress by collecting data about homeless populations and integrating that data with other relevant data sets. This data can then be used to create programs tailored to each city’s specific needs. By leveraging the cloud, Community Solutions can also quickly and easily collaborate with partner organizations, such as Coming Home of Middlesex County, to share and analyze data.

AWS also enables Community Solutions to use predictive analytics to identify individuals at risk of becoming homeless and provide support before they are in crisis. The organization has also developed an online platform, called Zero, to provide real-time access to data on homelessness. This platform is available to community stakeholders, such as government agencies, health care providers, and nonprofits, so they can identify and address issues more quickly.

A framework to mitigate bias and improve outcomes in the new age of AI

AI and ML technologies can provide real value to public sector organizations, but they are not without their challenges. Biases in algorithms, a lack of transparency, and a lack of understanding of the technology can all limit the wider adoption of AI and ML.

AWS provides a framework to help address these challenges and ensure better outcomes for those who rely on public services. The framework consists of three key steps, with AWS services enabling each one. The first step is to identify and explain the potential sources of bias in an AI model. AWS offers a suite of services to help organizations understand the behavior of an AI system, such as Amazon Comprehend for natural language processing (NLP) and Amazon SageMaker for machine learning (ML).

Once bias sources have been identified, the second step is to develop a strategy to mitigate them. AWS provides services such as Amazon Augmented AI to help identify and mitigate bias in an AI system, and Amazon Personalize for generating recommendations from ML models. In addition to using these services, organizations should also consider using best practices such as data splitting and testing different model architectures to ensure the best possible outcomes.

The third and final step is to monitor and audit AI models to ensure they continue to meet organizational goals. AWS provides services such as Amazon Fraud Detector for detecting fraud and Amazon CodeGuru for code review and improvement.

How KeyCore Can Help

At KeyCore, we provide professional and managed services to help public sector organizations use the cloud and AI to unlock their full potential. Our team of experts can help you identify and mitigate bias in your AI models, develop a strategy to improve outcomes, and monitor and audit your models to ensure they are performing as expected. We can also assist with building, deploying, and optimizing your applications on AWS, so you can focus on innovating and improving service delivery to your constituents. To learn more about how we can help you, contact us today.

Read the full blog posts from AWS

The Internet of Things on AWS – Official Blog

Connected Vehicle Platforms and Edge Computing on AWS IoT

AWS offers customers new and updated architectural guidance and design patterns for modernizing and building Connected Vehicle platforms with AWS IoT. These solutions enable automotive manufacturers (OEMs) to differentiate their portfolios not just through hardware and specs, but also through innovative, software-driven connectivity features. Vehicle connectivity and edge computing help customers meet stringent requirements such as low-latency processing, poor or no connectivity to the internet, and secure data management.

Connected Vehicle Platforms with AWS IoT

With AWS IoT, OEMs can build digital twins that collect and store data about vehicle components such as engines, batteries, brakes, and tires. They can also create predictive maintenance models to identify anomalies in vehicle performance, alert customers, and help with preventative maintenance. AWS IoT also provides OEMs with the ability to securely connect vehicles to the cloud and store data in AWS. Furthermore, AWS IoT enables the development of software-driven capabilities such as over-the-air updates, remote diagnostics, and fleet management.
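
To make the data flow concrete, here is a minimal sketch that publishes a telemetry sample to an MQTT topic with the AWS SDK for JavaScript v3; the IoT data endpoint, topic naming, and payload fields are assumptions for illustration.

```typescript
import { IoTDataPlaneClient, PublishCommand } from "@aws-sdk/client-iot-data-plane";

// The endpoint is the account-specific IoT data endpoint (placeholder shown here);
// it can be looked up with `aws iot describe-endpoint --endpoint-type iot:Data-ATS`.
const iotData = new IoTDataPlaneClient({
  region: "eu-west-1",
  endpoint: "https://example-ats.iot.eu-west-1.amazonaws.com",
});

async function publishTelemetry(vehicleId: string) {
  // Publish a small telemetry sample to a vehicle-specific MQTT topic so the
  // cloud-side digital twin can be kept up to date.
  const payload = {
    vehicleId,
    batterySoC: 0.82,
    brakePadWearMm: 6.4,
    timestamp: new Date().toISOString(),
  };

  await iotData.send(
    new PublishCommand({
      topic: `vehicles/${vehicleId}/telemetry`,
      qos: 1,
      payload: Buffer.from(JSON.stringify(payload)),
    })
  );
}

publishTelemetry("VIN-EXAMPLE-0001").catch(console.error);
```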

Edge Computing with AWS IoT Greengrass

AWS IoT Greengrass allows customers to deploy and benchmark ML models at the edge. It enables customers to run ML processes even when there is no connection to the internet, resulting in low latency and secure data management. Using AWS IoT Greengrass, customers can take advantage of features such as ML inference, local data store, device synchronization, and message brokering.

AWS IoT Solutions from KeyCore

KeyCore is the leading AWS consultancy in Denmark. We provide professional services and managed services to help customers get the most out of their AWS solutions. Our team of AWS-certified experts can help customers with their Connected Vehicle Platforms and Edge Computing projects. Whether you are just getting started with IoT and ML or looking to modernize your existing systems, we can help you identify, deploy, and optimize the right AWS solution for your specific needs. Contact us to learn more about how KeyCore can help your organization with its AWS IoT solutions.

Read the full blog posts from AWS

AWS Open Source Blog

Navigating Cloud Resources on AWS with Steampipe Relationship Graphs

Navigating cloud resources on AWS can be a daunting task. When resources are spread out across multiple services, keeping track of them all can be a big challenge. Luckily, there’s a new open source relationship graph tool that can help. Steampipe AWS Insights dashboards have added new capabilities that make it easier to navigate between resources.

Visualizing Resources with Steampipe

Steampipe is a free, open source tool that allows users to query and visualize their AWS resources. It provides an interactive dashboard that displays nodes and edges representing AWS resources and their relationships. Instead of having to manually navigate different services and resources, Steampipe AWS Insights does the work for you, presenting all the necessary information to help users identify relationships between their resources.

The new relationship graph capabilities added to Steampipe AWS Insights make it even easier to keep track of resources. You can now easily view the connections between resources such as EC2 instances, S3 buckets, and Lambda functions. The interactive dashboard allows you to quickly identify and navigate between resources.

Monitoring and Optimizing Performance with Steampipe

In addition to helping you navigate, Steampipe AWS Insights dashboards can also help you monitor and optimize performance. You can access a range of performance metrics, such as CPU utilization, memory usage, and data transfer rates. You can also set up custom alerts to be notified when a resource is not performing as expected.

The insights dashboard can also help you pinpoint the source of issues and get to the root cause quickly. This can help you save time and money, as well as optimize your resources to run more efficiently.

KeyCore’s AWS Services

At KeyCore, we provide both professional services and managed services to help our clients get the most out of their AWS resources. Our experienced team has extensive knowledge and experience of the AWS platform and can provide tailored guidance and advice. We also offer a range of managed services, such as optimization and monitoring, to ensure your resources are running as efficiently as possible.

With Steampipe AWS Insights and the help of KeyCore, you can easily keep track of your resources and monitor their performance. Our experienced team can help you ensure your resources are running optimally so that you can get the most out of your AWS platform.

Read the full blog posts from AWS
