Summary of AWS blogs for the week of Monday, May 1, 2023

In the week of Monday, May 1, 2023, AWS published 100 blog posts. Here is an overview of what happened.

Topics Covered

Desktop and Application Streaming

A Look Back at IGEL Disrupt 2023 Nashville and How to Create an AS2TrustedDomains DNS TXT Record to Redirect the AppStream 2.0 Native Client

IGEL recently hosted its first US-based in-person event since 2019, IGEL DISRUPT 2023 Nashville. Most major players in the End User Computing (EUC) space were represented at this exciting event. Attendees enjoyed breakout sessions and demonstrations of recent innovations.
The AS2TrustedDomains DNS TXT record can only enable the same domain (or subdomains) in which the DNS TXT record is created. To use a third-party identity provider without owning the domain, an alternative architecture is necessary. This blog outlines the process to create an AS2TrustedDomains DNS TXT record for redirecting the AppStream 2.0 native client.
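As a rough sketch of what the post describes, the TXT record can be created programmatically. The example below builds a Route 53 change batch for an AS2TrustedDomains record; the hosted zone ID, domain name, and endpoint value are hypothetical placeholders, not taken from the post.

```python
# Sketch: build a Route 53 ChangeBatch that upserts an AS2TrustedDomains
# DNS TXT record. Domain and endpoint values are placeholders.

def build_txt_change_batch(record_name: str, trusted_domain: str) -> dict:
    """Return a Route 53 ChangeBatch for the AS2TrustedDomains TXT record."""
    return {
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": record_name,
                    "Type": "TXT",
                    "TTL": 300,
                    # TXT record values must be wrapped in double quotes.
                    "ResourceRecords": [
                        {"Value": f'"AS2TrustedDomains={trusted_domain}"'}
                    ],
                },
            }
        ]
    }

change_batch = build_txt_change_batch("example.com.", "example.com")

# To apply the change (requires AWS credentials and a real hosted zone):
# import boto3
# boto3.client("route53").change_resource_record_sets(
#     HostedZoneId="Z0000000EXAMPLE", ChangeBatch=change_batch
# )
```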

Understanding the Need for Session Expiration and Countdown Timers in Amazon AppStream 2.0

Amazon AppStream 2.0 is often used to stream resource-intensive applications that require long-running calculations or simulations. If the session ends prematurely, user satisfaction and productivity will suffer. To illustrate, consider a scenario in which users run simulations that typically take three hours to complete. To keep costs manageable, AppStream 2.0 configuration settings may be adjusted so that sessions end after two hours.
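The two-hour cutoff in that scenario corresponds to the fleet's maximum session duration. As a minimal sketch (fleet and image names are hypothetical), the relevant settings look like this:

```python
# Sketch: AppStream 2.0 fleet settings that cap sessions at two hours --
# the cost-driven configuration described in the scenario above.

TWO_HOURS = 2 * 60 * 60

fleet_params = {
    "Name": "simulation-fleet",            # hypothetical
    "ImageName": "my-simulation-image",    # hypothetical
    "InstanceType": "stream.standard.medium",
    "ComputeCapacity": {"DesiredInstances": 2},
    # The session ends after this many seconds, even mid-calculation.
    "MaxUserDurationInSeconds": TWO_HOURS,
    # Grace period after a disconnect before the session is terminated.
    "DisconnectTimeoutInSeconds": 15 * 60,
}

# To create the fleet (requires AWS credentials):
# import boto3
# boto3.client("appstream").create_fleet(**fleet_params)
```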

How KeyCore Can Help

The engineers at KeyCore can help you design and deploy a solution that meets your organization’s unique needs. Our experts are familiar with the latest EUC technologies, as well as all of the available options related to Amazon AppStream 2.0. We can help you determine the best way to configure AppStream 2.0 to reduce costs without sacrificing user satisfaction. In addition, our team can work with you to create an AS2TrustedDomains DNS TXT record for redirecting the AppStream 2.0 native client to a third-party identity provider.

Read the full blog posts from AWS

AWS DevOps Blog

AWS CloudFormation and DevOps Guru Insights

What is AWS CloudFormation?

AWS CloudFormation is an Infrastructure as Code (IaC) service from AWS that allows customers to model their cloud resources in template files that can be authored or generated in a variety of languages. Customers can deploy these resources and manage their stacks via the AWS Management Console, the AWS Command Line Interface (AWS CLI) or the AWS API. CloudFormation provides customers with an easy way to create and manage their AWS infrastructure using a declarative language.
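To make the declarative model concrete, here is a minimal sketch of a CloudFormation template built as a Python dict and serialized to JSON. The resource and bucket names are placeholders.

```python
import json

# Sketch: a minimal CloudFormation template declaring a single S3 bucket.
# Resource logical ID and bucket name are hypothetical.

template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "DataBucket": {
            "Type": "AWS::S3::Bucket",
            "Properties": {"BucketName": "my-example-data-bucket"},
        }
    },
}

template_body = json.dumps(template, indent=2)

# Deploy via the console, CLI, or API (requires AWS credentials):
# import boto3
# boto3.client("cloudformation").create_stack(
#     StackName="demo-stack", TemplateBody=template_body
# )
```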

Integrating DevOps Guru Insights with CloudWatch Dashboard

Amazon DevOps Guru uses machine learning to detect operational issues and help users monitor the health and performance of their applications in near real time. Many customers use Amazon CloudWatch dashboards to monitor their applications and often ask how they can integrate Amazon DevOps Guru insights in order to have a unified dashboard for monitoring.

To help customers with this, this blog post will showcase how to integrate DevOps Guru proactive and reactive insights to a CloudWatch dashboard by using Custom Widgets. With this integration, customers can correlate trends in their application’s performance and health with DevOps Guru insights.
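The heart of a Custom Widget is a Lambda function that returns HTML. A minimal sketch of the rendering side is below; the insight field names mirror the usual DevOps Guru response shape but should be treated as assumptions, and the sample data is hand-written.

```python
# Sketch: render DevOps Guru insights as an HTML table for a CloudWatch
# Custom Widget. Field names ("Severity", "Name") are assumptions.

def render_insights_widget(insights: list[dict]) -> str:
    rows = "".join(
        f"<tr><td>{i['Severity']}</td><td>{i['Name']}</td></tr>"
        for i in insights
    )
    return f"<table><tr><th>Severity</th><th>Insight</th></tr>{rows}</table>"

# In a real widget Lambda you would fetch ongoing insights first, e.g.:
# import boto3
# resp = boto3.client("devops-guru").list_insights(
#     StatusFilter={"Ongoing": {"Type": "REACTIVE"}}
# )

sample = [{"Severity": "HIGH", "Name": "Elevated API error rate"}]
html = render_insights_widget(sample)
```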

How KeyCore Can Help

Whether customers are new to AWS, want to increase their utilization of AWS services, or are looking for advice on how to better integrate DevOps Guru insights with CloudWatch dashboards, KeyCore can help. Our team of certified AWS experts specialize in helping customers get the most out of their AWS services, and can help customers create customized solutions to their specific needs. Contact us today to learn more about how we can help you get the most out of your AWS environment.

Read the full blog posts from AWS

Official Machine Learning Blog of Amazon Web Services

Build an Image Search Engine with Amazon Kendra and Amazon Rekognition

Searching and obtaining images has never been easier with the internet. However, searching for complex images, like architecture diagrams with numerous visual icons and text, can be challenging. To tackle this problem, Amazon Web Services (AWS) provides a machine learning (ML) solution with its Amazon Kendra and Amazon Rekognition services.

Use Case: Architecture Diagrams

Architecture diagrams are a great example of complex images that need to be searched. To search an architecture diagram successfully, it needs to be properly labeled with detailed descriptions, especially when the diagram is too complex to be accurately understood at a glance.

How Amazon Kendra and Amazon Rekognition Work Together

Amazon Rekognition can detect different objects within an image and provide a description of them. This will help with the initial labeling and indexing of the architecture diagrams.

Amazon Kendra will then be used to search images using natural language queries. This query-based search model requires the images to be associated with a detailed description of the objects present within the image. That’s where Amazon Rekognition comes in. Amazon Kendra will then use the images’ labels and descriptions to find the most relevant results.
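The glue between the two services is turning Rekognition output into text that Kendra can index. A minimal sketch of that step is below; the sample label data is hand-written, not real API output.

```python
# Sketch: turn Rekognition detect_labels / detect_text results into a text
# description that can be indexed by Amazon Kendra alongside the image.

def describe_image(labels: list[dict], detected_text: list[str]) -> str:
    # Keep only confident labels; the 80% threshold is an assumption.
    names = [lab["Name"] for lab in labels if lab.get("Confidence", 0) >= 80]
    parts = []
    if names:
        parts.append("Contains: " + ", ".join(names))
    if detected_text:
        parts.append("Text in image: " + " ".join(detected_text))
    return ". ".join(parts)

labels = [
    {"Name": "Diagram", "Confidence": 99.1},
    {"Name": "Network", "Confidence": 87.4},
    {"Name": "Person", "Confidence": 41.0},  # below threshold, dropped
]
description = describe_image(labels, ["Amazon S3", "AWS Lambda"])
# The description string would then be attached to the image's Kendra document.
```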

Create High-Quality Datasets with Amazon SageMaker Ground Truth and FiftyOne

Voxel51, the company behind FiftyOne, has developed an open-source toolkit for building high-quality datasets and computer vision models. A retail company wanting to build a mobile app to help customers buy clothes needs a high-quality dataset with clothing images, labeled with different categories. To achieve this, Amazon SageMaker Ground Truth and FiftyOne can be used together.

How to Use Amazon SageMaker Ground Truth and FiftyOne Together

First, a dataset of clothing images needs to be created. Amazon SageMaker Ground Truth can be used to create an accurate and cost-effective dataset. Amazon SageMaker Ground Truth has pre-built UI templates that can be used to quickly create datasets with labels. This is perfect for the retail company, as they can create and label clothing images quickly.

Once the dataset is created, FiftyOne can be used to organize, visualize, and interact with the data. The data can be sorted, filtered, and labeled for different categories and attributes. This helps the customer build better models in a much faster and cost-effective way.
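Ground Truth writes its results as a JSON Lines output manifest. As a small sketch of inspecting that output before loading it into a tool like FiftyOne, the snippet below counts images per clothing class; the attribute names ("category", "category-metadata") follow the usual Ground Truth naming pattern but are assumptions here, and the manifest lines are fabricated examples.

```python
import json
from collections import Counter

# Sketch: count label classes in a Ground Truth output manifest (JSON Lines).
# Manifest content and attribute names are illustrative assumptions.

manifest_lines = [
    '{"source-ref": "s3://bucket/img1.jpg", "category": 0, "category-metadata": {"class-name": "shirt"}}',
    '{"source-ref": "s3://bucket/img2.jpg", "category": 1, "category-metadata": {"class-name": "dress"}}',
    '{"source-ref": "s3://bucket/img3.jpg", "category": 0, "category-metadata": {"class-name": "shirt"}}',
]

counts = Counter(
    json.loads(line)["category-metadata"]["class-name"] for line in manifest_lines
)
# counts maps each clothing class to its number of labeled images.
```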

Achieve High Performance with Lowest Cost for Generative AI Inference Using AWS Inferentia2 and AWS Trainium on Amazon SageMaker

Generative AI models, such as GPT4, ChatGPT, DALL-E2, and Bard, have become increasingly popular due to their ability to create human-like text, images, code, and audio. However, they also come at a high cost due to their complexity.

To tackle this problem, Amazon Web Services (AWS) provides a solution with its AWS Inferentia2, AWS Trainium, and Amazon SageMaker services. Together, these services provide a low-cost, high-performance solution for generative AI inference.

How AWS Inferentia2 and AWS Trainium Work

AWS Inferentia2 is a custom-designed machine learning inference chip that provides high performance for generative AI models. It also keeps costs low, as it can be paired with lower-cost general-purpose host processors.

AWS Trainium is a custom-designed machine learning training chip, optimized for training deep neural networks. Faster training times reduce the cost of training generative AI models.

Finally, Amazon SageMaker provides an easy-to-use platform for ML developers to use AWS Inferentia2 and AWS Trainium. It offers an integrated environment so developers can quickly build, train, and deploy generative AI models.
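As a sketch of what targeting Inferentia2 from SageMaker looks like, the endpoint configuration below selects an `ml.inf2` instance type. The model name is a hypothetical placeholder; the full deployment flow (compiling the model for the Neuron SDK, registering it, creating the endpoint) is covered in the post itself.

```python
# Sketch: a SageMaker endpoint configuration targeting an Inferentia2-backed
# instance. Model and config names are hypothetical placeholders.

endpoint_config = {
    "EndpointConfigName": "genai-inf2-config",
    "ProductionVariants": [
        {
            "VariantName": "primary",
            "ModelName": "my-llm-model",       # hypothetical, must be registered
            "InstanceType": "ml.inf2.xlarge",  # Inferentia2-backed instance
            "InitialInstanceCount": 1,
        }
    ],
}

# To create it (requires AWS credentials and a registered model):
# import boto3
# boto3.client("sagemaker").create_endpoint_config(**endpoint_config)
```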

Automate the Deployment of an Amazon Forecast Time-Series Forecasting Model

Time-series forecasting is the process of predicting future values of time-series data. Amazon Forecast offers an AI-powered time-series forecasting service that can help you make more accurate predictions.

Amazon Forecast can be used to quickly deploy time-series forecasting models. It uses machine learning (ML) models to analyze time-series data and make predictions. It is an automated service that requires minimal setup and no ML expertise.

How to Automate the Deployment of an Amazon Forecast Time-Series Forecasting Model

First, you will need to create a dataset of the time-series data that needs to be forecasted. Once the dataset is created, Amazon Forecast will automatically select the best ML model for your data. It uses advanced algorithms to determine the most appropriate ML model for the task at hand.

You can then use Amazon Forecast to forecast future values. Amazon Forecast also provides visualizations of the data, so you can better understand the forecasts. Finally, you can use the Amazon Forecast API to automate the deployment of the time-series forecasting model.
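The automated deployment boils down to a fixed sequence of Forecast API calls, each polled to completion before the next begins. The sketch below lists that sequence as data; all names, paths, and ARNs are placeholders.

```python
# Sketch: the ordered Forecast API calls behind an automated deployment.
# Every name, S3 path, and role ARN below is a hypothetical placeholder.

steps = [
    ("create_dataset_group", {"DatasetGroupName": "demand_dg", "Domain": "RETAIL"}),
    ("create_dataset_import_job", {
        "DatasetImportJobName": "demand_import",
        "DataSource": {"S3Config": {
            "Path": "s3://bucket/demand.csv",
            "RoleArn": "arn:aws:iam::123456789012:role/ForecastRole",
        }},
    }),
    ("create_auto_predictor", {"PredictorName": "demand_predictor"}),
    ("create_forecast", {"ForecastName": "demand_forecast"}),
]

# In automation, each step would be issued via boto3 and polled until ACTIVE:
# import boto3
# forecast = boto3.client("forecast")
# for step_name, params in steps:
#     getattr(forecast, step_name)(**params)
#     # ...poll the matching describe_* call until the resource is ACTIVE
```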

Get Started with Generative AI on AWS Using Amazon SageMaker JumpStart

Generative AI is gaining a lot of public attention, and customers have been asking for more information on AWS’s generative AI solutions. This post surveys those solutions and shows how to get started with them using Amazon SageMaker JumpStart.

Overview of Generative AI Solutions

The post surveys the generative AI solutions available on AWS. These include Amazon SageMaker, Amazon Bedrock, Amazon Titan, Amazon Kendra, and LangChain.

Amazon SageMaker is a fully managed machine learning (ML) platform that provides an easy-to-use environment for ML developers. It offers various services, such as model training, model hosting, and model deployment.

Amazon Bedrock is a fully managed service that makes foundation models from Amazon and leading AI startups available through an API, so organizations can build and scale generative AI applications quickly and cost-effectively.

AWS offers a number of other generative AI building blocks, such as Amazon Titan and Amazon Kendra. Amazon Titan is a family of foundation models built by Amazon for tasks such as text generation and embeddings. Amazon Kendra is an intelligent search service that can be used to build question answering systems.

Finally, LangChain is an open-source framework for building generative AI applications on top of large language models, chaining models together with prompts, tools, and enterprise data sources.

Optimized PyTorch 2.0 Inference with AWS Graviton Processors

New generations of CPUs offer improved performance in machine learning (ML) inference due to specialized built-in instructions. AWS, Arm, Meta, and others have helped optimize the performance of PyTorch 2.0 inference on AWS Graviton processors.

How PyTorch 2.0 Inference is Optimized on AWS Graviton Processors

AWS Graviton processors are optimized for ML inference using PyTorch 2.0. They provide improved performance compared to other existing hardware solutions, due to their flexibility, speed of development, and low operating cost.

AWS, Arm, Meta, and others have collaborated to optimize the performance of PyTorch 2.0 inference on these processors. This has enabled organizations to use PyTorch 2.0 for ML inference at a lower cost and higher performance.

At KeyCore, our AWS Advanced AI Consultants can help you optimize your ML models for AWS Graviton processors. We offer professional services and managed services that will help you get the most out of your ML models. Contact us today to learn more about our services and how we can help you.

Read the full blog posts from AWS

Announcements, Updates, and Launches

This Week in AWS Announcements, Updates, and Launches

Introducing Bob’s Used Books

Today, AWS released a new open-source sample application, a fictitious used books eCommerce store called Bob’s Used Books. This application is designed to help .NET developers working with AWS, providing in-depth samples that can help developers create their own applications. With Bob’s Used Books, developers can get the insights they need to build a reliable, real-world application.
KeyCore can help teams that are considering building applications with AWS. We provide both professional services and managed services, and our advanced AWS consultants can help you design, architect, and launch a successful application quickly.

Set Up Your AWS Notifications in One Place

AWS has also released User Notifications, a single place in the AWS console to set up and view AWS notifications across multiple AWS accounts, Regions, and services. This new feature allows you to centrally set up and view notifications from over 100 AWS services, such as Amazon Simple Storage Service (Amazon S3) object events, Amazon Elastic Compute Cloud (Amazon EC2) instance state changes, and more. This makes it much easier to configure and manage notifications and ensure that your application and infrastructure are running as expected.
KeyCore can help teams that want to take advantage of AWS User Notifications. Our advanced AWS consultants can help you set up notifications quickly and easily, using CloudFormation templates, AWS API calls, and other tools.

AWS Verified Access, Java 17, Amplify Flutter, and More

This week also saw the launch of AWS Verified Access, a new service that helps customers authenticate users and devices using their existing enterprise security controls. Additionally, AWS announced support for Java 17, released Amplify Flutter, and began conference season with the New York Swifty conference.
KeyCore can help teams that want to take advantage of these new offerings. Our advanced AWS consultants can help you customize your application with Java 17, leverage Flutter for your mobile apps, and use AWS Verified Access for secure authentication.

Read the full blog posts from AWS

Containers

Faster Container Deployment with Pre-fetching Images and Semantic Versioning

Prefetching Images to Start Pods Quicker

Many Amazon Web Services (AWS) customers use Amazon Elastic Kubernetes Service (Amazon EKS) to run machine learning workloads. Containers are a great way for machine learning engineers to package and distribute models, while Kubernetes helps with deploying, scaling, and improving. When helping customers that run machine learning training jobs in Kubernetes, we noticed that as the data set grows larger, it takes longer to start a pod in Kubernetes. To overcome this, we can pre-fetch the image before we submit the job. Pre-fetching involves creating a pod in Kubernetes with a special image-puller image and then having it pull the actual image that needs to be deployed onto the cluster. This saves time as the actual pod does not need to wait for the image to be pulled first.
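A common way to run such an image puller on every node is a Kubernetes DaemonSet. The sketch below expresses one as a Python dict (equivalent to the YAML manifest); the ECR image URI is a hypothetical placeholder, and the "pause" container simply keeps the pod alive after the pull completes.

```python
# Sketch: a DaemonSet whose init container pulls a large training image onto
# every node ahead of job submission, caching it for later pods.

prefetch_daemonset = {
    "apiVersion": "apps/v1",
    "kind": "DaemonSet",
    "metadata": {"name": "image-prefetch"},
    "spec": {
        "selector": {"matchLabels": {"app": "image-prefetch"}},
        "template": {
            "metadata": {"labels": {"app": "image-prefetch"}},
            "spec": {
                "initContainers": [{
                    # Pulling the heavy image here caches it on the node.
                    "name": "prefetch",
                    "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/training:latest",
                    "command": ["sh", "-c", "true"],
                }],
                "containers": [{
                    # Minimal long-running container to keep the pod alive.
                    "name": "pause",
                    "image": "registry.k8s.io/pause:3.9",
                }],
            },
        },
    },
}
```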

Using Semantic Versioning for Continuous Deployment

Nowadays, customers are automatically building, testing, and deploying new versions of their application multiple times a day. This process known as continuous deployment ensures quicker updates for end users. One key aspect of continuous deployment is semantic versioning, a system that assigns specific labels to different versions of a software package. This helps manage the release cycle process, as developers can identify which version of the software they are working with quickly.

Using semantic versioning with AWS App Runner, customers can quickly deploy their applications to AWS Fargate or Amazon Elastic Container Service (Amazon ECS) while automatically adding labels to each deployment. With this, developers can track and manage individual versions of their application and reduce the risk of deploying the wrong version.
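The version comparison at the heart of this is simple to sketch: parse `MAJOR.MINOR.PATCH` tags into tuples and compare them. This minimal example ignores pre-release and build metadata, which full semantic versioning also defines.

```python
# Sketch: minimal semantic-version parsing and comparison -- the mechanism a
# deployment pipeline uses to decide whether an image tag is a newer release.

def parse_semver(tag: str) -> tuple[int, int, int]:
    """Parse 'MAJOR.MINOR.PATCH' (optionally 'v'-prefixed) into a tuple."""
    major, minor, patch = tag.lstrip("v").split(".")
    return (int(major), int(minor), int(patch))

def is_newer(candidate: str, current: str) -> bool:
    # Tuple comparison orders by major, then minor, then patch.
    return parse_semver(candidate) > parse_semver(current)

# A pipeline would deploy v1.3.0 over v1.2.9, but skip a rollback to v1.2.8:
assert is_newer("v1.3.0", "v1.2.9")
assert not is_newer("v1.2.8", "v1.2.9")
```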

How KeyCore Can Help

At KeyCore, we provide both professional and managed services for customers using AWS. Our team of AWS experts have the knowledge and experience to help customers improve their container deployments by using pre-fetching images and semantic versioning. We can help customers optimize the time it takes to start a pod in Kubernetes, and help them manage the release cycle process with semantic versioning. Contact KeyCore today to learn more about our AWS offerings.

Read the full blog posts from AWS

AWS Quantum Technologies Blog

Tracking Quantum Experiments with Amazon Braket Hybrid Jobs and Amazon SageMaker Experiments

When running hybrid quantum-classical experiments on the cloud, it is useful to track and manage them to ensure the best results. Amazon Braket and Amazon SageMaker Experiments, both available on AWS, can help with this.

What are Hybrid Jobs?

Hybrid Jobs is a feature of Amazon Braket that enables developers to run hybrid experiments. These experiments combine quantum algorithms with classical algorithms, allowing developers to use the quantum and classical computing capabilities together to accelerate their computing tasks. Hybrid Jobs runs the experiments on quantum computing devices and any other cloud computing resources, such as virtual machines or Amazon Elastic Compute Cloud (EC2) instances.

What is Amazon SageMaker Experiments?

Amazon SageMaker Experiments is an Amazon Web Services (AWS) service used by machine learning (ML) developers to organize their experiments and trials. It is part of Amazon SageMaker and provides an easy way for developers to track, compare, and manage their experiment runs.

Tracking Hybrid Experiments with Amazon Braket and Amazon SageMaker Experiments

Amazon Braket Hybrid Jobs and Amazon SageMaker Experiments can be used together to track and manage hybrid quantum-classical experiments. By using Amazon SageMaker Experiments to track the experiments, developers can compare different runs of experiments and easily find the best results.

Amazon Braket provides an API that lets developers send their experiment results to Amazon SageMaker Experiments. This allows developers to store their experiment results in an Amazon S3 bucket and then use Amazon SageMaker Experiments to compare and analyze the results.

KeyCore Can Help

At KeyCore, our team of AWS experts can help you get the most out of AWS and its tools, including Amazon Braket and Amazon SageMaker Experiments. We provide comprehensive professional and managed services to help you manage your experiments and get the most out of your quantum-classical experiments. Contact us today to learn more about how we can help you.

Read the full blog posts from AWS

Official Database Blog of Amazon Web Services

Migrating from SAP SQL Anywhere to Amazon RDS for SQL Server or Microsoft SQL Server on Amazon EC2

SAP SQL Anywhere (also known as Sybase SQL Anywhere) is a popular database used in the Information Technology and Services industry. Migrating to Amazon Relational Database Service (RDS) for SQL Server or Microsoft SQL Server on Amazon Elastic Compute Cloud (Amazon EC2) can be challenging due to differences in SQL syntax, data types, and security configurations.

In this post, we’ll discuss code conversion patterns to help make the process as seamless as possible. We’ll also provide tips on how to ensure that the migrated databases are optimized for Amazon EC2.

Automate High Availability Setup with Amazon RDS Custom for Oracle

Amazon Relational Database Service (Amazon RDS) Custom automates database administration tasks and operations. With RDS Custom, you can customize your database environment and operating system to meet the requirements of legacy, custom, and packaged applications. Amazon RDS Custom for Oracle supports high availability (HA) setups, which enables automated failover for smooth, continuous operations.

In this post, we’ll discuss how to automate HA setup in Amazon RDS Custom for Oracle, as well as provide tips to help you optimize your setup.

Using Amazon Aurora Global Database for Disaster Recovery Within India

Disaster recovery (DR) is paramount to the success of any business. It ensures that operations can continue in the event of an unexpected disruption such as a natural disaster, power outage, or cyberattack. Amazon Aurora Global Database meets the requirements of many regulators across multiple industries by providing a disaster recovery solution with multi-region replication and cross-region read replicas.

In this post, we’ll discuss how to set up a disaster recovery solution with Amazon Aurora Global Database within India. We’ll also provide tips on how to ensure that your data is secure and compliant.

Improving Query Performance and Reducing Cost with Scheduled Queries in Amazon Timestream

Amazon Timestream is a serverless time series database that makes real-time analytics more performant and cost-effective. Scheduled queries enable you to derive additional insights from your data and make better business decisions.

In this post, we’ll explain how you can use scheduled queries in Amazon Timestream to improve query performance and reduce cost. We’ll also provide tips for optimizing your setup.
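As a sketch of the mechanism, a scheduled query precomputes an aggregate on a schedule and writes it to a small result table, so dashboards read that table instead of rescanning raw data. The parameter shape below follows Timestream's CreateScheduledQuery API; all names, ARNs, and the query text are hypothetical placeholders.

```python
# Sketch: CreateScheduledQuery parameters that precompute an hourly average.
# Database/table names, ARNs, and the query string are placeholders.

scheduled_query = {
    "Name": "hourly-avg-cpu",
    "QueryString": """
        SELECT bin(time, 1h) AS hour, avg(measure_value::double) AS avg_cpu
        FROM "metrics"."cpu"
        WHERE time > ago(1h)
        GROUP BY bin(time, 1h)
    """,
    "ScheduleConfiguration": {"ScheduleExpression": "rate(1 hour)"},
    "TargetConfiguration": {
        "TimestreamConfiguration": {
            "DatabaseName": "metrics",
            "TableName": "cpu_hourly",   # small, cheap-to-query result table
            "TimeColumn": "hour",
            "DimensionMappings": [],
        }
    },
    "ScheduledQueryExecutionRoleArn": "arn:aws:iam::123456789012:role/TimestreamSQ",
    "ErrorReportConfiguration": {
        "S3Configuration": {"BucketName": "my-error-report-bucket"}
    },
    "NotificationConfiguration": {
        "SnsConfiguration": {"TopicArn": "arn:aws:sns:us-east-1:123456789012:sq-topic"}
    },
}

# To register it (requires AWS credentials):
# import boto3
# boto3.client("timestream-query").create_scheduled_query(**scheduled_query)
```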

Accelerating Large Database Migrations to Amazon RDS Custom Oracle Using Tsunami UDP

Migrating a database to the cloud involves transferring existing data from the source (on premises) to the target (cloud). For medium and large databases, such as those ranging from hundreds of gigabytes to 5 terabytes, speed of data transfer matters. Tsunami UDP is a high-speed, open source tool that can be used to accelerate large database migrations to Amazon RDS Custom Oracle.

In this post, we’ll discuss how to use Tsunami UDP to minimize downtime of your database migration. We’ll provide tips on how you can optimize your setup and ensure that your data is transferred securely.

Building AI-Powered Search in PostgreSQL using Amazon SageMaker and pgvector

Generative AI and large language models (LLMs) are revolutionizing the creative process in a variety of sectors. Organizations are exploring novel ways to enhance user experiences by leveraging these powerful tools. In this post, we’ll show you how to use Amazon SageMaker and pgvector to build an AI-powered search in PostgreSQL. We’ll provide tips on how to optimize your setup and ensure that your search engine is accurate and reliable.
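The SQL side of such a search is compact enough to sketch: a pgvector column stores embeddings produced by a SageMaker-hosted model, and queries order rows by distance to the query embedding. Table and column names and the embedding dimension below are hypothetical.

```python
# Sketch: pgvector-backed semantic search in PostgreSQL. Embeddings come from
# a SageMaker-hosted model; table/column names and dimension are assumptions.

CREATE_TABLE = """
CREATE EXTENSION IF NOT EXISTS vector;
CREATE TABLE products (
    id serial PRIMARY KEY,
    description text,
    embedding vector(384)   -- dimension must match the embedding model
);
"""

# <=> is pgvector's cosine-distance operator; smaller means more similar.
SEARCH = """
SELECT id, description
FROM products
ORDER BY embedding <=> %(query_embedding)s
LIMIT 5;
"""

# With a driver such as psycopg2, the query embedding returned by the
# SageMaker endpoint would be bound to the %(query_embedding)s parameter.
```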

Securing Your Data with Amazon RDS for SQL Server: A Guide to Best Practices and Fortification

Amazon RDS for SQL Server provides several security features to help ensure the confidentiality, integrity, and availability of your database instances. In this post, we’ll discuss how to secure your data with Amazon RDS for SQL Server. We’ll provide best practices and tips on how to fortify your setup and ensure that your data is secure.

Choosing the Right Amazon RDS Deployment Option

Amazon Relational Database Service (Amazon RDS) offers a range of service offerings to help you choose the right option for your workload. In this post, we explain the differences between a single-AZ instance, a multi-AZ instance, and a multi-AZ database cluster. We’ll also provide tips on how to evaluate your requirements and choose the right set of service offerings.

Build a Knowledge Graph on Amazon Neptune with AI-Powered Video Analysis Using Media2Cloud

A knowledge graph allows us to combine data from different sources to gain a better understanding of a specific problem domain. In this post, we’ll show you how to use AI-powered video analysis and Media2Cloud to build a knowledge graph on Amazon Neptune. We’ll provide tips on how to optimize your setup and ensure that your knowledge graph is accurate and useful.

Joining Historical Data Between Amazon Athena and Amazon RDS for PostgreSQL

In certain scenarios, an application must query both active data and archived data simultaneously. To benefit from using Amazon Athena and Amazon RDS for PostgreSQL, developers need a solution that lets them join historical data between these two services.

In this post, we’ll explain how to join historical data between Amazon Athena and Amazon RDS for PostgreSQL. We’ll provide tips on how to optimize your setup, as well as discuss the challenges of data archiving and purging.

Working with JSON Data in Amazon DynamoDB

Amazon DynamoDB allows you to store JSON objects into attributes and perform operations on them, including filtering, updating, and deleting. This is a very powerful capability because it allows applications to store objects (JSON data, arrays) directly into DynamoDB tables, and still retain the ability to use nested attributes within these objects in their queries.

In this post, we’ll discuss how to work with JSON data in Amazon DynamoDB. We’ll provide tips on how to optimize your setup and ensure that your data is stored and retrieved efficiently.
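A small sketch of the nested-attribute capability: an UpdateExpression with a document path reaches into the stored JSON object. The table, key, and attribute names below are hypothetical.

```python
# Sketch: update one nested attribute (address.city) inside a JSON document
# stored in a DynamoDB item, leaving the rest of the document untouched.

update_params = {
    "TableName": "Orders",                       # hypothetical
    "Key": {"OrderId": {"S": "order-123"}},
    # Document path into the stored JSON object: address.city
    "UpdateExpression": "SET #addr.#city = :c",
    "ExpressionAttributeNames": {"#addr": "address", "#city": "city"},
    "ExpressionAttributeValues": {":c": {"S": "Copenhagen"}},
}

# To apply (requires AWS credentials):
# import boto3
# boto3.client("dynamodb").update_item(**update_params)
```

Filtering and deleting nested attributes work the same way: condition and update expressions address elements of the stored object by document path.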

At KeyCore, we understand the importance of having secure and performant database environments. Our team of AWS Certified professionals can help you migrate, secure, and optimize your database on Amazon RDS or Amazon DynamoDB. Contact us to learn more about our database services and how we can help you get the most out of your AWS setup.

Read the full blog posts from AWS

AWS for Games Blog

Unreal Engine 5 Dedicated Server Development With Amazon GameLift Anywhere

Developing dedicated servers can be difficult, which is why developers need to be able to quickly test and iterate on their games. To help with this, Amazon Web Services (AWS) recently updated their Amazon GameLift Server SDK – offering an Unreal Engine Plugin compatible with both Unreal Engine 4 and Unreal Engine 5.

How The Plugin Simplifies Dedicated Server Development

The plugin from AWS makes it easier to build dedicated servers for Unreal Engine. It provides a range of features, such as the ability to:

  • Integrate Amazon GameLift APIs into the Unreal Engine project
  • Manage dedicated game servers for direct deployment and for Amazon GameLift hosting
  • Launch dedicated game servers on Amazon GameLift on demand, or as part of best-effort or Spot fleets
  • Scale game server clusters up or down based on game session demand
  • Connect to game servers using Remote Procedure Calls (RPCs) over AWS Global Accelerator

These features are available without needing to manage the underlying infrastructure, meaning developers can focus on adding new features to their game instead of managing servers.

Using The Plugin To Deploy With Amazon GameLift Anywhere

The updated plugin also includes support for Amazon GameLift Anywhere, which extends Amazon GameLift’s management capabilities to on-premises, edge, or hybrid cloud environments. It allows developers to deploy dedicated game servers to any cloud or on-premises infrastructure, including virtual machines and bare-metal hosts.

Amazon GameLift Anywhere makes it easy to deploy dedicated game servers onto any infrastructure configuration. It also provides additional features, such as:

  • Isolated network environments for game servers
  • Integrated logging and monitoring for game servers
  • Automatic scaling for game server clusters
  • Task queues for managing game server lifecycles
  • Built-in health checking for game servers

For developers looking to take advantage of the Amazon GameLift Anywhere service, the Unreal Engine plugin is the best way to get started.

KeyCore Can Help With Dedicated Server Development

At KeyCore, we provide professional services and managed services for AWS customers looking to take advantage of the Amazon GameLift Server SDK. Our experienced team can help you with everything from initial setup and configuration, to customizing the plugin to better suit your development needs.

We have extensive experience developing and deploying game servers on AWS, and we understand the complexities of setting up an environment for game development. Our expertise allows us to provide you with the best advice and guidance, so you can get the most out of the Amazon GameLift Server SDK.

Contact us today to discuss how KeyCore can help make your game development easier.

Read the full blog posts from AWS

AWS Training and Certification Blog

Introducing the Real-World Engineering Management Courses

AWS Training and Certification is proud to announce the launch of a new collection of courses on Real-World Engineering Management, sponsored by AWS and available exclusively on Coursera. Advancing Women in Tech (AWIT) is a non-profit organization that aims to address the gender gap in engineering leadership by providing upskilling avenues and increasing the availability of women tech leaders as mentors.

What Is Real-World Engineering Management?

Real-World Engineering Management is a comprehensive set of courses that provides engineers, tech leaders, and mentors with the essential knowledge to excel in the engineering and technology space. The courses focus on the following topics: architecture, engineering leadership, design, innovation, and engineering management. As part of the course, students will learn the fundamentals of engineering management and develop the skills needed to succeed in their career.

Benefits of Real-World Engineering Management

Real-World Engineering Management provides students with the skills and knowledge to excel in their engineering and technology career. The courses provide students with the fundamentals of engineering management, as well as the necessary insights to develop an engineering team and manage it effectively. Moreover, students gain valuable insights into the role of engineering in the business as a whole, allowing them to understand the impact that their engineering decisions have on the overall success of the business.

KeyCore Offers Professional Services for AWS Training and Certification

At KeyCore, we provide a variety of professional services to help you get the most out of AWS Training and Certification. Our experienced team of AWS professionals can help you design, implement and manage AWS Training and Certification plans that are tailored to your specific needs. We can also provide support and guidance to ensure that you are able to make the most of the Real-World Engineering Management courses. Contact us today to learn more about our professional services.

Read the full blog posts from AWS

Microsoft Workloads on AWS

Embedding Amazon QuickSight Analytics in .NET Applications

Amazon QuickSight Embedded Analytics is a feature of QuickSight which applies data analytics to the applications used by end users, analysts and business leaders. This blog post for .NET developers outlines step-by-step instructions on how to embed Amazon QuickSight Analytics in .NET applications using QuickSight APIs and make them available for Amazon Cognito-authenticated users.

Getting Started

The process of embedding Amazon QuickSight Analytics in a .NET application starts by setting up an Amazon Cognito user pool, followed by creating an Amazon QuickSight user which will be used to access the analytical content. After this, the .NET code needs to be developed to embed the analytical content.

Configuring Amazon Cognito User Pool

The Amazon Cognito user pool enables users to sign in to the application. Once the user is authenticated, the application is able to make calls to Amazon QuickSight APIs to embed the analytical content. To configure the Amazon Cognito user pool, users need to create an identity pool in the AWS Management Console. This identity pool needs to be associated with the user pool which will be used to authenticate the users.

Creating Amazon QuickSight User

The Amazon QuickSight user is used to access the analytical content that should be embedded in the application. The user should have the correct access privileges to ensure that the content is authorized and visible. To create an Amazon QuickSight user, users will need to create an Amazon QuickSight user in the Amazon QuickSight console. The user should be associated with the same Amazon Cognito identity pool as the application.

Developing .NET Code

Once the Amazon Cognito user pool and Amazon QuickSight user are set up, users can start developing the .NET code to embed the content in the application. The code should include the credentials of the Amazon QuickSight user, along with the Amazon QuickSight API calls. After this, users can make the calls to the API and embed the analytical content in the application.
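The post itself shows the .NET code; for orientation, the equivalent request shape in Python is sketched below using QuickSight's GenerateEmbedUrlForRegisteredUser API. The account ID, user ARN, and dashboard ID are placeholders.

```python
# Sketch: request parameters for an embeddable QuickSight dashboard URL for a
# registered (Cognito-authenticated) user. All IDs and ARNs are placeholders.

embed_request = {
    "AwsAccountId": "123456789012",
    "UserArn": "arn:aws:quicksight:us-east-1:123456789012:user/default/analyst",
    "ExperienceConfiguration": {
        "Dashboard": {"InitialDashboardId": "dashboard-id-placeholder"}
    },
    "SessionLifetimeInMinutes": 60,
}

# To request the URL (requires AWS credentials):
# import boto3
# url = boto3.client("quicksight").generate_embed_url_for_registered_user(
#     **embed_request
# )["EmbedUrl"]
```

The returned URL is then loaded in an iframe (or via the QuickSight embedding SDK) inside the .NET application's page.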

KeyCore’s Offerings

KeyCore provides both professional services and managed services to help you with your AWS needs. Our experienced team of developers and consultants can help you with your data analytics projects, and make sure that your .NET applications are properly embedded with Amazon QuickSight analytics. Contact us today and let us help you get started with your project.

Read the full blog posts from AWS

Official Big Data Blog of Amazon Web Services

Single sign-on with Amazon Redshift Serverless using Okta, Amazon Redshift Query Editor v2, and Third-Party SQL Clients

Amazon Redshift Serverless makes it easy to run and scale analytics quickly without having to manage data warehouse clusters. With the help of Redshift Serverless, data analysts, developers, business professionals, and data scientists can access insights from data by loading and querying the data warehouse.

How Encored Technologies Built Serverless Event-Driven Data Pipelines with AWS

This post is a co-written guest post by SeonJeong Lee, JaeRyun Yim, and HyeonSeok Yang from Encored Technologies. Encored Technologies is an energy IT company in Korea that helps its customers generate more revenue and reduce operational costs in the renewable energy industries by providing various AI-based solutions. Encored develops machine learning (ML) applications for predicting the electricity price.

Build Efficient, Cross-Regional, I/O-Intensive Workloads with Dask on AWS

With the ever-growing amount of data being captured daily, platforms and solutions must adapt to evolve. Amazon Simple Storage Service (Amazon S3) is an example of a scalable solution that remains cost-effective for larger datasets. The Amazon Sustainability Data Initiative (ASDI) uses the capabilities of Amazon S3 and Amazon Athena to process and analyze sustainability data.

Improve Reliability and Reduce Costs of Your Apache Spark Workloads with Vertical Autoscaling on Amazon EMR on EKS

Amazon EMR on Amazon EKS is an offering by Amazon EMR that enables you to run Apache Spark applications on Amazon Elastic Kubernetes Service (Amazon EKS) in an efficient manner. The EMR runtime for Apache Spark increases performance so that your jobs run faster and cost less. Apache Spark helps customers process and analyze data quickly, reliably, and cost-effectively.

Process Price Transparency Data with AWS Glue

The Transparency in Coverage rule, finalized by the Center for Medicare and Medicaid Services (CMS) in October 2020, requires health insurers to provide clear and concise information on benefits, costs, and coverage details to consumers. In order to comply with the rule, health insurers need to provide data in a format that can be understood by consumers.

Amazon OpenSearch Service Now Supports 99.99% Availability Using Multi-AZ with Standby

Amazon OpenSearch Service is used for mission-critical applications and monitoring. If the service itself is unavailable, customers can suffer from revenue losses or impaired ability to detect and repair application issues. To improve reliability, Amazon now offers Multi-AZ with Standby for OpenSearch Service to provide a 99.99% availability guarantee.

Build, Deploy, and Run Spark Jobs on Amazon EMR with the Open-Source EMR CLI Tool

The Amazon EMR CLI is a new command line tool to package and deploy PySpark projects across different Amazon EMR environments. This tool makes it easy to deploy a wide range of PySpark projects to remote EMR environments and also integrates with AWS CodePipeline, allowing users to automate their project deployments.

Compose Your ETL Jobs for MongoDB Atlas with AWS Glue

Businesses need to build data warehouses and data lakes based on operational data from disparate sources to meet the need for centralized and integrated data. AWS Glue is a fully managed ETL service that makes it easy to extract, transform, and load data from various sources, such as Amazon S3, MongoDB, and more.

How SOCAR Handles Large IoT Data with Amazon MSK and Amazon ElastiCache for Redis

As companies continue to expand their digital footprint, real-time data processing and analysis becomes increasingly important. SOCAR is a leading car-sharing and leasing company in Korea, handling large amounts of real-time IoT data. To process this data in real time, SOCAR uses Amazon MSK and Amazon ElastiCache for Redis.

Data Load Made Easy and Secure in Amazon Redshift Using Query Editor V2

Amazon Redshift is a data warehouse service that helps users analyze data quickly and securely. The Amazon Redshift Query Editor V2 is a web-based tool that helps users load and query data in their data warehouse. It also offers features such as single sign-on for authentication, query/file editor for data management, and query history for auditability.

What’s New with Amazon MWAA Support for Apache Airflow Version 2.4.3

Amazon Managed Workflows for Apache Airflow (Amazon MWAA) is a managed orchestration service that makes it simple to set up and operate end-to-end data pipelines. Amazon MWAA already supported Apache Airflow versions v1.10.12, v2.0.2, and v2.2.2, and recently added support for Apache Airflow v2.4.3.

Build an Analytics Pipeline for a Multi-Account Support Case Dashboard

As organizations grow, they may have hundreds of accounts to manage. When there is no unified dashboard, administrators have to access each account manually to view and manage support cases. To simplify this process, using the AWS Security Hub multi-account view, you can build an analytics pipeline to monitor multiple accounts from a single console.

Real-Time Anomaly Detection with Random Cut Forest in Amazon Kinesis Data Analytics

Real-time anomaly detection flags unusual behavior in streaming data as it occurs. Online machine learning (ML) algorithms are popular for this use case because they can adapt to a changing baseline and don’t rely on explicit rules. Amazon Kinesis Data Analytics uses the Random Cut Forest (RCF) algorithm for real-time anomaly detection.
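RCF itself builds an ensemble of random trees over a sliding window of the stream; a full implementation is beyond a digest, but the general idea of online anomaly scoring against a moving baseline can be illustrated with a simple rolling z-score, a deliberately simplified stand-in for RCF:

```python
from collections import deque
import statistics

def rolling_zscores(stream, window=20):
    """Score each point against a rolling baseline of the previous `window`
    points; a large absolute score flags a likely anomaly. This is a
    simplified stand-in for RCF, not the algorithm Kinesis Data Analytics
    actually uses."""
    history = deque(maxlen=window)
    scores = []
    for x in stream:
        if len(history) >= 2:
            mean = statistics.fmean(history)
            stdev = statistics.stdev(history) or 1e-9  # avoid divide-by-zero
            scores.append((x - mean) / stdev)
        else:
            scores.append(0.0)  # not enough baseline yet
        history.append(x)
    return scores
```

Like RCF, this adapts as the baseline drifts, because old points fall out of the window; unlike RCF, it only captures univariate, unimodal behavior.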

At KeyCore, our team of experts can help you implement the solutions described in this blog post. Our team provides both professional services and managed services. We have the experience and skills to design, build, and manage your AWS architectures, ensuring maximum performance and reliability.

Read the full blog posts from AWS

Networking & Content Delivery

Networking & Content Delivery with AWS

AWS provides a variety of services to help organizations build, deploy, and maintain secure networks for their applications. One approach highlighted is NetDevOps, which applies automation and orchestration to managing network changes on AWS. This reduces the time to deploy new networks, leading to a faster deployment cycle that allows organizations to better serve their customers and stay competitive.

AWS re:Inforce

AWS re:Inforce is a security conference that helps developers and users learn about the latest solutions in security, compliance, identity, and privacy. Attendees have the opportunity to attend hundreds of technical and non-technical sessions, as well as explore the various security-related solutions offered by AWS.

NAT Gateways

NAT Gateways are a highly available and horizontally scalable network address translation (NAT) service provided by AWS. A NAT Gateway enables instances in private subnets to connect to resources outside those subnets using the gateway’s IP address, and supports scaling egress traffic by attaching multiple IP addresses to a single NAT Gateway.
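As a minimal sketch in CloudFormation YAML (resource names are placeholders, and a `PublicSubnet` and `PrivateRouteTable` are assumed to exist elsewhere in the template), a NAT Gateway with an Elastic IP and a default route from a private subnet looks roughly like:

```yaml
Resources:
  NatEip:
    Type: AWS::EC2::EIP
    Properties:
      Domain: vpc
  NatGateway:
    Type: AWS::EC2::NatGateway
    Properties:
      AllocationId: !GetAtt NatEip.AllocationId
      SubnetId: !Ref PublicSubnet          # assumes a PublicSubnet resource
  PrivateRoute:
    Type: AWS::EC2::Route
    Properties:
      RouteTableId: !Ref PrivateRouteTable # assumes a private route table
      DestinationCidrBlock: 0.0.0.0/0
      NatGatewayId: !Ref NatGateway
```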

KeyCore and AWS Networking & Content Delivery

At KeyCore, we are AWS experts who provide professional and managed services to our customers. We can help you take advantage of the networking and content delivery services AWS provides, from adopting NetDevOps practices to designing scalable egress with NAT Gateways. Our team of experienced professionals can help you create, deploy, and maintain secure networks for your applications. Contact us today to learn more.

Read the full blog posts from AWS

AWS Compute Blog

Many applications today contain both serverless and containerized workloads. As your application grows in complexity, a requirement arises to integrate the two architectures and extend serverless capabilities to your existing containerized workloads on AWS. In this article, we discuss how to leverage Amazon EventBridge, the AWS Cloud Development Kit (AWS CDK), and AWS Fargate to build an event-driven architecture that takes advantage of both serverless and container workloads.

Furthermore, we’ll explore three approaches for setting up an API to securely upload content to an Amazon S3 bucket via HTTPS. This can be incredibly useful for scenarios where you need a service for storing and managing data, but don’t want the overhead of a dedicated API or client application.

Leveraging EventBridge for Event-Driven Architecture

At the heart of decoupling services is EventBridge, a serverless event bus with a rules engine that lets you connect applications and services, both from AWS and from your own code. It can detect and respond to events from your applications and AWS services and route them to the right targets. EventBridge acts as the central integration point, allowing you to create an event-driven architecture that is far more scalable and extensible than a traditional monolithic architecture.

Using EventBridge, you can easily integrate your existing containerized workloads with a new event-driven architecture. The blog explains how to use EventBridge and AWS CDK to build a modular architecture, with services decoupled from one another.
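The publishing side of such an integration can be sketched as follows (the bus name, event source, and detail type are placeholders, and the client is injected so the logic can be exercised without AWS credentials):

```python
import json

def publish_order_event(events, order_id, status):
    """Publish a domain event to a custom EventBridge bus.

    `events` is any client exposing put_events
    (e.g. boto3.client("events")). Returns True if no entry failed.
    """
    response = events.put_events(
        Entries=[{
            "EventBusName": "app-event-bus",   # placeholder bus name
            "Source": "com.example.orders",    # placeholder event source
            "DetailType": "OrderStatusChanged",
            "Detail": json.dumps({"orderId": order_id, "status": status}),
        }]
    )
    return response["FailedEntryCount"] == 0
```

EventBridge rules (created, for example, with the CDK) then match on `Source` and `DetailType` and route matching events to Fargate tasks, Lambda functions, or other targets.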

Upload Files to S3 with an API

The second topic discussed in this blog post is securely uploading content to an Amazon S3 bucket without having to build a dedicated API or client application. We’ll explore three different approaches for setting up an API to do this:

1. Using Amazon API Gateway and AWS Lambda, you can set up an API endpoint which can accept an upload request and process the file accordingly.
2. Alternatively, you can use the Amazon S3 Transfer Acceleration feature and upload files directly from the client application.
3. Lastly, you can leverage Amazon CloudFront’s custom origin feature to create an API which can accept an upload request and process the file accordingly.

Each of these methods has its own use cases and benefits, and depending on your specific application requirements, one approach may be more suitable than the others.
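For the API Gateway and Lambda approach, a common pattern is to have the Lambda function hand back a presigned S3 URL so the client uploads directly to the bucket. A minimal sketch (the bucket name is a placeholder and the client is injected for testability; the original blog post may structure this differently):

```python
import json

UPLOAD_BUCKET = "example-upload-bucket"  # placeholder bucket name

def make_upload_handler(s3):
    """Return a Lambda-style handler. `s3` is any client exposing
    generate_presigned_url (e.g. boto3.client("s3"))."""
    def handler(event, context):
        key = event.get("queryStringParameters", {}).get("key", "upload.bin")
        url = s3.generate_presigned_url(
            "put_object",
            Params={"Bucket": UPLOAD_BUCKET, "Key": key},
            ExpiresIn=300,  # URL valid for 5 minutes
        )
        return {"statusCode": 200, "body": json.dumps({"uploadUrl": url})}
    return handler
```

The client then performs an HTTPS PUT against the returned URL, so the file bytes never pass through Lambda or API Gateway payload limits.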

KeyCore Can Help

At KeyCore, we understand the complexity of managing containers and the need to extend serverless capabilities to existing container workloads. Our experience with AWS EventBridge and CloudFormation helps our customers build an event-driven architecture which takes advantage of both serverless and container workloads. Additionally, we also help customers with integrating their existing applications with Amazon S3 and configuring an API to securely upload content via HTTPS.

Whether you’re just starting out or already have an existing application, KeyCore can help you build a secure and reliable cloud infrastructure that meets your business needs. Our professional services and managed services offerings provide the peace of mind that your cloud applications and infrastructure are running securely and efficiently. For more information about KeyCore and our offerings, please visit our website.

Read the full blog posts from AWS

AWS for M&E Blog

Using Analytics to Improve Video Playback Quality with Amazon IVS

Capturing and retaining the attention of viewers is a growing challenge for content distributors. Investing in interactive content can keep viewers engaged for longer, however, this additional investment may not pay off if viewers experience playback related issues. Amazon Interactive Video Service (Amazon IVS) is a managed live streaming solution that lets customers easily create, manage and scale live streaming video to any device, without the need for complex video infrastructure.

Amazon IVS is tightly integrated with AWS analytics services, such as Amazon CloudWatch and Amazon Kinesis Video Streams to provide customers with deep visibility into their streaming performance. Amazon CloudWatch can be used to monitor the health of streaming endpoints and view key metrics, such as the number of viewers, the number of frames per second (FPS), the number of dropped frames, and the amount of buffering. With Amazon Kinesis Video Streams, customers can inspect the video quality of their live stream with a built-in video analytics API. Customers can use the API to access low-level details about the video, such as resolution, bitrate, fps, and packet loss ratio.

Customers can also analyze video performance and playback issues to identify trends and patterns. Amazon IVS provides customers with playback diagnostics to quickly identify the issue impacting the quality of the video stream. The Amazon IVS playback diagnostics console gives customers visibility into their streaming performance, along with playback monitoring and alerting capabilities. With the console, customers can quickly identify the root cause of playback issues, such as slow initial startup time, poor picture quality, and connection issues. This makes it easier for customers to resolve streaming issues quickly and effectively.

At KeyCore we have extensive experience with Amazon IVS and AWS analytics services. We can help you set up your streaming performance monitoring system quickly and efficiently, ensuring that your streaming performance is always up to the highest standards. Contact us today to learn more about how we can help you optimize your streaming performance.

AWS Direct-to-Consumer Streaming Partners’ Showcase at NAB 2023

At the 100th Anniversary NAB Show 2023 in Las Vegas, AWS Partners showcased solutions addressing the entire direct-to-consumer cycle. Many organizations in the Media and Entertainment (M&E) industry use Amazon Web Services (AWS) to reinvent their media workloads in the cloud.

Accedo showcased their cloud-native platform, which enables media companies to quickly launch video streaming services. Accedo makes it easy to create an end-to-end streaming experience, from content ingest to delivery. Their platform is integrated with AWS services, such as Amazon Rekognition, Amazon Transcribe, and Amazon Elastic Transcoder, giving customers the ability to quickly and easily create engaging streaming experiences.

WarpMedia Solutions demonstrated their cloud-native platform to help customers build and manage streaming solutions. WarpMedia’s platform is integrated with AWS services, such as Amazon S3, Amazon Kinesis Video Streams and Amazon CloudFront. With their platform, customers can easily manage their entire streaming workflow and optimize their streaming performance.

At KeyCore, we have extensive experience with streaming solutions on AWS. We can help you build and launch your streaming platform quickly and efficiently, ensuring that your streaming performance is always up to the highest standards. Contact us today to learn more about how we can help you build and deploy your streaming platform.

AWS Thinkbox Deadline Adds Multi-Regional Support to Spot Event Plugin

Amazon Web Services (AWS) has announced AWS Thinkbox Deadline release 10.2.1, which includes the addition of multi-regional support to the Spot Event Plugin. The Spot Event Plugin allows Deadline customers to easily scale rendering by launching and managing Spot Fleets in multiple AWS regions from a single plugin.

The Spot Event Plugin helps customers save on costs by automatically scaling the render farm according to their project’s needs. The new multi-region support allows customers to take advantage of AWS Spot Instances in multiple AWS regions, providing customers with the flexibility to launch Spot Fleets in the region that best meets their needs. The plugin also simplifies the process of launching Spot Instances, as customers can configure and manage Spot Fleets in multiple regions from a single interface.

At KeyCore, we have extensive experience with AWS Thinkbox Deadline. We can help you get up and running quickly and easily, ensuring that you take full advantage of the power of the Spot Event Plugin. Contact us today to learn more about how we can help you optimize your rendering performance.

Read the full blog posts from AWS

AWS Storage Blog

AWS Storage Blog – Canva, Amazon EBS Volume Metrics and Automation

Canva, an online design tool that empowers users worldwide to design, edit, and publish anything they can dream up, is a great example of the power of running production workloads on AWS. By leveraging services such as Amazon S3, Amazon ECS, Amazon RDS, and Amazon DynamoDB, Canva saves over $3 million annually in Amazon S3 costs.

Improving Application Resiliency and Availability

Choosing the correct metrics to monitor and setting up alarms as needed is essential for customers to achieve application resiliency and availability goals. Amazon Elastic Block Store (Amazon EBS) provides a suite of metrics that can be used together with the AWS Fault Injection Simulator to help customers build a resilient infrastructure. This simulator can help inject faults in a controlled environment to help customers understand how their applications behave in the event of real-time failures.

Automating CloudWatch Dashboard Creation for Amazon EBS Volume KPIs

Enterprises benefit significantly from optimizing block storage performance in the cloud. In addition to having faster and more reliable data access to support critical business operations and real-time applications, they can also realize cost savings by increasing operational efficiency to reduce the need for additional resources. Using CloudWatch Dashboard automation, businesses can quickly gain visibility into their Amazon EBS performance KPIs and take proactive steps to address any issue.
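The automation amounts to generating a dashboard body and pushing it with CloudWatch's `put_dashboard` API. As a sketch (the metric names are real EBS CloudWatch metrics; the layout and helper are illustrative):

```python
import json

def build_ebs_dashboard_body(volume_ids):
    """Build a CloudWatch dashboard body with one read/write-ops widget per
    EBS volume; pass the result to cloudwatch.put_dashboard(DashboardBody=...).
    """
    widgets = []
    for i, vol in enumerate(volume_ids):
        widgets.append({
            "type": "metric",
            "x": 0, "y": i * 6, "width": 12, "height": 6,  # stack vertically
            "properties": {
                "title": f"EBS ops: {vol}",
                "metrics": [
                    ["AWS/EBS", "VolumeReadOps", "VolumeId", vol],
                    ["AWS/EBS", "VolumeWriteOps", "VolumeId", vol],
                ],
                "period": 300,
                "stat": "Sum",
            },
        })
    return json.dumps({"widgets": widgets})
```

Running this on a schedule (for example from a Lambda function that lists volumes first) keeps the dashboard in sync as volumes are added or removed.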

How KeyCore Can Help

KeyCore can help ensure businesses take advantage of the full suite of AWS Storage services with our professional and managed services offerings. Our team of AWS certified professionals can help you identify the best services for your business needs and create a comprehensive strategy for leveraging them. We can help you automate the creation of your CloudWatch Dashboard, as well as monitor and manage Amazon EBS Volume metrics and alarms. With our assistance, you can ensure your applications are resilient and reliable, while also taking full advantage of the cost savings associated with AWS Storage services.

Read the full blog posts from AWS

AWS Partner Network (APN) Blog

New AWS Partner Network (APN) Blog Articles

Improve Your Security Posture with Claroty xDome Integration with AWS Security Hub

Industrial digital transformation is connecting operational technology (OT) to the internet, IT systems, and solutions, making them more susceptible to malware and ransomware. Claroty xDome and AWS Security Hub provide security and vulnerability monitoring, and visibility of security events for operational teams. With this integration, KeyCore can help customers improve their security posture by leveraging the powerful combination of AWS Security Hub and Claroty xDome.

When to Use a Graph Database Like Neo4j on AWS

Graph databases are great for solving problems related to connected data, representing data as nodes and uncovering relationships between data that other approaches can’t. AWS and Neo4j experts explore four types of databases and their most common applications. Industries can use graph databases as part of an AWS architecture. KeyCore can help customers assess their data and determine when and how to use graph databases in their AWS architecture.

Next-Gen Kubernetes Management Approaches for Managing Hybrid and Edge Applications

To maximize the cloud, customers need critical infrastructure to be up and running immediately, with automated management and operations. Rafay Systems provides a high level of automation, security, viability, and governance on top of Amazon EKS. Many customers use Amazon EKS and leverage Rafay to streamline their lifecycle management, application deployment, and governance requirements for containerized apps running in EKS. KeyCore can help customers get the most out of their cloud, leveraging Amazon EKS and Rafay to manage their hybrid and edge applications.

Simplifying Blockchain Tokenization with HCLTech OBOL and AWS

Blockchain-based tokenization is a powerful way for businesses to reach new customers due to the trust and financial inclusion promoted by blockchain. HCLTech’s OBOL is a no-code/low-code tokenization platform that is easy-to-integrate and designed to deploy on AWS. OBOL provides businesses with a rich interface to create and use custom tokens. KeyCore can help customers simplify the tokenization journey by leveraging the powerful combination of HCLTech OBOL and AWS.

Infor OS on AWS Accelerates Intelligent Business Solutions with AI and Data Capabilities

Infor OS is a digital business platform that connects Infor’s various software products and third-party solutions. It provides support for AI/ML, integration, hyperautomation, application development, data management, and analytics. Infor OS on AWS can help businesses tackle innovation use cases and unlock the full potential of their data. KeyCore can help customers leverage the combination of Infor OS and AWS to accelerate their intelligent business solutions.

Successful Decentralized Clinical Trials: A True Possibility with AWS in the Post-Pandemic Era

Decentralized clinical trials (DCTs) put the patient at the center of the trial experience and use digital technologies to address the challenges of traditional clinical trials. SourceFuse is leveraging AWS to build solutions to transform clinical research. KeyCore can help customers use AWS to make successful decentralized clinical trials a true possibility in the post-pandemic era.

Optimizing Energy Footprint with Edge Analytics and Artificial Intelligence of Things with Bosch Phantom

Organizations are using environmental, social, and governance (ESG) criteria to reduce energy consumption and increase efficiency. Bosch is solving this problem with the help of edge analytics, machine learning, and core Internet of Things (IoT) components provided by AWS. Bosch Phantom helps extract information without intruding. KeyCore can help customers optimize their energy footprint through Bosch Phantom, leveraging the powerful combination of edge analytics, AIoT, and AWS.

How Internal Developer Platforms Built with AWS Proton Help Achieve DevOps Best Practices

Platform engineering can improve the developer experience by providing the right set of tools, technologies, and templates in a self-service portal. Redapt creates internal development platforms that empower application developers while providing secure and cost-effective environments. Redapt uses AWS Proton, an AWS managed service, to implement IDP features for ongoing operations. KeyCore can help customers create and use internal developer platforms built with AWS Proton to achieve DevOps best practices.

New AWS Ambassadors from Q1 2023 and Latest Ambassador Activities

The AWS Ambassador Program is an international community of technical experts who share their AWS expertise online and offline. They author blogs and whitepapers, create public presentations, and contribute to open source projects. They evangelize AWS via social media and facilitate peer-to-peer learning by presenting at conferences, leading workshops, and hosting user group events. KeyCore can help customers use the AWS Ambassador Program to access a vibrant worldwide community of technical experts.

Read the full blog posts from AWS

AWS Cloud Enterprise Strategy Blog

The Evolving Role of the Chief Trade-Off Officer: An Exploration of the DJ Analogy

It can be difficult to make everyone happy, especially as a leader in a role like the ‘Chief Trade-Off Officer’ (CTO). With the art of being a DJ being a good analogy for the CTO’s role – juggling multiple tasks, needs, and customers at once – this article explores the idea of introducing binary options to the mix.

The DJ Analogy

The DJ analogy can be used to demonstrate the CTO’s role in juggling multiple tasks and making everyone happy. A DJ must manage many variables – volume, bass, treble, etc. – all of which come with their own set of trade-offs. To make the task easier, we propose that all of the complex sliders on their mixing deck be replaced with binary switches: on/off. With this, the CTO can focus on the more important tasks, such as understanding the needs of the customers, understanding the context of the project, and defining the right trade-offs.

Being Agile

Oftentimes, being agile means making trade-offs. The CTO must be able to assess the situation and make the right decisions quickly, while also understanding the context of the project and how the customer’s needs can be met in a timely manner.

KeyCore’s Role in the CTO’s Role

At KeyCore, we understand that it can be difficult for organizations to make the right decisions when it comes to their enterprise strategy. That is why we are here to help. We provide both professional services and managed services that can help our clients make the best decisions for their business. Our team has deep AWS expertise and can provide technical details, references to AWS documentation, and code snippets in CloudFormation YAML or AWS API calls using TypeScript and the AWS SDK for JavaScript v3. We can help organizations navigate the evolving role of the CTO by providing the resources and expertise they need to make the right decisions.

Read the full blog posts from AWS

AWS HPC Blog

Scaling Compute Resources to Handle Celery Tasks with AWS Batch

Many applications leverage distributed task systems like Celery to handle asynchronous work. However, when compute-intensive tasks are added to the workload, it can be difficult to scale up the compute resources to keep up with the demand. AWS Batch can provide the solution, allowing you to scale resources on-demand to handle the workload.

Using AWS Batch with Celery

The idea behind using AWS Batch with Celery is to offload the compute-intensive tasks from the main Celery process, and send them to AWS Batch for processing. To achieve this, you need to create a custom Celery task queue and a custom AWS Batch job queue for each of the compute-intensive tasks. This will allow you to scale the compute resources up or down as needed. The process is as follows:

  • Create a custom Celery task queue
  • Register the custom Celery task queue
  • Create the AWS Batch job queue
  • Create the AWS Batch job definition
  • Submit the AWS Batch job
  • Wait for the job to complete
  • Retrieve the results

Once you’ve set up the AWS Batch job queue and job definition, you can submit the jobs to the queue and they will be processed by the AWS Batch service. The service will scale up or down depending on the demand, so you don’t need to worry about running out of compute resources. You can also set up auto-scaling rules for your job queues so that the resources allocated to them are automatically adjusted to match the workload.
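The submission step above can be sketched as follows (the queue and job-definition names are placeholders, and the Batch client is injected so the logic can be exercised without AWS credentials; in practice this would be wired into a Celery task):

```python
def submit_compute_task(batch, job_name, payload):
    """Offload a compute-intensive task to AWS Batch instead of running it
    in the Celery worker. `batch` is any client exposing submit_job
    (e.g. boto3.client("batch")). Returns the Batch job ID to poll later."""
    response = batch.submit_job(
        jobName=job_name,
        jobQueue="celery-offload-queue",        # placeholder queue name
        jobDefinition="celery-offload-jobdef",  # placeholder job definition
        containerOverrides={
            "environment": [
                {"name": "TASK_PAYLOAD", "value": payload},
            ]
        },
    )
    return response["jobId"]
```

A companion Celery task can then poll `describe_jobs` with the returned ID to detect completion and retrieve results.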

KeyCore’s Expertise

At KeyCore, our experienced AWS consultants can help you to set up Celery tasks and integrate them with AWS Batch. Our team can guide you through the process, from setting up the custom Celery task queue to creating the AWS Batch job queue and job definition. We can also help with setting up auto-scaling rules for your job queues to ensure that your compute resources are properly scaled to meet the demand.

To learn more about how KeyCore can help you leverage AWS Batch to handle Celery tasks, contact us today.

Read the full blog posts from AWS

AWS Cloud Operations & Migrations Blog

Improve Incident Management Response Times for Container Workloads with AWS Chatbot

Mission-critical container workloads need to be monitored in real-time in order to quickly address performance issues, traffic spikes, infrastructure events and security threats. With AWS Chatbot, teams can enable real-time visibility into these events and enable response times to be reduced.

AWS Chatbot integrates with AWS services such as Amazon CloudWatch Events and Amazon SNS, keeping teams notified of operational events via chat platforms such as Slack and Amazon Chime, as well as through the AWS Management Console. With AWS Chatbot, teams can get notified of operational events, run commands to troubleshoot and act on them, and set up automated responses.

Optimizing Large-Scale Cloud Migration with AWS Application Migration Service

Large-scale cloud migrations can be challenging due to the multiple tasks, scaling complexities, manual processes and various tools and stakeholders involved. To help with these challenges, AWS Application Migration Service (AWS MGN) was designed to enable re-hosting or “lift and shift” migrations of even the largest and most complex applications.

AWS MGN is used to migrate on-premises applications to the AWS Cloud with minimal disruption and without requiring changes to the existing application. Customers select the source environment, create a migration project, and then select the target environment where the application needs to be migrated. AWS MGN can be used to migrate applications to any AWS Region, and the migration process can be monitored using CloudWatch metrics.

Automating CloudWatch Alarm Cleanup with AWS Tools

Having hundreds or thousands of CloudWatch alarms across AWS Regions can be difficult to manage. To help with this, AWS offers various tools to quickly identify low-value or misconfigured alarms, find alarms stuck in the ‘ALARM’ or ‘INSUFFICIENT_DATA’ state, and provide a cleanup mechanism.

The CloudWatch API can be used to query alarms across all regions, and then the AWS Command Line Interface (CLI) or an AWS SDK can be used to delete the alarms. Additionally, AWS Systems Manager Automation and AWS CloudFormation can be used to automate the deletion process. With AWS Systems Manager Automation, customers can develop an automated solution to query, identify, and delete alarms as part of their operations.
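The identification step can be illustrated as follows. The state names are real CloudWatch values, but the "low-value" heuristic here, alarms with no actions configured, is just one possible criterion:

```python
def find_cleanup_candidates(alarms):
    """Given alarm descriptions shaped like CloudWatch describe_alarms
    MetricAlarms entries, return names of alarms that are either stuck in
    INSUFFICIENT_DATA or have no actions wired up (a "low-value" heuristic).
    """
    candidates = []
    for alarm in alarms:
        no_actions = not (alarm.get("AlarmActions") or alarm.get("OKActions"))
        stuck = alarm.get("StateValue") == "INSUFFICIENT_DATA"
        if no_actions or stuck:
            candidates.append(alarm["AlarmName"])
    return candidates
```

The resulting names would then be passed to `delete_alarms`, either from the CLI or from a Systems Manager Automation runbook, after a human review.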

Establishing Observability for Serverless Multi-Account Workloads on AWS

Monitoring applications in a multi-account environment can be difficult, but finding and correlating all available information is critical for successful cloud operation. To help with this, Hapag-Lloyd AG used AWS services such as AWS CloudFormation, AWS Lambda, Amazon Kinesis Data Streams, Amazon Athena, and Amazon CloudWatch Logs to develop an automated solution for querying and correlating application observability data across accounts.

The solution enables teams to query, filter, and aggregate logs and metrics in real-time, and then analyze the data to detect problems, or to operate and maintain the application. Furthermore, the solution can provide cross-account visibility into system state, performance, health, and security posture of the application.

Implementing Logging Guardrails in a Multi-Account Environment with AWS Session Manager

Raiffeisen Bank International (RBI) maintains a multi-account AWS environment that requires central logging of all sessions established to Amazon Elastic Compute Cloud (Amazon EC2) instances. To meet this requirement, RBI used AWS services such as AWS Session Manager, Amazon CloudWatch, and Amazon Simple Queue Service (SQS). AWS Session Manager is used to centrally manage EC2 instance sessions.

Amazon CloudWatch is used to monitor and audit the sessions, and Amazon SQS is used to asynchronously propagate the session data once a session has begun. This solution provides compliance and audit logging for all sessions across all accounts, and can help RBI to ensure that their central security guardrails are met.

At KeyCore, we have a team of AWS certified professionals that have deep experience with AWS services such as AWS Session Manager, Amazon CloudWatch and Amazon SQS. Our consultants and engineers can help you to setup guardrails in your multi-account AWS environment and ensure compliance with your security policies.

Read the full blog posts from AWS

AWS for Industries

How Machine Learning on AWS can help customers predict the risk of Automotive Part Recalls

Customers can use Long Short-Term Memory (LSTM) machine learning models to predict potential defects or recalls in automotive parts. These predictions can outperform existing manual processes and help customers make informed decisions. An LSTM model can take in information about part design, the production process, and other factors, and use this data to identify which parts are most likely to fail. Given the safety and cost implications of automotive recalls, an LSTM model can help customers make better decisions about their operations.

AI-Assisted Annotation of Medical Images using MONAI Label on AWS

Accurate and efficient annotation of medical images is an essential step in training AI models for clinical use. MONAI Label, an open-source intelligent image-labeling tool, can be deployed on Amazon Web Services (AWS) to help annotate medical images quickly and accurately. MONAI Label uses machine learning to detect image elements and streamline the annotation process, and it can produce annotations aligned with international Common Data Elements standards. By using MONAI Label, medical organizations can streamline their annotation workflows and help keep their AI models accurate and up to date.

Integrating CDP with Amazon Marketing Cloud to drive better Ad campaigns

First-party data (1P signal) from sources like emails, websites, mobile apps, and physical stores is valuable for businesses that want to create actionable insights and next-best actions for their customers. A Customer Data Platform (CDP) lets businesses collect, activate, and analyze this data, and Amazon Marketing Cloud can then be used to build targeted campaigns and deliver them to customers efficiently. By integrating a CDP with Amazon Marketing Cloud, businesses can harness the power of 1P signal to deliver relevant ads and improve their return on investment.

Selecting the best automatic machine learning to meet your manufacturing needs

In the age of rapid innovation, manufacturing businesses need to use the right machine learning (ML) services and tools to stay competitive. To help, Amazon Web Services (AWS) offers a variety of ML services such as Amazon SageMaker, Amazon Comprehend, Amazon Rekognition, Amazon Textract, and more. Each of these services is designed for a specific use case, so it’s important for businesses to understand which tool is best for their operations.

Partnerships extend Just Walk Out technology to more colleges and universities

In September 2022, Texas A&M University became the first higher education institution to launch Amazon Just Walk Out technology at their sports venue. This technology uses computer vision, deep learning, and sensor fusion to let customers enter a store and take the items they need without the need for a checkout counter. Since then, Just Walk Out technology has been extended to other colleges and universities, with partnerships with the University of Miami, Clemson University, and others.

Embrace Retail’s Future: Bringing AWS Smart Store Solutions to Life

The rise of ecommerce and digital devices has caused a digital transformation in the world of retail. AWS Smart Store solutions can help retailers accelerate innovation and provide a better customer experience. AWS Smart Store solutions include Amazon Connect, Amazon Personalize, Amazon Rekognition, Amazon Textract, and more. These tools allow retailers to provide a more personalized and efficient shopping experience.

ALDO Group solves their OMS challenges with Fluent Commerce on AWS

Order management systems (OMS) are essential for optimizing supply chain operations and delivering exceptional customer experiences. However, managing global inventory visibility and order allocation can be challenging. The ALDO Group solved their OMS challenges by utilizing Fluent Commerce on AWS. This solution helps ensure near real-time global inventory visibility and order allocation, allowing them to make informed decisions about inventory replenishment and fulfillment.

At KeyCore, our team of AWS experts can help you design and implement the best AWS solutions for your organization. Our team has extensive experience in machine learning, AI-assisted annotation, marketing, and order management—all of which are key components of successful operations. With our help, you can make sure your organization is running as efficiently and effectively as possible.

Read the full blog posts from AWS

AWS Marketplace

Using InfinStor MLflow with Amazon SageMaker Studio for Machine Learning Experiments

In this article, we explore how to use InfinStor MLflow with Amazon SageMaker Studio to experiment, collaborate, train, and run inferences using this ML platform. With this solution, users do not need to write special code for experiment tracking or model management. InfinStor MLflow provides the experiment tracking and model management portion of the platform, and SageMaker Studio provides the Notebook and remote IPython kernel portion.

Experiment Tracking and Model Management with InfinStor MLflow

InfinStor MLflow provides users with a suite of tools that make it easy to track and manage machine learning experiments. MLflow provides a wide range of features, such as experiment tracking, model management, and integration with other tools. With MLflow, users can easily track their experiments, collaborate with colleagues, and store their models. MLflow also enables users to store, search, and version their models, making it easy to find the right model for the task at hand.

Machine Learning Experiments with SageMaker Studio

SageMaker Studio provides a powerful suite of tools for developing and deploying machine learning models. With SageMaker Studio, users can easily create and manage Notebooks, access remote compute resources, and use SageMaker algorithms to train, evaluate, and deploy models. SageMaker Studio also simplifies the process of collaborating with colleagues, as users can share their work with authorized colleagues.

Conclusion and KeyCore Solutions

Using InfinStor MLflow and Amazon SageMaker Studio, users can easily track, manage, and collaborate on machine learning experiments. KeyCore offers a range of professional and managed services to help users get the most out of their ML workflow. Our ML experts can help users optimize their experiments and models, as well as help users get the most out of their ML platform. Contact us today to learn more about our ML solutions.

Read the full blog posts from AWS

The latest AWS security, identity, and compliance launches, announcements, and how-to posts.

The Latest Security, Identity, and Compliance Launches from AWS

Finding Changes with the New Finding History Feature in Security Hub

AWS Security Hub helps security teams detect and track security findings to protect their organization’s assets. As part of cloud security posture management, Security Hub helps identify and address security findings in a timely and effective manner. The new Finding History feature in Security Hub offers further control over security finding changes.

This feature allows customers to view a comprehensive list of all findings, including those that have already been addressed and those that are still unresolved. It also provides customers with a chronological list of all activities associated with a finding, such as when it was first detected, when it was addressed, and who it was assigned to. Additionally, customers can now search findings by text and filter them by a variety of criteria, such as finding types and status.

Delivering on the AWS Digital Sovereignty Pledge

At AWS, customer trust is paramount to the success of the business. In November 2022, AWS announced the Digital Sovereignty Pledge: a commitment to offering the most advanced set of sovereignty controls and features available in the cloud, so customers can control where their data resides and how it moves.

As part of this pledge, AWS provides customers with various tools and resources to help them maintain control over their data. This includes a detailed compliance audit, a Data Access and Protection Policy, and a clear process for monitoring and reporting any data access to the customer’s chosen region.

Scanning AWS Lambda Functions with Amazon Inspector

Amazon Inspector is a vulnerability management and application security service that helps improve the security of workloads. Using Amazon Inspector, customers can automatically scan applications for vulnerabilities and view a detailed list of security findings, organized according to severity level. In addition, customers are provided with remediation instructions.

The Amazon Inspector service now offers improved scanning capabilities for AWS Lambda functions. This includes support for multiple Lambda functions in a single scan, extended scan duration, and the ability to scan functions as frequently as once a day. Additionally, customers can now view a single dashboard for all their Lambda function scans and apply tags to their Lambda functions for better organization.
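
To pull only the Lambda-related findings programmatically, the Inspector API (`inspector2`) accepts filter criteria on ListFindings; the sketch below builds such a filter. The exact severity value passed in is an illustrative choice.

```python
def lambda_findings_filter(severity="HIGH"):
    """Filter criteria for Amazon Inspector (inspector2) ListFindings,
    narrowed to Lambda function findings of a single severity level."""
    return {
        "resourceType": [{"comparison": "EQUALS", "value": "AWS_LAMBDA_FUNCTION"}],
        "severity": [{"comparison": "EQUALS", "value": severity}],
    }

# usage (requires AWS credentials):
# import boto3
# findings = boto3.client("inspector2").list_findings(
#     filterCriteria=lambda_findings_filter("CRITICAL"))
```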

Monitoring the Expiration of SAML Identity Provider Certificates in Amazon Cognito User Pools

Amazon Cognito user pools allow customers to configure third-party SAML identity providers (IdPs) so users can log in with their IdP credentials. The user pool uses the SAML IdP's public certificate to verify the signature of SAML assertions, so to keep authentication secure, customers need to monitor those certificates for expiration.

The post describes a solution that detects an approaching SAML IdP certificate expiration and sends customers an automated notification, so they can replace the certificate in time and keep user authentication running securely.
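
Whatever mechanism delivers the notification, the underlying check is simple date arithmetic on the certificate's notAfter timestamp; a minimal sketch, with the 30-day threshold as an arbitrary choice:

```python
from datetime import datetime, timezone

def days_until_expiry(not_after, now=None):
    """Whole days remaining before a certificate's notAfter timestamp."""
    now = now or datetime.now(timezone.utc)
    return (not_after - now).days

def needs_rotation(not_after, threshold_days=30, now=None):
    """True when the certificate is within the rotation window."""
    return days_until_expiry(not_after, now) <= threshold_days
```

In a real deployment the notAfter value would be parsed from the IdP's SAML metadata (for example with the `cryptography` package) and the check run on a schedule.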

KeyCore Can Help

KeyCore can help customers navigate the latest security, identity, and compliance launches from AWS. Our team of AWS certified developers, cloud architects, and DevOps engineers have experience in helping customers adopt and deploy AWS services. We stay up-to-date on the latest AWS news and offerings, so we can help customers take advantage of the new features and ensure a secure environment. Contact us to learn more about how we can help you.

Read the full blog posts from AWS

Front-End Web & Mobile

Front-End Web & Mobile: Introducing AWS AppSync, Benchmarking Mobile Apps, and Announcing Amplify UI StorageManager Component

AWS AppSync is a fully managed cloud service that enables developers to create GraphQL APIs. APIs created with AppSync generate a public endpoint to securely access, manipulate, and combine data from various sources. Benchmarking your mobile app with Rooted Android Private Devices and AWS Device Farm is a way to improve app performance by unlocking utilities that can analyze the app. Lastly, Amplify UI offers a cloud-connected StorageManager UI component that lets users upload and manage files to the cloud.

Introducing AWS AppSync

AWS AppSync simplifies the process of creating GraphQL APIs. AppSync’s public endpoint can be used to send queries, mutations, and subscriptions requests. Developers can also create GraphQL schemas and resolvers to define how their data should be combined. It also provides an authorization model that allows developers to control access to their data with fine-grained security. Finally, AppSync enables developers to rapidly iterate and deploy updates to their GraphQL APIs.
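
For an API-key-authorized AppSync API, a query is an HTTP POST to the endpoint with an `x-api-key` header; the sketch below assembles such a request with the standard library. The endpoint, key, and query in the usage comment are placeholders.

```python
import json
import urllib.request

def build_appsync_request(endpoint, api_key, query, variables=None):
    """Assemble the POST request for an API-key-authorized AppSync GraphQL API."""
    payload = json.dumps({"query": query, "variables": variables or {}}).encode()
    return urllib.request.Request(
        endpoint,
        data=payload,
        headers={"Content-Type": "application/json", "x-api-key": api_key},
        method="POST",
    )

# usage (endpoint and key are placeholders):
# req = build_appsync_request(
#     "https://example.appsync-api.eu-west-1.amazonaws.com/graphql",
#     "da2-xxxxxxxx",
#     "query { listTodos { items { id } } }")
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))
```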

Benchmarking Mobile Apps with Rooted Android Private Devices and AWS Device Farm

Rooting an Android device grants access to the file system and custom ROMs, and can unlock utilities that identify and improve app performance. AWS Device Farm offers access to a variety of rooted devices that can be used for testing and benchmarking mobile apps. Device Farm can run a variety of tests, such as performance tests, crash tests, and usability tests, and provides detailed metrics such as CPU usage, memory usage, network throughput, and battery life.

Announcing the Amplify UI StorageManager Component

Amplify UI is a collection of React components that can connect directly to the cloud. The StorageManager component allows users to upload and manage files in the cloud. It features a drag-and-drop interface, progress indicators, and more. Developers can customize the StorageManager by overriding its default styles, adding a custom file selector, and adding custom validation.

How KeyCore Can Help

At KeyCore, we offer both professional services and managed services to help you build, manage, and optimize your applications. Our team of certified AWS consultants can help you build and design secure GraphQL APIs with AWS AppSync, benchmark and optimize your mobile applications, and use the Amplify UI StorageManager component for file management. Contact us today to learn more about how we can help.

Read the full blog posts from AWS

AWS Contact Center

Monitor Real-Time Metrics Using Granular Access Controls in Amazon Connect

Introduction

Contact center supervisors, managers, compliance officers, workforce analysts, and others can monitor the real-time performance of their contact center, including agent, queue, and routing profile performance, with the real-time metrics dashboard in the Amazon Connect console. An evolving privacy and regulatory landscape has made granular access controls necessary to protect sensitive and confidential data.

Granular Access Controls in Amazon Connect

Amazon Connect delivers a few key features to help manage user privileges with granular access controls, provided through Amazon Cognito, AWS IAM, and Amazon Connect security profiles. Amazon Cognito can supply a pool of authenticated users and act as an identity provider. With AWS IAM, administrators can create IAM roles for their users and define permissions for each role. Within Amazon Connect itself, security profiles define what each user is allowed to see and do.

Combined, these controls ensure that only the users who need the real-time metrics dashboard can access it, keeping sensitive and confidential contact center data protected.

KeyCore Can Help

At KeyCore, we provide professional and managed services related to Amazon Connect and granular access controls. Our team of AWS-certified professionals has the experience and technical expertise to help you implement Amazon Connect and configure granular access controls. Contact us today to learn more about how we can help you get the most out of Amazon Connect.

Read the full blog posts from AWS

Innovating in the Public Sector

Public sector organisations are investing in technologies that improve the member and citizen experience and make processes more efficient and effective. Cloud technology can help. This section covers how credit unions are approaching cloud transformation, how AWS ISV Partners are being enabled for success in the public sector, and how geospatial query latency can be cut from minutes to seconds by using Zarr on Amazon S3.

Improving Credit Union Member Experience with Cloud Transformation

Credit unions are looking for ways to make their processes and practices more efficient and effective, in order to meet the needs of their members. One of these ways is to move workloads to the cloud. Cloud transformation can offer many benefits, from cost savings to improved storage and accessibility. Credit unions can use resources such as the Credit Union Cloud Ecosystem to get started with their cloud transformation journey and make the process as smooth and fast as possible.

Enabling Success for AWS ISV Partners in the Public Sector

AWS ISV Partners work with public sector organisations to accelerate the adoption of new services and technologies, without the need to develop and maintain their own applications. The AWS Partner team provides two key ways to support ISV Partners, with the goal of helping them to both reduce costs and increase sales. This enables a faster digital transformation and improved outcomes for citizens.

Decreasing Geospatial Query Latency with Zarr on Amazon S3

Many government and nonprofit organisations release geospatial data in compressed file formats, such as NetCDF and GRIB. In order to make the best use of these datasets, it is necessary to leave them in one place and query the data virtually, only downloading the subset that is needed. Zarr, a cloud-native format, is designed to help facilitate this kind of virtual access to compressed chunks of data saved on Amazon S3. This walkthrough explains how to convert NetCDF datasets to Zarr using an Amazon SageMaker notebook and an AWS Fargate cluster, and how to query the resulting Zarr store, reducing the time required for time series queries from minutes to seconds.
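
The speedup comes largely from the choice of chunk shape: small spatial chunks and long time chunks mean a point time-series query touches only a handful of S3 objects. A minimal sketch of that sizing decision (the 10x10 spatial chunk, target chunk size, and dimension names in the usage comment are illustrative assumptions, not values from the walkthrough):

```python
def time_chunk_len(n_time, lat_chunk, lon_chunk, target_elems=1_000_000):
    """Chunk length along time so one (time, lat, lon) chunk holds roughly
    target_elems values. Pairing small spatial chunks with long time chunks
    keeps a point time-series read down to a few objects."""
    per_step = max(1, lat_chunk * lon_chunk)
    return max(1, min(n_time, target_elems // per_step))

# usage with xarray (dataset path and dimension names are placeholders):
# import xarray as xr
# ds = xr.open_dataset("input.nc")
# t = time_chunk_len(ds.sizes["time"], 10, 10)
# ds.chunk({"time": t, "lat": 10, "lon": 10}).to_zarr("s3://bucket/store")
```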

How Can KeyCore Help?

KeyCore is a leading Danish AWS consultancy, providing both professional services and managed services. Our experts can help with all aspects of cloud transformations and digital transformations, from helping credit unions improve their processes and practices to enabling success for AWS ISV Partners and decreasing geospatial query latency. We have the experience, resources and tools to help you make the most of AWS technologies, transforming your organisation and improving citizen outcomes.

Read the full blog posts from AWS

The Internet of Things on AWS – Official Blog

Sharing a Vision for a More Connected World with AWS IoT

Yasser Alsaied, Vice President of AWS IoT, recently discussed AWS's IoT strategy, commitment, and outlook in an interview. He noted that hyperscalers, vendors, and customers alike need to keep pace with the rapidly changing internet of things (IoT) landscape. He described AWS's focus on letting customers leverage their existing IoT investments while helping them move faster on their journey, and highlighted AWS's commitment to saving customers time and money through solutions that make IoT easier to deploy and manage.

Getting Started with the New Shared Subscriptions in AWS IoT Core

The new shared subscriptions feature in AWS IoT Core lets customers load-balance messages across multiple subscribing MQTT sessions or consumers. A message published to a shared subscription is delivered to just one of its subscribers, chosen at random, rather than to every subscriber as with non-shared subscriptions. The post shows how to get started with shared subscriptions in AWS IoT Core.
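
Shared subscriptions use the MQTT topic filter form `$share/<GroupName>/<TopicFilter>`; every consumer subscribing with the same group name joins one load-balanced pool. A small helper for building the filter (the validation rules here are a conservative sketch):

```python
def shared_topic(group, topic_filter):
    """Build an MQTT shared-subscription filter: $share/<group>/<topic>.

    The share group is a single topic level, so it must be non-empty
    and free of '/' and wildcard characters."""
    if not group or any(c in group for c in "/+#"):
        raise ValueError("share group must be a single non-wildcard topic level")
    return f"$share/{group}/{topic_filter}"

# usage: every consumer subscribing with the same group name shares the load
# topic = shared_topic("group1", "telemetry/+/data")
```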

Convert glTF to 3D Tiles for Streaming of Large Models in AWS IoT TwinMaker

Customers can experience long wait times and poor rendering performance when loading large 3D scenes in AWS IoT TwinMaker. This blog post explains how to convert models to the 3D Tiles standard so they can be streamed into a scene. KeyCore can help customers convert glTF to 3D Tiles for an optimal streaming experience while keeping their 3D models looking great.

Finally, KeyCore has the knowledge and experience to help customers make the most of their IoT solutions. Our team can help customers design, build, and deploy their solutions, while taking advantage of all the features and tools that AWS has to offer. With KeyCore’s expertise, customers can get their IoT solutions up and running quickly and cost-effectively.

Read the full blog posts from AWS
