Summary of AWS blogs for the week of Monday, June 19, 2023
In the week of Monday, June 19, 2023, AWS published 96 blog posts. Here is an overview of what happened.
Topics Covered
- Desktop and Application Streaming
- AWS DevOps Blog
- Official Machine Learning Blog of Amazon Web Services
- Announcements, Updates, and Launches
- Containers
- AWS Smart Business Blog
- Official Database Blog of Amazon Web Services
- AWS Training and Certification Blog
- Microsoft Workloads on AWS
- Official Big Data Blog of Amazon Web Services
- Networking & Content Delivery
- AWS Compute Blog
- AWS for M&E Blog
- AWS Storage Blog
- AWS Architecture Blog
- AWS Partner Network (APN) Blog
- AWS Cloud Enterprise Strategy Blog
- AWS HPC Blog
- AWS Cloud Operations & Migrations Blog
- AWS for Industries
- AWS Messaging & Targeting Blog
- AWS Robotics Blog
- AWS Marketplace
- The latest AWS security, identity, and compliance launches, announcements, and how-to posts.
- AWS Startups Blog
- Business Productivity
- Innovating in the Public Sector
- The Internet of Things on AWS – Official Blog
- AWS Open Source Blog
Desktop and Application Streaming
Enhance Video Calls with Amazon WorkSpaces and Certificate Based Authentication
With Amazon WorkSpaces and Certificate-Based Authentication (CBA), organizations can provide seamless authentication to their end user computing (EUC) services. Zoom has extended its support for virtual desktops with a Zoom plugin for Amazon WorkSpaces, enabling users to optimize their video experience when using Zoom with the Amazon WorkSpaces Windows client.
Optimizing Video Experience with Zoom Plugin for Amazon WorkSpaces
When starting a Zoom Meeting in an Amazon WorkSpaces session, the Zoom plugin establishes a secure connection with the Zoom Meeting client in the Amazon WorkSpaces session. The plugin also combines the audio and video streams from the Amazon WorkSpaces session into the Zoom Meeting, allowing users to start their video meeting with no extra setup.
Design Considerations with Certificate-Based Authentication
Organizations are increasingly standardizing on SAML 2.0 identity providers such as AWS IAM Identity Center and Okta as their identity solution for accessing EUC services on AWS. With CBA, the logon experience to a virtual desktop includes the client authenticating with the domain controller through the Certificate Authority (CA). The CA checks the validity of the certificate and, if valid, issues a token to the client, which is then used for authentication.
KeyCore: Helping with AWS Services
At KeyCore, we provide both professional and managed services to help customers with their AWS services. We are highly advanced in AWS and can provide specific solutions for customers, such as services for Amazon WorkSpaces and Certificate-Based Authentication. To read more about KeyCore and our offerings, visit our website at https://www.keycore.dk.
Read the full blog posts from AWS
- Enhance Zoom video calls with Amazon WorkSpaces
- Design considerations in highly regulated environments for Certificate Based Authentication with AppStream 2.0 and WorkSpaces
AWS DevOps Blog
Policy-Based Access Control in Application Development with Amazon Verified Permissions
Access control is one of the most essential components of application security. Recently, policy-based access control (PBAC) has been gaining popularity, as it offers several advantages over traditional mechanisms such as role-based access control (RBAC) and access control lists (ACLs).
What is Policy-Based Access Control?
PBAC is a mechanism that provides a standardized way to define, manage and enforce access control across an organization. A policy-based access control system consists of a policy engine, a policy repository, and an enforcement engine. The policy engine is used to define the access control policies, while the policy repository stores them. Finally, the enforcement engine is responsible for enforcing the policies.
Benefits of PBAC
PBAC provides several benefits compared to traditional access control mechanisms. First, it enables increased flexibility and scalability by allowing access control to be managed at a granular level. This allows organizations to configure access to specific resources on an individual basis, as well as reducing the amount of manual work required to manage access control. Additionally, PBAC helps ensure that access control is consistent across the organization, as policies can be easily updated and enforced across multiple resources.
Amazon Verified Permissions
Amazon Verified Permissions (AVP) provides a managed, policy-based access control service for applications built on Amazon Web Services (AWS). Developers define authorization policies in the Cedar policy language, store them centrally, and evaluate them at runtime through a simple API, decoupling authorization logic from application code. AVP provides a comprehensive set of features for creating, editing, and managing policies, as well as for testing their effects.
AVP also enables teams to quickly set up and manage fine-grained access control for their applications. With AVP, developers can define granular policies that ensure only authorized users have access to the necessary resources, and enforce those policies consistently across multiple applications and services.
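To make the enforcement step concrete, here is a minimal sketch of a runtime authorization check against Verified Permissions using boto3; the policy store ID, entity types, and action are hypothetical placeholders rather than values from the post:

```python
import boto3

avp = boto3.client("verifiedpermissions")

# Ask Verified Permissions whether a principal may perform an action on a
# resource. All identifiers below are placeholders.
response = avp.is_authorized(
    policyStoreId="ps-example-1234",
    principal={"entityType": "User", "entityId": "alice"},
    action={"actionType": "Action", "actionId": "viewDocument"},
    resource={"entityType": "Document", "entityId": "doc-42"},
)

# The decision (ALLOW or DENY) is derived from the Cedar policies stored
# in the policy store, so the application code stays free of policy logic.
print(response["decision"])
```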
KeyCore Can Help
At KeyCore, we understand the importance of access control in application development, and are experienced in helping customers implement policy-based access control with Amazon Verified Permissions. Our experts can help you develop an AVP policy that meets your security needs, creating an efficient and effective access control system for your AWS environment.
Read the full blog posts from AWS
Official Machine Learning Blog of Amazon Web Services
Unlock the Potential of Amazon SageMaker with Data Wrangler, FastAPI, Live Call Analytics, and More
Accelerate Time to Business Insights with the Amazon SageMaker Data Wrangler Direct Connection to Snowflake
Amazon SageMaker Data Wrangler is a powerful visual interface that lets you select and clean data, create features, and automate data preparation in machine learning (ML) workflows without writing any code, reducing the time required for data preparation and feature engineering from weeks to minutes. Data Wrangler now offers a direct connection to Snowflake, a popular data warehouse that integrates with a range of services and databases. The integration lets users visualize data, detect anomalies and mismatched data points, and join data from different sources quickly and easily.
Deploy a Serverless ML Inference Endpoint of Large Language Models Using FastAPI, AWS Lambda, and AWS CDK
Deploying a locally trained machine learning (ML) model to the cloud for inference and use in other applications can be a significant challenge for data scientists. But with the right tools, such as FastAPI, AWS Lambda, and the AWS Cloud Development Kit (CDK), this process can be made easier. With these services, data scientists can deploy a serverless ML inference endpoint of large language models efficiently and quickly.
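The post's exact code isn't reproduced here, but the core of the pattern typically looks like the sketch below: a FastAPI app wrapped with the Mangum ASGI adapter so that Lambda can serve it. The route, payload shape, and model call are hypothetical placeholders.

```python
from fastapi import FastAPI
from mangum import Mangum  # ASGI-to-Lambda adapter

app = FastAPI()

# Placeholder for loading a large language model once per Lambda execution
# environment (at cold start), then reusing it across invocations.
model = None

@app.post("/generate")
async def generate(payload: dict) -> dict:
    prompt = payload.get("prompt", "")
    # Hypothetical inference call - substitute your model's real API here.
    return {"completion": f"echo: {prompt}"}

# Lambda entry point: Mangum translates API Gateway / function URL events
# into ASGI requests for FastAPI.
handler = Mangum(app)
```

The CDK part of the pattern then packages this module as the Lambda handler and fronts it with an HTTP endpoint.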
How Light & Wonder Built a Predictive Maintenance Solution for Gaming Machines on AWS
Working with AWS, Light & Wonder recently developed an industry-first secure solution, Light & Wonder Connect (LnW Connect), to enable predictive maintenance for gaming machines. LnW Connect enables users to monitor their gaming machines securely and remotely, with insights for predictive maintenance and analytics. This helps users reduce downtimes and improve the gaming experience for their customers.
Use the AWS CDK to Deploy Amazon SageMaker Studio Lifecycle Configurations
Amazon SageMaker Studio is the first fully integrated development environment (IDE) for machine learning (ML). Studio provides a single web-based visual interface where you can perform all ML development steps required to prepare data, as well as build, train, and deploy models. Lifecycle configurations are shell scripts triggered by Studio lifecycle events, such as starting a new Studio notebook, and can automate tasks like installing packages or configuring the environment. The AWS Cloud Development Kit (AWS CDK) lets you define and deploy these configurations quickly as infrastructure as code (IaC).
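As a rough sketch of this IaC approach, assuming the CDK v2 L1 construct for the AWS::SageMaker::StudioLifecycleConfig resource; the script contents and names are placeholders:

```python
import base64
from aws_cdk import App, Stack
from aws_cdk import aws_sagemaker as sagemaker

app = App()
stack = Stack(app, "StudioLifecycleStack")

# Shell script to run when a JupyterServer app starts (placeholder content).
script = "#!/bin/bash\npip install --upgrade my-internal-tools\n"

sagemaker.CfnStudioLifecycleConfig(
    stack, "AutoInstallTools",
    studio_lifecycle_config_app_type="JupyterServer",
    # The CloudFormation resource expects base64-encoded script content.
    studio_lifecycle_config_content=base64.b64encode(script.encode()).decode(),
    studio_lifecycle_config_name="auto-install-tools",
)

app.synth()
```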
Boost Agent Productivity with Salesforce Integration for Live Call Analytics
Contact center agents can now focus on having productive customer conversations instead of getting distracted by having to look up customer information and knowledge articles. With Salesforce integration for Live Call Analytics, agents can have access to customer details and documents in real time, boosting productivity. This solution is built on Amazon Connect, Amazon Comprehend, and Amazon Transcribe.
Onboard Users to Amazon SageMaker Studio with Active Directory Group-Specific IAM Roles
To get started with Amazon SageMaker Studio, the first step is to create an Amazon SageMaker domain, which provides a secure ML environment. You can use Active Directory group-specific IAM roles to onboard users to the SageMaker domain quickly and easily. This simplifies the process of provisioning Studio in your AWS account and Region. With group-specific IAM roles, you can onboard new users with the appropriate roles and permissions to the SageMaker domain in a few clicks.
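A minimal sketch of the onboarding step, assuming a hypothetical mapping from AD group names to IAM role ARNs and using the SageMaker CreateUserProfile API via boto3 (all identifiers are placeholders):

```python
import boto3

sm = boto3.client("sagemaker")

# Hypothetical mapping from an Active Directory group to an IAM role ARN.
AD_GROUP_ROLES = {
    "data-scientists": "arn:aws:iam::111122223333:role/SageMakerDataScientist",
}

def onboard_user(domain_id: str, username: str, ad_group: str) -> None:
    """Create a Studio user profile whose execution role matches the AD group."""
    sm.create_user_profile(
        DomainId=domain_id,
        UserProfileName=username,
        UserSettings={"ExecutionRole": AD_GROUP_ROLES[ad_group]},
    )

onboard_user("d-exampledomain", "alice", "data-scientists")
```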
At KeyCore, we offer professional and managed services to help you leverage the power of Amazon SageMaker. Our experienced team of AWS experts can help you develop and deploy ML models quickly and efficiently, and optimize your ML workloads for better performance. Contact us to learn more.
Read the full blog posts from AWS
- Accelerate time to business insights with the Amazon SageMaker Data Wrangler direct connection to Snowflake
- Deploy a serverless ML inference endpoint of large language models using FastAPI, AWS Lambda, and AWS CDK
- How Light & Wonder built a predictive maintenance solution for gaming machines on AWS
- Use the AWS CDK to deploy Amazon SageMaker Studio lifecycle configurations
- Boost agent productivity with Salesforce integration for Live Call Analytics
- Onboard users to Amazon SageMaker Studio with Active Directory group-specific IAM roles
Announcements, Updates, and Launches
Announcing the Launch of Amazon EC2 Hpc7g, C7gn, and Amazon EC2 Instance Connect Endpoint to Streamline and Secure SaaS Applications
At AWS re:Invent 2022, AWS CEO Adam Selipsky introduced Amazon EC2 Hpc7g instances, optimized for High Performance Computing (HPC) workloads such as weather forecasting, computational fluid dynamics, and options pricing. Now generally available alongside them, EC2 C7gn instances are designed for demanding network-intensive workloads, such as firewalls, virtual routers, load balancers, and data analytics. Both are powered by AWS Graviton3E processors.
Additionally, AWS announced several other launches this week that help streamline and secure SaaS applications, including Amazon EC2 Instance Connect Endpoint, Amazon Detective, Amazon S3 dual-layer server-side encryption (DSSE-KMS), and Amazon Verified Permissions.
Amazon EC2 Hpc7g and C7gn
Amazon EC2 Hpc7g instances are designed to let customers run compute-intensive HPC workloads with ease, delivering significantly better performance than previous-generation Graviton-based instance types. C7gn instances, meanwhile, are optimized for compute- and network-intensive workloads, data analytics, and tightly coupled cluster computing jobs, and support up to 200 Gbps of network bandwidth.
Amazon EC2 Instance Connect Endpoint, Amazon Detective, Amazon S3 Dual-Layer Encryption, and Amazon Verified Permissions
Amazon EC2 Instance Connect Endpoint provides a secure entry point to instances, letting customers connect to and troubleshoot their EC2 instances without opening additional inbound ports or assigning public IP addresses. Amazon Detective helps customers investigate and analyze potential security issues. Amazon S3 dual-layer server-side encryption (DSSE-KMS) applies two independent layers of encryption to data at rest, and Amazon Verified Permissions provides fine-grained, policy-based authorization for custom applications.
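As an illustration, creating an EC2 Instance Connect Endpoint programmatically might look roughly like this boto3 sketch (the subnet ID is a placeholder, and the response shape should be verified against the EC2 API docs):

```python
import boto3

ec2 = boto3.client("ec2")

# Create an endpoint in a private subnet (placeholder ID); instances in the
# VPC can then be reached over SSH/RDP without public IPs or bastion hosts.
response = ec2.create_instance_connect_endpoint(
    SubnetId="subnet-0123456789abcdef0",
)
print(response["InstanceConnectEndpoint"]["InstanceConnectEndpointId"])
```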
With these new services, companies can now adopt SaaS applications with greater ease and security. KeyCore can help customers streamline and secure their SaaS applications so they can adopt and use the most modern tools and technologies available. We offer both professional and managed services for AWS, providing customers with the support they need to get the most out of their SaaS applications.
Read the full blog posts from AWS
- New – Amazon EC2 Hpc7g Instances Powered by AWS Graviton3E Processors Optimized for High Performance Computing Workloads
- New Amazon EC2 C7gn Instances: Graviton3E Processors and Up To 200 Gbps Network Bandwidth
- Learn how to streamline and secure your SaaS applications at AWS Applications Innovation Day
- AWS Week in Review – Amazon EC2 Instance Connect Endpoint, Detective, Amazon S3 Dual Layer Encryption, Amazon Verified Permissions – June 19, 2023
Containers
How Quora Modernized MLOps on Amazon EKS to Improve Customer Experience
Quora is a leading Q&A platform with a mission to share and grow the world’s knowledge, serving hundreds of millions of users worldwide every month. Quora uses machine learning (ML) to generate a custom feed of questions, answers, and content recommendations based on each user’s interests.
Quora’s Journey
In order to power these ML applications, Quora needed to manage their data-heavy workloads. They were initially using a monolithic architecture, but quickly ran into resource bottlenecks. To improve performance, Quora decided to transition to a microservices-based architecture, with each service running in its own container. To manage these containers, they chose Kubernetes, and in particular Amazon EKS as the platform.
Optimizing Resources with MLOps
Quora needed to deploy and scale their ML applications quickly and efficiently, so they decided to adopt MLOps. MLOps enables teams to quickly deploy and manage ML applications, allowing them to focus on building better models and optimizing their ML pipelines. With MLOps, Quora was able to save time and resources by automating the deployment, monitoring, and scaling of their applications.
Adopting ML Pipelines on EKS
Quora adopted ML pipelines on EKS to manage their ML applications. ML pipelines are a set of tools that allow teams to quickly create, deploy, and manage ML applications in a containerized environment. With ML pipelines, Quora was able to automate the deployment and scaling of their ML applications, as well as the monitoring and optimization of their ML pipelines.
Improved Customer Experience
By adopting MLOps on Amazon EKS, Quora was able to reduce their resource utilization and improve their customer experience. By leveraging the scalability of EKS, Quora was able to deploy and manage their ML applications quickly and efficiently, allowing them to focus on building better models and optimizing their ML pipelines.
How KeyCore Can Help
KeyCore is the leading Danish AWS consultancy, offering professional services and managed services to help customers get the most out of their cloud infrastructure. With our expertise in Amazon EKS, we can assist customers with the deployment, management, and optimization of their ML applications. Our team of experienced AWS professionals has the knowledge and experience to ensure that customers are getting the optimal performance from their ML pipelines.
Read the full blog posts from AWS
AWS Smart Business Blog
Automate Your Business Processes for Optimization and Cost Savings
Recurring Business Processes to Automate
There are many recurring processes in businesses which can be automated to save time and money. Small and medium businesses (SMBs) often rely on outdated IT systems and software, or worry about changes to the IT system affecting their operations. While some inefficiencies may be worth the cost of not modernizing, there are many processes that can be automated to save resources. Automating processes can help SMBs increase efficiency and productivity, and help manage costs.
Automation can help with tasks such as payroll, accounting, customer billing, and document processing. Automating these tasks can free up time for employees to focus on more important activities, save money on resources such as time and labor, and make sure manual processes are done more efficiently and accurately. Automation can also help to increase security by ensuring data is stored and processed securely and accurately.
High Impact, Low Effort Tasks to Optimize IT Costs
Managing cloud IT spend is one of the top challenges for 80 percent of SMBs. In today’s turbulent business landscape, businesses must proceed with caution, especially SMBs. Optimizing your cloud IT costs is a low effort task that can have a high impact on cost savings.
To optimize your cloud IT costs, you should identify your cloud needs and understand the costs associated with those needs. You should also take advantage of discounts and pay-as-you-go pricing models available through cloud providers such as AWS. Additionally, you can use cloud monitoring tools to track your usage and costs, and take steps to mitigate any excessive usage.
KeyCore Can Help Automate Your Business Processes and Optimize Costs
At KeyCore, we can help SMBs automate their business processes and optimize their cloud IT costs. Our professional services and managed services offer a variety of options to meet the needs of our customers. We have years of experience in AWS and can work with you to develop a plan to automate processes and optimize costs using AWS. Our team of experts can provide customized solutions to fit the unique needs of your business. Contact us today to learn more about how we can help you automate your business processes and optimize your cloud IT costs.
Read the full blog posts from AWS
- Which Recurring Business Processes Can Small and Medium Businesses Automate?
- Three High Impact, Low Effort Tasks Small and Medium Businesses Can Do to Optimize IT Costs
Official Database Blog of Amazon Web Services
Running Ethereum Validators in AWS Nitro Enclaves for Maximum Security
Introduction
Ethereum validators are operators who help secure the blockchain network by validating transactions and proposing blocks. AWS Nitro Enclaves can be used to run Web3Signer, a blockchain validation and signing service, in a secure and reliable way. This allows node operators to offer staking pools and staking-as-a-service with minimal security risk, making Nitro Enclaves a strong choice for running cryptographic workloads.
Operating with AWS Nitro Enclaves
AWS Nitro Enclaves make it possible to run cryptographic operations in a hardened, isolated environment. A Nitro Enclave is an isolated compute environment carved out of a parent EC2 instance, with no persistent storage, no interactive access, and no external networking. This makes it possible to run Ethereum validator signing workloads with a greatly reduced attack surface.
Encrypting Amazon RDS for PostgreSQL and Amazon Aurora PostgreSQL Database with Minimal Downtime
KeyCore recently helped one of its customers encrypt their unencrypted Amazon Relational Database Service (Amazon RDS) for PostgreSQL database. The solution used a database snapshot and PostgreSQL logical replication to seamlessly create an encrypted database with minimal disruption to the applications.
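The snapshot-based step of such a migration typically looks like the sketch below: copy the unencrypted snapshot with a KMS key to produce an encrypted copy, then restore a new instance from it, after which logical replication catches the new instance up before cutover (all identifiers are placeholders):

```python
import boto3

rds = boto3.client("rds")

# Copy the unencrypted snapshot, encrypting the copy with a KMS key.
rds.copy_db_snapshot(
    SourceDBSnapshotIdentifier="mydb-unencrypted-snap",
    TargetDBSnapshotIdentifier="mydb-encrypted-snap",
    KmsKeyId="alias/rds-encryption-key",
)
rds.get_waiter("db_snapshot_available").wait(
    DBSnapshotIdentifier="mydb-encrypted-snap"
)

# Restore an encrypted instance from the encrypted snapshot; PostgreSQL
# logical replication then streams the changes made since the snapshot.
rds.restore_db_instance_from_db_snapshot(
    DBInstanceIdentifier="mydb-encrypted",
    DBSnapshotIdentifier="mydb-encrypted-snap",
)
```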
Optimizing the Cost of Amazon ElastiCache for Redis Workloads
Amazon ElastiCache for Redis is a cost-effective caching service that provides ultra-fast responses for modern applications. The post shares five recommendations for optimizing the cost of your ElastiCache for Redis clusters: right-sizing instances, leveraging reserved instances, using Auto Scaling, setting proper eviction policies, and setting appropriate TTL values.
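For instance, the eviction-policy recommendation maps to the Redis maxmemory-policy parameter, which can be set on a custom parameter group roughly like this (the group name and chosen policy are illustrative):

```python
import boto3

elasticache = boto3.client("elasticache")

# Set an eviction policy on a custom parameter group (placeholder name).
# allkeys-lru evicts the least-recently-used keys when memory fills up,
# which suits pure-cache workloads.
elasticache.modify_cache_parameter_group(
    CacheParameterGroupName="my-redis-params",
    ParameterNameValues=[
        {"ParameterName": "maxmemory-policy", "ParameterValue": "allkeys-lru"},
    ],
)
```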
KeyCore Can Help
At KeyCore, our team of experienced AWS Consultants can help you get the most out of your Ethereum validators. We provide both professional services and managed services with a team of experts familiar with the AWS Nitro Enclaves system. Our team can help you optimize your Ethereum validators so that you can maximize the security of your blockchain network. Additionally, we can assist with the encryption of your Amazon RDS for PostgreSQL and Amazon Aurora PostgreSQL databases with minimal downtime. We can also help you optimize the cost of your Amazon ElastiCache for Redis workloads so that you can get the most out of your caching service.
For more information on how KeyCore can help you with your AWS needs, please visit our website: https://www.keycore.dk.
Read the full blog posts from AWS
- AWS Nitro Enclaves for running Ethereum validators – Part 2
- AWS Nitro Enclaves for running Ethereum validators – Part 1
- Encrypt Amazon RDS for PostgreSQL and Amazon Aurora PostgreSQL database with minimal downtime
- Optimize the cost of your Amazon ElastiCache for Redis workloads
AWS Training and Certification Blog
Achieve Success and Certification with AWS
Take Your AWS Certification Exam Online with Confidence
Online proctoring is a great way to take your AWS Certification exam from the comfort of your own home or office. The AWS Certification team has provided five tips to help you be successful and confident taking an AWS Certification exam via online proctoring.
- Be sure to read all instructions carefully and follow the prompts.
- Ensure that your work area is quiet, distraction-free, and well-lit.
- Give yourself plenty of time to set up the exam environment and to complete the exam with no pressure.
- Make sure that your computer meets the system requirements for the exam.
- Don’t forget to bring a valid, non-expired form of government-issued ID.
These tips will help ensure that you have the best experience possible when taking an online-proctored exam.
Updates to the AWS Certified Cloud Practitioner Exam
AWS Certified Cloud Practitioner (CLF-C01) is being updated to reflect changes in trends, the industry landscape, and the work practices of cloud professionals. You have the option to take the current exam on or before September 18, 2023, or the updated AWS Certified Cloud Practitioner exam (CLF-C02) starting September 19, 2023.
Achieving All Six Specialty AWS Certifications on the First Attempt
One AWS Solutions Architect achieved all six AWS Specialty Certifications, each on the first attempt. Their journey to prepare for and earn the certifications involved skilling up and validating their expertise in the AWS Cloud. As they note, the journey was not without difficulties, but with focused study and practice, they were able to pass all six exams on the first try.
How KeyCore Can Help
At KeyCore, we provide professional and managed services to help you with all your AWS Certification needs. From certification readiness assessments to AWS Training and Certification Exams, our team of experts can provide you with the knowledge and skills required to earn your certifications quickly and efficiently. Visit us at KeyCore.dk to learn more.
Read the full blog posts from AWS
- 5 tips for a successful online-proctored AWS Certification exam
- Coming soon: updates to AWS Certified Cloud Practitioner exam
- How I achieved all six specialty AWS Certifications on first attempt
Microsoft Workloads on AWS
Integrating SAMBA 4 Active Directory with AWS IAM Identity Center
Microsoft Active Directory is a widely used identity management solution for Windows networks. It provides authentication and access protocols, such as Kerberos and LDAP, to securely manage identities for both on-premises and cloud deployments. SAMBA 4 is an open-source implementation of the Active Directory protocols, and its identities and credentials can be synchronized with AWS IAM Identity Center.
The blog post shows how to integrate SAMBA 4 with AWS IAM Identity Center using either AWS Managed Microsoft AD or AD Connector (ADC).
AD Connector (ADC)
AD Connector is a directory gateway that lets customers connect their on-premises Active Directory to AWS IAM Identity Center. It proxies authentication requests to the existing directory, so users, groups, and credentials remain on premises, and it can be used to enable single sign-on (SSO) for on-premises users to the AWS Management Console.
Because SAMBA 4 implements the Active Directory protocols, AD Connector can front a SAMBA 4 directory, allowing customers to use their existing SAMBA 4 identities and credentials with AWS IAM Identity Center.
AWS Managed Microsoft AD
AWS Managed Microsoft AD is a service that lets customers run a managed Active Directory instance in the AWS Cloud, where identities and credentials can be securely managed.
AWS Managed Microsoft AD works with SAMBA 4, enabling customers to synchronize identities and credentials from an on-premises SAMBA 4 instance to the cloud-hosted directory. With this approach, customers manage identities in the cloud without needing ongoing manual synchronization.
Conclusion
Integrating SAMBA 4 with AWS IAM Identity Center is a simple and secure way to manage identities and credentials in the cloud. With AWS Managed Microsoft AD and AD Connector, customers can easily bring their SAMBA 4 identities and credentials to the cloud.
At KeyCore, we provide professional and managed services to help enterprises integrate SAMBA 4 with AWS IAM Identity Center. Our team of experienced AWS consultants can enable single sign-on for on-premises users to the AWS Management Console and help customers manage their identities and credentials in the cloud. Contact us today to learn more about our services.
Read the full blog posts from AWS
Official Big Data Blog of Amazon Web Services
Unlock Insight from Your Data Lake with Amazon QuickSight
Imperva Cloud WAF protects hundreds of thousands of websites and blocks billions of security events every day. To transform this data into business outcomes, Imperva leverages Amazon QuickSight to analyze the large datasets in its data lake. QuickSight enables business users to explore and collaborate on data in near real time, making data-driven decisions easier and helping users discover patterns, relationships, and correlations in their data quickly.
Enforce Boundaries with AWS Glue Interactive Sessions
AWS Glue interactive sessions enable engineers to build, test, and run data preparation and analytics workloads from an interactive notebook. Interactive sessions provide isolated development environments, manage the underlying compute cluster, and can be configured to stop idle resources automatically. The feature ships with recommended default configurations, but administrators can also enforce boundaries, such as idle timeouts and worker limits, for their own specific use cases.
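A sketch of starting a session with enforced limits via the Glue CreateSession API (the role ARN, IDs, and limits are placeholders):

```python
import boto3

glue = boto3.client("glue")

# Start an interactive session with an enforced idle timeout and a capped
# worker count, so abandoned notebooks stop consuming resources.
glue.create_session(
    Id="analytics-session-1",
    Role="arn:aws:iam::111122223333:role/GlueInteractiveSessionRole",
    Command={"Name": "glueetl", "PythonVersion": "3"},
    IdleTimeout=30,          # stop the session after 30 idle minutes
    NumberOfWorkers=2,
    WorkerType="G.1X",
    GlueVersion="4.0",
)
```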
Manage Partitions for Amazon S3 Tables with AWS Glue Data Catalog
Organizations that process large volumes of data usually store it in Amazon Simple Storage Service (Amazon S3) and query the data using distributed analytics engines such as Amazon Athena. If queries are run without considering the optimal data layout, it can result in a high volume of read requests, resulting in longer query runtimes and higher costs. Amazon S3 tables backed by the AWS Glue Data Catalog are a way to optimize query performance. AWS Glue Data Catalog adds structure to your data, enabling partition maintenance, and provides the ability to query only the partitions you need.
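As an illustration, partition-aware access through the Data Catalog might look like this boto3 sketch, which filters partitions server-side instead of listing an entire table (the names are placeholders):

```python
import boto3

glue = boto3.client("glue")

# Fetch only the partitions a query actually needs, instead of scanning
# everything under the S3 prefix.
partitions = glue.get_partitions(
    DatabaseName="sales_db",
    TableName="orders",
    Expression="year = '2023' AND month = '06'",
)
for p in partitions["Partitions"]:
    print(p["Values"], p["StorageDescriptor"]["Location"])
```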
Explore Amazon OpenSearch Service’s Vector Database Capabilities
Amazon OpenSearch Service’s vector database capabilities can be used to implement a variety of applications, such as semantic search, Retrieval Augmented Generation (RAG) with LLMs, recommendation engines, and search in rich media. Taking advantage of this technology can enable your organization to unlock valuable insights from complex datasets.
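A minimal sketch of the underlying k-NN vector index feature, using the opensearch-py client; the endpoint, index name, field names, and dimension are placeholders:

```python
from opensearchpy import OpenSearch

# Placeholder endpoint - supply your domain's host and credentials.
client = OpenSearch(hosts=[{"host": "my-domain.example.com", "port": 443}],
                    use_ssl=True)

# Create an index whose "embedding" field stores vectors for k-NN search.
client.indices.create(
    index="docs",
    body={
        "settings": {"index.knn": True},
        "mappings": {
            "properties": {
                "embedding": {"type": "knn_vector", "dimension": 768},
                "text": {"type": "text"},
            }
        },
    },
)

# Retrieve the 5 nearest neighbours of a query embedding (placeholder vector).
results = client.search(index="docs", body={
    "size": 5,
    "query": {"knn": {"embedding": {"vector": [0.1] * 768, "k": 5}}},
})
```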
Improve Onboarding and Integration with ThoughtSpot and Amazon Redshift
Amazon Redshift is a powerful cloud data warehouse used by tens of thousands of customers to analyze huge amounts of data and run complex analytical queries. ThoughtSpot’s partner integration with Amazon Redshift accelerates the onboarding process and ensures seamless integration when setting up a data warehouse. This makes it easier for customers to use the data warehouse to make data-driven decisions.
Build an Amazon Redshift Data Warehouse Using DynamoDB
Amazon DynamoDB is a fully managed NoSQL service that delivers single-digit millisecond performance at any scale. It’s used by thousands of customers for mission-critical workloads, typically for applications handling a high volume of transactions or gaming applications that need to maintain scorecards for players and games. Amazon Redshift data warehouses can be built using an Amazon DynamoDB single-table design, allowing the data warehouse to scale with your needs.
Stream VPC Flow Logs to Datadog with Amazon Kinesis Data Firehose
Logs generated by applications and services are important for a variety of reasons, such as compliance, audits, troubleshooting, security incident responses, and more. Amazon Kinesis Data Firehose can be used to stream VPC Flow Logs to Datadog for log analysis and to gain insights into user application behavior and patterns.
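As a rough sketch, a Firehose delivery stream targeting Datadog's HTTP intake could be created like this (the ARNs and API key are placeholders, and the intake URL should be verified for your Datadog site):

```python
import boto3

firehose = boto3.client("firehose")

# Delivery stream that forwards records to Datadog's HTTP intake and backs
# up failed records to S3.
firehose.create_delivery_stream(
    DeliveryStreamName="vpc-flow-logs-to-datadog",
    DeliveryStreamType="DirectPut",
    HttpEndpointDestinationConfiguration={
        "EndpointConfiguration": {
            "Name": "Datadog",
            "Url": "https://aws-kinesis-http-intake.logs.datadoghq.com/v1/input",
            "AccessKey": "YOUR_DATADOG_API_KEY",
        },
        "S3BackupMode": "FailedDataOnly",
        "S3Configuration": {
            "RoleARN": "arn:aws:iam::111122223333:role/FirehoseDeliveryRole",
            "BucketARN": "arn:aws:s3:::my-firehose-backup-bucket",
        },
    },
)
```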
Accelerate Data Science Feature Engineering with Apache Iceberg
Apache Iceberg is an open-source framework that can be used to accelerate data science feature engineering on transactional data lakes. When used with Amazon Athena, it provides an interactive query service that makes it easy to analyze data in Amazon Simple Storage Service (Amazon S3) and other data sources.
Multi-tenancy Apache Kafka Clusters in Amazon MSK
To process streaming data, organizations can use Amazon Managed Streaming for Apache Kafka (Amazon MSK) to build and run applications that use Apache Kafka. Amazon MSK supports two types of quotas: user quotas and cluster quotas, which are integral to multi-tenant Kafka clusters. User quotas prevent individual users from overconsuming cluster resources, while cluster quotas prevent the entire cluster from being impacted by any one user.
Ingest, Transform, and Deliver Events from Amazon Security Lake
Amazon Security Lake provides a centralized place to access all your security-related data such as findings from AWS Security Hub, DNS query data from Amazon Route 53, and more. Amazon OpenSearch Service can be used to ingest, transform, and deliver the events published by Amazon Security Lake, making it easier to unlock valuable insights.
Optimize Queries with Dataset Parameters in Amazon QuickSight
Amazon QuickSight is a unified business intelligence (BI) platform that enables users to explore and collaborate on data in near real-time. Dataset parameters are a new type of parameter in QuickSight that can help you optimize your queries and achieve better performance. These parameters provide a way to dynamically specify values for a query and can help you reduce the amount of data scanned, resulting in faster query response times.
KeyCore is an AWS-certified Danish consultancy that provides both professional services and managed services. When it comes to data lakes, KeyCore can help you unlock the value of your data and gain insights faster by leveraging the power of Amazon QuickSight. We can also assist you in setting up multi-tenant Apache Kafka clusters in Amazon MSK, streamlining the onboarding and integration process with ThoughtSpot and Amazon Redshift, and enhancing query optimization with Amazon QuickSight dataset parameters. For more information about KeyCore and our offerings, please visit our website.
Read the full blog posts from AWS
- Enable business users to analyze large datasets in your data lake with Amazon QuickSight
- Enforce boundaries on AWS Glue interactive sessions
- Get started managing partitions for Amazon S3 tables backed by the AWS Glue Data Catalog
- Amazon OpenSearch Service’s vector database capabilities explained
- Accelerate onboarding and seamless integration with ThoughtSpot using Amazon Redshift partner integration
- Build an Amazon Redshift data warehouse using an Amazon DynamoDB single-table design
- Stream VPC Flow Logs to Datadog via Amazon Kinesis Data Firehose
- Accelerate data science feature engineering on transactional data lakes using Amazon Athena with Apache Iceberg
- Multi-tenancy Apache Kafka clusters in Amazon MSK with IAM access control and Kafka Quotas – Part 1
- Multi-tenancy Apache Kafka clusters in Amazon MSK with IAM access control and Kafka quotas – Part 2
- Ingest, transform, and deliver events published by Amazon Security Lake to Amazon OpenSearch Service
- Optimize queries using dataset parameters in Amazon QuickSight
Networking & Content Delivery
Customize AWS WAF Rate-Based Rule Blocking Period
AWS WAF is Amazon Web Services’ cloud-based web application firewall, designed to help protect web applications from malicious actors. By customizing how long rate-based rules keep offending clients blocked, this feature helps stop malicious actors from reusing the same set of IP addresses to generate HTTP request floods: rule breakers stay blocked for longer instead of being released as soon as their request rate drops.
By default, a rate-based rule blocks an IP address only while its request rate stays above the configured limit, which AWS WAF evaluates over a trailing time window; once the rate falls below the limit, the address is unblocked. The post shows how to extend this behavior so that offending addresses remain blocked for a custom period that matches your security needs.
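For context, a baseline rate-based rule is defined against the WAFv2 API roughly as in the sketch below; the post's customization then builds on this to keep offenders blocked longer (all names and limits here are illustrative):

```python
import boto3

wafv2 = boto3.client("wafv2")

# A rate-based rule statement: block an IP once it exceeds 1,000 requests
# within the trailing evaluation window.
rate_rule = {
    "Name": "rate-limit",
    "Priority": 0,
    "Statement": {
        "RateBasedStatement": {"Limit": 1000, "AggregateKeyType": "IP"},
    },
    "Action": {"Block": {}},
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "rate-limit",
    },
}

wafv2.create_web_acl(
    Name="demo-web-acl",
    Scope="REGIONAL",
    DefaultAction={"Allow": {}},
    Rules=[rate_rule],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "demo-web-acl",
    },
)
```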
At KeyCore, our expert AWS consultants can advise you on the best course of action to ensure your application is secure. We provide a range of professional services and managed services to help you configure the right WAF settings and develop an effective security strategy for your business.
Mitigate Common Web Threats With One Click in Amazon CloudFront
Amazon CloudFront is a content delivery service designed to deliver web content faster and more securely. With the new one-click protection feature, users can set up and monitor AWS WAF protections to their Amazon CloudFront distributions. This simplifies the set-up process and allows customers to quickly and efficiently mitigate common web threats.
The one-click feature provides layer-7 protections, offering additional security measures such as SQL injection and cross-site scripting prevention. The feature also provides pricing and security recommendations to better inform customers of the cost and security of their CloudFront distributions.
At KeyCore, our AWS experts can take the guesswork out of the process by helping you to quickly set up and configure the one-click protection feature. We offer a range of managed services and professional services to ensure your Amazon CloudFront distributions are secure and running optimally.
Estimating Radio Coverage for Your Network With AWS Private 5G
AWS Private 5G provides customers with a radio frequency (RF) estimator to help them determine the number of units needed to meet their coverage and capacity requirements. This blog walks customers through how to use the RF estimator to accurately interpret their network requirements.
The RF estimator requires three pieces of information: the RF coverage requirements, the link budget requirements and the general area of coverage. All of this information can be inputted into the estimator to accurately predict the number of units needed. Once these have been determined, customers can then begin to configure and deploy the AWS Private 5G units.
At KeyCore, we provide AWS managed services and professional services to help you configure and deploy AWS Private 5G quickly and effectively. Our expert AWS consultants can advise you on the best course of action to ensure your network is running optimally.
Hybrid Security Inspection Architectures With AWS Cloud WAN and AWS Direct Connect
AWS Cloud WAN makes it easy for customers to build and operate wide area networks that connect data centers, branch offices and Amazon Virtual Private Clouds (VPCs). With Cloud WAN, customers can connect to AWS through their chosen local network provider and then use a central dashboard and network policies to create a secure hybrid network.
AWS Direct Connect can also be used to provide secure hybrid connections to AWS. This private, dedicated connection offers a range of benefits, including improved security, reduced network costs and increased network performance. With AWS Cloud WAN and AWS Direct Connect customers can create a secure hybrid network for their applications.
At KeyCore, our expert AWS consultants can help you create a secure hybrid network. We specialize in helping customers configure and deploy AWS Cloud WAN and AWS Direct Connect, so you can enjoy improved network performance and security. Our managed services and professional services ensure our customers’ networks are configured optimally for their needs.
Read the full blog posts from AWS
- Customize AWS WAF rate-based rule blocking period
- Mitigate Common Web Threats with One Click in Amazon CloudFront
- Estimating radio coverage for your network with AWS Private 5G
- Hybrid security inspection architectures with AWS Cloud WAN and AWS Direct Connect
AWS Compute Blog
Testing and Deploying AWS Lambda Functions with AWS Step Functions
In recent years, serverless developers have looked for efficient ways to test and deploy their applications in the AWS Cloud without having to mock IAM permissions, external services, or environment configuration. This post explores how to do this using the new AWS SAM remote invoke feature for testing, and the versions and aliases feature in AWS Step Functions for deploying.
Testing with AWS SAM Remote Invoke
The AWS Serverless Application Model (AWS SAM) has been available to help streamline the development process for serverless applications. AWS SAM provides a command line interface (CLI) to make it easy to create, test, and debug serverless applications. The new AWS SAM remote invoke feature makes it possible to easily invoke a Lambda function for testing, without needing to worry about setting up environment variables or other related resources.
Using the remote invoke feature, developers can invoke a deployed Lambda function with a single command. The command sends an event payload to the function running in the cloud and returns its response, so the function executes with its real environment variables, permissions, and downstream resources. This makes testing Lambda functions much simpler and faster, as developers no longer need to mock resources or set up test databases.
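Under the hood this corresponds to the Lambda Invoke API; a minimal boto3 equivalent of `sam remote invoke` might look like this (the function name and payload are placeholders):

```python
import json
import boto3

lambda_client = boto3.client("lambda")

# Invoke the *deployed* function, so it runs with its real IAM role,
# environment variables, and downstream resources.
response = lambda_client.invoke(
    FunctionName="my-deployed-function",
    Payload=json.dumps({"orderId": "test-123"}),
)
print(json.loads(response["Payload"].read()))
```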
Deploying State Machines with Versions and Aliases in AWS Step Functions
AWS Step Functions is a service that allows developers to create workflows for application logic. This can be used to coordinate multiple AWS services and define the flow of their applications. The new versions and aliases feature in AWS Step Functions makes it possible to run specific revisions of a state machine instead of the latest. This provides customers with more control over their deployments, and allows them to have more reliable deployments while controlling risk.
Using the versions and aliases feature in AWS Step Functions, customers can deploy a new revision of their application logic to a specific version and then promote it to an alias. This allows customers to test the new version before making it available to their users. Developers can also easily rollback to a previous version if needed.
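A sketch of that flow with boto3, using the PublishStateMachineVersion and CreateStateMachineAlias APIs (the state machine ARN and alias name are placeholders):

```python
import boto3

sfn = boto3.client("stepfunctions")

# Publish the current revision of a state machine as an immutable version.
version = sfn.publish_state_machine_version(
    stateMachineArn="arn:aws:states:eu-west-1:111122223333:stateMachine:orders",
    description="Adds fraud-check step",
)

# Point a "prod" alias at the new version; callers start executions against
# the alias ARN, so rolling back is just a routing change.
sfn.create_state_machine_alias(
    name="prod",
    routingConfiguration=[
        {"stateMachineVersionArn": version["stateMachineVersionArn"],
         "weight": 100},
    ],
)
```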
How KeyCore Can Help
KeyCore is the leading Danish AWS consultancy. We provide both professional services and managed services to help customers get the most out of their serverless applications on AWS. Our team of AWS experts can help your organization develop, deploy, and debug serverless applications on AWS quickly and efficiently using the AWS SAM remote invoke feature and the versions and aliases feature in AWS Step Functions.
Read the full blog posts from AWS
- Testing AWS Lambda functions with AWS SAM remote invoke
- Deploying state machines incrementally with versions and aliases in AWS Step Functions
AWS for M&E Blog
How Media & Entertainment Companies Can Leverage AWS Services to Personalize Content, Provide Media Metadata at Scale, and Unlock New Revenue Potential
Content Personalization with AI/ML
With Artificial Intelligence (AI) and Machine Learning (ML), digital publishers can create personalized user experiences and display and recommend content that is tailored to the interests of an individual user. This content personalization can improve customer satisfaction, increase user engagement, and reduce the workload of editorial and other teams.
Media Metadata with AWS Serverless Application Model
Media & Entertainment (M&E) companies often have large collections of media files stored in Amazon Simple Storage Service (Amazon S3). Before these files can be used, technical metadata is needed about video codecs, frame rates, audio channels, duration, and more. This is where MediaInfo, an open-source command-line utility, can help: it extracts the required technical metadata from media files. By leveraging the AWS Serverless Application Model (AWS SAM), M&E companies can run MediaInfo at scale.
Unlock New Revenue Potential with Atmosphere
With Atmosphere, brands can reach consumers while they are in restaurants, bars, gyms, medical offices, airports, and beyond. To broaden access to these audiences, Atmosphere has built a free ad-supported streaming service on cloud-based technologies from Amazon Web Services (AWS). This service unlocks new revenue potential for businesses and allows them to monetize their content with ads.
KeyCore’s AWS Services for Media & Entertainment
At KeyCore, our advanced AWS Consultancy provides professional and managed services to help Media & Entertainment companies leverage AWS services. Whether it’s content personalization with AI/ML, media metadata with AWS Serverless Application Model, or unlocking new revenue potential with Atmosphere, our experts can provide the help you need.
Read the full blog posts from AWS
- The benefits of content personalization for digital publishing
- Leverage AWS Serverless Application Model to run MediaInfo at scale
- Atmosphere scales streaming service for businesses, unlocks new revenue potential with AWS
AWS Storage Blog
Encrypt and Decrypt Files with PGP and AWS Transfer Family – How KeyCore Can Help
Protecting sensitive data is a priority for many companies, especially those in industries like financial services and healthcare. These customers need to securely exchange files containing sensitive information like Personal Identifiable Information (PII) and financial records with their users. To do so, they often rely on Pretty Good Privacy (PGP) encryption to ensure regulatory and data policy compliance.
What is PGP Encryption?
PGP is a type of encryption that uses cryptographic algorithms and digital signatures to secure data. It helps to ensure that only authorized parties can gain access to confidential information. It also allows for verifiable proof of data integrity and non-repudiation of data.
AWS Transfer Family and PGP
AWS Transfer Family is a fully managed service that enables customers to securely transfer files over the internet using protocols such as SFTP and FTPS. It now also supports PGP decryption of uploaded files through managed workflows, helping customers ensure that only authorized parties can access their data.
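As a rough illustration of the client side of such a workflow, the sketch below encrypts a file with the python-gnupg library before uploading it over SFTP; the recipient and file names are placeholders, and this is not the blog post's code.

```python
import gnupg

gpg = gnupg.GPG()

# Encrypt a report for a recipient whose public key is already imported
# into the local keyring (recipient ID and file names are placeholders).
with open("report.csv", "rb") as f:
    result = gpg.encrypt_file(
        f,
        recipients=["partner@example.com"],
        output="report.csv.pgp",
    )

# The encrypted file can now be uploaded via AWS Transfer Family; a managed
# workflow on the server side can decrypt it with the matching private key.
assert result.ok, result.status
```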
How KeyCore Can Help
At KeyCore, we specialize in helping customers take advantage of all the services offered by AWS. Our team of experienced AWS Certified Solutions Architects can help you securely migrate your sensitive data to AWS Transfer Family and set up PGP encryption. Additionally, we offer both professional services and managed services to help you get the most out of your AWS solutions. Contact us today to learn more about how we can help you secure your data with AWS Transfer Family and PGP encryption.
Read the full blog posts from AWS
AWS Architecture Blog
Developing Open Source Solutions on AWS: A Guide
Open-source technology has been gaining traction for quite some time, and AWS has been at the forefront of leveraging and amplifying it. AWS is committed to continuing the support for open-source development and ensuring that customers have the right tools to build and deploy open-source solutions on AWS. In this article, we will discuss the process of developing open-source and how AWS addresses the needs of developers.
Open Source Software on AWS
AWS offers a wide range of managed services built on or around open-source technologies, such as Amazon RDS for open-source database engines like MySQL and PostgreSQL. This helps developers quickly and easily deploy their open-source solutions on the AWS Cloud. Additionally, AWS provides tools and services such as AWS CloudFormation, AWS CodeDeploy, and AWS CodePipeline, which let developers quickly deploy, test, and monitor their applications.
Developing Open Source Solutions on AWS
Developing open-source solutions on AWS is a relatively simple process. First, developers need to identify the open-source technology they wish to use. After selecting the technology, developers can use AWS tools and services to deploy the open-source solution on the AWS cloud. This can be done within a few hours, depending upon the complexity of the application and the number of services that need to be deployed. Once the application is deployed, developers need to monitor and maintain the application. Additionally, they need to ensure that the application is secure and that it meets the desired performance standards.
Benefits of Developing on AWS
Developing on AWS offers several advantages. For starters, AWS provides a range of managed services that allow developers to quickly and easily deploy their applications. Furthermore, AWS also offers tools, such as CloudFormation and CodeDeploy, which enable developers to test and monitor their applications. Additionally, AWS provides a secure and reliable environment in which to develop open-source solutions. This ensures that the applications are secure and perform as expected.
KeyCore and Open Source Solutions On AWS
At KeyCore, we are dedicated to providing our customers with the best-in-class services to help them develop and deploy open-source solutions on the AWS cloud. Our team of experts can help you select the right tools and services for your needs and provide you with the guidance and expertise you need to ensure that your application is secure and performs as expected. We can also help you monitor and maintain your applications to ensure that they are always up to date and secure.
Read the full blog posts from AWS
AWS Partner Network (APN) Blog
Overview of AWS Partner Network (APN) Blog Articles
Wipro’s Best Practices for Conducting AWS Well-Architected Reviews Using the SAP Lens
Wipro is an AWS Premier Tier Services Partner, and has helped multiple SAP customers running their SAP applications on AWS to achieve agility, innovation at high speed, and improved resilience. This post highlights best practices and learnings from Well-Architected Framework Reviews conducted by Wipro using the SAP Lens. Wipro’s approach focuses on conducting reviews with customers as a joint team exercise with the customer’s technology and business teams involved, and with the customer taking ownership of the overall review and the subsequent action items.
Accelerating Public Sector Financial Processes with Baker Tilly Digital’s Procure-to-Pay Portal on AWS
Baker Tilly Digital developed a serverless procure-to-pay portal on AWS deployed in AWS GovCloud (US) for a government contractor. The P2P portal helped automate vendor management, improve the pre-award capabilities, transform the purchase ordering system, and simplify invoicing. The solution leveraged AWS Lambda, AWS IAM, Amazon API Gateway, and Amazon DynamoDB to securely store and process data while providing an end-to-end user experience.
Accelerating Healthcare Data Management for Digital Transformation with Emids CoreLAKE
Emids CoreLAKE is a low-code data management platform built by healthcare practitioners to accelerate healthcare data modernization. It’s a suite of accelerators with pre-built capabilities that leverage AWS services to modernize healthcare data in a reliable, scalable, and cost-efficient fashion. CoreLAKE also lets healthcare organizations easily establish seamless integrations with existing internal systems as well as with their partner and client ecosystems.
Elastio Integrates with AWS Backup for Secure Backups to Enhance Ransomware Defense
Elastio’s integration between its Cyber Recovery as a Service (CRaaS) platform and AWS Backup enables customers to securely back up data and protect against ransomware. The integration runs from within the customer’s AWS account, and Elastio does not have access to view or take custody of customer data, nor does it have access to encryption keys. AWS customers maintain control over policy details, including which account(s) to run it in, what assets to scan, and whether to automatically scan or do so on a point-in-time basis.
Act Now or Lag Behind: Why SaaS Leaders Can’t Afford to Overlook PLG on AWS
Product-led growth (PLG) is a game-changing strategy for SaaS businesses looking to reduce customer acquisition costs, widen their funnel, and expand globally. By leveraging the capabilities of AWS SaaS Factory, AWS Partners can implement PLG tactics to unlock the potential of their product. Everyone from boards, investors, founders, and C-suite executives to individual team members can benefit from understanding this strategy.
How MHP SOUNCE Enhances Shopfloor Quality Control by Analyzing Acoustic Anomalies
MHP built an artificial intelligence-supported acoustic testing solution named MHP SOUNCE for shopfloors and manufacturing lines. Leveraging the capabilities of AWS, MHP SOUNCE is able to carry out noise-based quality testing operations in a reliable, scalable fashion, while at the same time being completely modular and cost-efficient. MHP SOUNCE scans audio samples to detect and classify common anomalies, such as mechanical or electrical faults, to improve quality control practices.
Achieving Compliance with Healthcare Regulations Using safeINIT’s HIPAA-Compliant Environment
SafeINIT developed an infrastructure-as-code HIPAA-compliant environment for healthcare applications on AWS. The environment is designed specifically to protect sensitive data, and leverages AWS capabilities such as AWS CloudFormation, AWS Identity and Access Management (IAM), AWS Key Management Service (KMS), AWS Config, and AWS CodeBuild to enable organizations to achieve and maintain compliance with healthcare regulations.
Say Hello to 176 AWS Competency, Service Delivery, Service Ready, and MSP Partners Added or Renewed in May
176 AWS Partners received new or renewed designations in May for AWS Competency, AWS Managed Service Provider (MSP), AWS Service Delivery, and AWS Service Ready programs. These designations span workload, solution, and industry, and help AWS customers identify top AWS Partners that can deliver on core business objectives. AWS Partners are focused on customer success, helping customers take full advantage of the business benefits AWS has to offer.
Using Computer Vision to Enable Digital Building Twins with NavVis and AWS
NavVis and AWS collaborated to build a digital building twin for a large industry customer. The twin leveraged object detection algorithms and machine learning to automate and scale the creation of a digital building twin, providing accurate ground truth data for existing brownfield buildings. The solution leveraged AWS services such as AWS Lambda, Amazon S3, and Amazon Rekognition to provide an end-to-end user experience.
Building Multi-Edge Data Architectures on AWS Wavelength and MongoDB
MongoDB and AWS collaborated to build multi-edge data architectures on AWS Wavelength for low-latency experiences across the industrial Internet of Things (IIoT), media and entertainment, automotive, and beyond. The post builds on an earlier demonstration of MongoDB Realm on AWS Wavelength, amplifying the existing architecture with features such as the AWS Load Balancer Controller and showing how to deploy an Application Load Balancer in front of the database.
5 Stages to Building a Successful Partner Practice with AWS
Having a dedicated AWS practice can be essential in getting the most out of your AWS Partner Network (APN) involvement. This post describes the five stages of a successful partner practice: exploration, learning, building, launching, and sustaining. At each stage, APN resources and benefits can be optimized to help partners be successful in supporting their mutual customers. KeyCore Consulting always puts our customers’ success first and can help partners develop and optimize their AWS practice.
Read the full blog posts from AWS
- Wipro’s Best Practices for Conducting AWS Well-Architected Reviews Using the SAP Lens
- Accelerating Public Sector Financial Processes with Baker Tilly Digital’s Procure-to-Pay Portal on AWS
- Accelerating Healthcare Data Management for Digital Transformation with Emids CoreLAKE
- Elastio Integrates with AWS Backup for Secure Backups to Enhance Ransomware Defense
- Act Now or Lag Behind: Why SaaS Leaders Can’t Afford to Overlook PLG on AWS
- How MHP SOUNCE Enhances Shopfloor Quality Control by Analyzing Acoustic Anomalies
- Achieving Compliance with Healthcare Regulations Using safeINIT’s HIPAA-Compliant Environment
- Say Hello to 176 AWS Competency, Service Delivery, Service Ready, and MSP Partners Added or Renewed in May
- Using Computer Vision to Enable Digital Building Twins with NavVis and AWS
- Building Multi-Edge Data Architectures on AWS Wavelength and MongoDB
- 5 Stages to Building a Successful Partner Practice with AWS
AWS Cloud Enterprise Strategy Blog
Navigating the Cloud Migration Bubble: Increasing Business Value from Cloud Governance
Questioning Prioritization Conventions
It has become common practice in meetings to ask whether something is a priority without questioning the underlying assumptions; yet when something goes wrong, it is usually attributed to a failure of prioritization. Organizations should bear in mind that cloud governance is a key factor in the success of digital transformation and cloud migration initiatives.
Cloud Governance for Digital Transformation
When launching digital transformation initiatives, we advise AWS customers to consider that cloud migration is a part of the process. Establishing effective cloud governance programs is essential to ensure successful migration and transformation. After the cloud migration portion is complete, teams should maintain a focus on governance with the goal of delivering business outcomes.
Turning the Migration Bubble into a Blip
The cloud has revolutionized how businesses operate by providing scalability, flexibility, and innovation at a much faster rate. However, transitioning to the cloud can bring with it unexpected costs, complexities, and challenges that can disrupt the process. Nonetheless, organizations can use cloud governance to manage and mitigate these issues, effectively turning the migration bubble into a blip.
How KeyCore Can Help
KeyCore’s advanced AWS consultants have years of experience in helping companies manage their cloud migration and digital transformation initiatives. By leveraging our expertise in the best cloud governance practices, we can help you navigate the migration bubble and ensure successful business outcomes. Reach out to us today and discover how KeyCore can help your business.
Read the full blog posts from AWS
- Should You Prioritize?
- Increase Business Value from the Cloud with Effective Cloud Governance
- Navigating the Cloud Migration Bubble: Turning Your Bubble into a Blip
AWS HPC Blog
Understanding High Throughput Compute Grid Blueprint
The High Throughput Compute (HTC) Grid Blueprint is a solution to the challenges faced by Financial Services Industry (FSI) organizations for high throughput computing on AWS. In this post, we discuss the operational characteristics of HTC-Grid, how Novo Nordisk approached the deployment of a scale-out HPC platform for running AlphaFold, and how AWS ParallelCluster 3.6 can help you to customize Slurm settings and improve reproducibility and self-documentation for your HPC infrastructure.
HTC-Grid’s Operational Characteristics
The HTC-Grid blueprint is designed to meet the challenges faced by FSI organizations for high throughput computing on AWS. Specifically, this solution has been designed with four operational characteristics in mind: latency, throughput, scalability, and cost.
Latency is a measure of the time it takes for a request to be processed by the HPC platform. The HTC-Grid blueprint is designed to minimize latency by using Amazon EC2 instance types optimized for compute-intensive workloads such as the C-type instance families.
Throughput is the rate at which tasks can be processed by the HPC platform. The HTC-Grid blueprint is designed to maximize throughput by using Amazon EC2 instance families that are optimized for high-performance computing (HPC) workloads such as the MPI-enabled P3 instance families.
Scalability is the ability of the HPC platform to grow or shrink to meet changing workload demands. The HTC-Grid blueprint is designed to maximize scalability by using Amazon EC2 Auto Scaling groups and EC2 Spot Fleet to rapidly adjust the size of the platform as demand rises or falls.
Cost is the total cost of running the HPC platform. The HTC-Grid blueprint is designed to minimize cost by using Amazon EC2 Spot Instances and Amazon EC2 Reserved Instances to maximize compute cost savings.
Deploying an HPC Platform with AWS Batch and AWS ParallelCluster
Novo Nordisk used AWS Batch and AWS ParallelCluster to deploy a scale-out HPC platform for running AlphaFold. AWS Batch is a fully managed compute service that helps customers automate the process of managing and running batch jobs. It allows customers to easily set up compute resources and run jobs without having to manage complex infrastructure.
Novo Nordisk used AWS ParallelCluster to manage and configure the HPC platform. AWS ParallelCluster is an open-source cluster management tool that makes it easy to deploy and manage HPC clusters in the cloud. With AWS ParallelCluster 3.6, customers can directly specify Slurm settings in the cluster config file, allowing for improved reproducibility and another step towards self-documentation for their HPC infrastructure.
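As a rough illustration of what that looks like, the sketch below writes a minimal cluster configuration that sets Slurm parameters through the CustomSlurmSettings key and creates the cluster with the pcluster CLI. The region, subnet, key pair, instance types, and the specific Slurm parameter are all placeholders, and the exact schema should be checked against the ParallelCluster 3.6 documentation.

```python
# Sketch: create a cluster whose Slurm settings live directly in the
# config file (CustomSlurmSettings was introduced in ParallelCluster 3.6).
# Requires the CLI: pip install aws-parallelcluster
import subprocess

CLUSTER_CONFIG = """\
Region: eu-west-1
Image:
  Os: alinux2
HeadNode:
  InstanceType: c5.xlarge
  Networking:
    SubnetId: subnet-0123456789abcdef0   # placeholder
  Ssh:
    KeyName: my-key-pair                 # placeholder
Scheduling:
  Scheduler: slurm
  SlurmSettings:
    CustomSlurmSettings:                 # Slurm params in the cluster config
      - ScronParameters: enable          # illustrative parameter only
  SlurmQueues:
    - Name: compute
      ComputeResources:
        - Name: c5large
          InstanceType: c5.large
          MinCount: 0
          MaxCount: 64
      Networking:
        SubnetIds:
          - subnet-0123456789abcdef0     # placeholder
"""

with open("cluster-config.yaml", "w") as f:
    f.write(CLUSTER_CONFIG)

subprocess.run(
    ["pcluster", "create-cluster",
     "--cluster-name", "htc-demo",
     "--cluster-configuration", "cluster-config.yaml"],
    check=True,
)
```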
How KeyCore Can Help
At KeyCore, our team of AWS Certified Solutions Architects is experienced in deploying high-performance computing (HPC) workloads on AWS. We specialize in helping customers design and deploy secure, compliant, and cost-effective HPC solutions that are tailored to their unique requirements. Our team can provide custom design and implementation guidance for all aspects of your HPC workload, including but not limited to:
- Designing and deploying high throughput compute (HTC) grid architectures
- Setting up compute resources with AWS Batch
- Configuring and managing clusters with AWS ParallelCluster
- Optimizing cluster settings for improved reproducibility and self-documentation
- Designing and deploying cost-effective HPC solutions with Amazon EC2 Spot Instances and Amazon EC2 Reserved Instances
If you are looking to design and deploy a secure, compliant, and cost-effective HPC solution on AWS, contact us today to learn how KeyCore can help!
Read the full blog posts from AWS
- Customize Slurm settings with AWS ParallelCluster 3.6
- Protein Structure Prediction at Scale using AWS Batch
- HTC-Grid – examining the operational characteristics of the high throughput compute grid blueprint
AWS Cloud Operations & Migrations Blog
Migrating and Automating Patching at Scale with AWS Application Migration Service
AWS Application Migration Service (AWS MGN) is a highly automated rehosting solution that helps customers move applications from their existing infrastructure to the AWS Cloud. It provides the necessary tools for migrating applications in an automated and secure manner. Additionally, AWS MGN offers a powerful patching process that allows customers to keep their applications up to date and secure.
Automated Patching with AWS MGN
AWS MGN provides an automated patching process that helps customers reduce manual effort associated with maintaining applications. The process includes the following steps:
- The customer creates a patching plan in AWS MGN specifying the patching schedule, patch target, and patch frequency.
- AWS MGN automatically finds applicable patches for the target environment and applies them to the environment.
- The customer can monitor the patching process in the AWS MGN console.
This process can be used to patch both on-premises and cloud-based applications. Additionally, customers can use it to automate patching of applications after migrating them to the cloud.
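The post walks through this flow in the MGN console. As a loose illustration of the patching step itself, the sketch below runs the AWS-RunPatchBaseline document through AWS Systems Manager against instances tagged by a migration wave; the tag key and value are assumptions for this sketch.

```python
import boto3

ssm = boto3.client("ssm")

# Patch instances tagged as part of a migration wave.
# The tag key and value are placeholders for this sketch.
response = ssm.send_command(
    Targets=[{"Key": "tag:MigrationWave", "Values": ["wave-1"]}],
    DocumentName="AWS-RunPatchBaseline",    # AWS-managed patching document
    Parameters={"Operation": ["Install"]},  # use "Scan" to report only
    Comment="Post-migration patching sketch",
)
print(response["Command"]["CommandId"])
```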
Benefits of Automated Patching
Automating the patching process with AWS MGN has a number of benefits. It allows customers to save time and resources by eliminating manual effort associated with patching. Additionally, it ensures that applications are kept up to date with the latest security patches, helping customers maintain a secure environment. Finally, automated patching can help customers reduce application downtime by applying patches on a regular schedule.
Approach to Migrate Spring Cloud Microservices Applications to Amazon EKS
Enterprises can use the AWS Cloud to migrate their on-premises Spring Cloud microservices applications to Amazon Elastic Kubernetes Service (Amazon EKS) and take advantage of managed service offerings from AWS. With Amazon EKS, developers can eliminate the need to run and manage cross-cutting services such as Service Registry, Config Server, and API Gateway, and instead focus on developing cloud-native applications.
Migrating Spring Cloud Applications to Amazon EKS
Migrating Spring Cloud applications to Amazon EKS can be divided into three steps (a minimal sketch of step two follows the list):
- Prepare the application and infrastructure: This involves setting up the development environment, creating the Docker images, and deploying the application.
- Migrate to Amazon EKS: This involves creating an Amazon EKS cluster and deploying the application to it.
- Optimize: This involves optimizing the application for the cloud, such as configuring autoscaling and load balancing.
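Here is the minimal sketch of step two referenced above, creating the EKS control plane with boto3; the IAM role ARN and subnet IDs are placeholders, and in practice most teams would use eksctl or infrastructure as code instead.

```python
import boto3

eks = boto3.client("eks")

# Minimal control-plane creation; role ARN and subnets are placeholders.
eks.create_cluster(
    name="spring-cloud-demo",
    roleArn="arn:aws:iam::123456789012:role/eksClusterRole",
    resourcesVpcConfig={
        "subnetIds": ["subnet-aaa111", "subnet-bbb222"],
    },
)

# Cluster creation is asynchronous; wait until it is ACTIVE before
# deploying workloads to it.
waiter = eks.get_waiter("cluster_active")
waiter.wait(name="spring-cloud-demo")
```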
Benefits of Migrating to Amazon EKS
Migrating Spring Cloud applications to Amazon EKS provides a number of benefits. It allows developers to easily deploy and manage applications on Amazon EKS, while taking advantage of the managed service offerings from AWS. Additionally, it enables developers to quickly iterate on application development and improve the performance of their applications. Finally, it helps developers reduce complexity and costs associated with managing and running applications.
Creating a Near-Realtime Dashboard on Amazon CloudWatch for a Migration Use Case
Monitoring performance metrics of AWS resources is crucial for any business use case running in the cloud. To ensure observability of the infrastructure, AWS Well-Architected Framework best practices recommend that customers set up metrics at scale. Amazon CloudWatch is a powerful service for collecting, analyzing, and monitoring metrics and logs from AWS resources, and customers can use it to build near-real-time dashboards for their use cases.
Creating a Dashboard in Amazon CloudWatch
Creating a dashboard in Amazon CloudWatch involves the following steps (a boto3 sketch follows the list):
- Create the metrics: Customers can create custom metrics from the logs and events generated by the application.
- Enable alarms: Customers can set alarms on the metrics to trigger notifications when the metrics exceed certain thresholds.
- Create dashboards: Customers can create dashboards to visualize the metrics in Amazon CloudWatch.
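The boto3 sketch below runs through those three steps end to end; the namespace, metric name, threshold, and dashboard layout are all illustrative.

```python
import json
import boto3

cw = boto3.client("cloudwatch")

# 1. Publish a custom metric (e.g., replication lag for a migration).
cw.put_metric_data(
    Namespace="Migration/Demo",  # illustrative namespace
    MetricData=[{"MetricName": "ReplicationLagSeconds", "Value": 42.0}],
)

# 2. Alarm when the metric exceeds a threshold.
cw.put_metric_alarm(
    AlarmName="migration-lag-high",
    Namespace="Migration/Demo",
    MetricName="ReplicationLagSeconds",
    Statistic="Average",
    Period=60,
    EvaluationPeriods=3,
    Threshold=300.0,
    ComparisonOperator="GreaterThanThreshold",
)

# 3. Put the metric on a dashboard.
cw.put_dashboard(
    DashboardName="migration-dashboard",
    DashboardBody=json.dumps({
        "widgets": [{
            "type": "metric", "x": 0, "y": 0, "width": 12, "height": 6,
            "properties": {
                "metrics": [["Migration/Demo", "ReplicationLagSeconds"]],
                "period": 60, "stat": "Average", "region": "eu-west-1",
                "title": "Replication lag",
            },
        }],
    }),
)
```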
Benefits of Amazon CloudWatch Dashboard
Amazon CloudWatch dashboards provide a number of benefits. They let customers monitor and analyze metrics and logs across multiple AWS services and resources, help them troubleshoot issues quickly by surfacing how their applications are performing, and support optimization by providing near-real-time visibility into that performance.
Centralizing Configuration Management Using AWS Systems Manager
AWS Systems Manager helps customers to centralize the management of their configurations across multiple AWS accounts and regions. With AWS Systems Manager, customers can use a single interface to configure, secure, and audit their Amazon EC2 instances. Additionally, customers can use AWS Systems Manager to automate patching, security updates, and configuration changes.
Using AWS Systems Manager for Configuration Management
Using AWS Systems Manager for configuration management involves the following steps (a short sketch follows the list):
- Set up managed instances: Amazon EC2 instances must run the SSM Agent so that Systems Manager can manage their configurations.
- Configure AWS Systems Manager: Customers can configure AWS Systems Manager to manage their configurations.
- Integrate with AWS Identity and Access Management (IAM): Customers can integrate AWS Systems Manager with IAM to ensure that users and applications only have access to the resources they need.
- Secure the configurations: Customers can use AWS Systems Manager to secure the configurations by encrypting the data and creating access control rules.
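As a small sketch of the automation piece, the snippet below creates a State Manager association that keeps the SSM Agent current on tagged instances; the tag and schedule are assumptions.

```python
import boto3

ssm = boto3.client("ssm")

# Keep the SSM Agent up to date on all instances tagged Environment=prod.
# Tag values and the cron schedule are placeholders for this sketch.
ssm.create_association(
    Name="AWS-UpdateSSMAgent",  # AWS-managed document
    Targets=[{"Key": "tag:Environment", "Values": ["prod"]}],
    ScheduleExpression="cron(0 2 ? * SUN *)",  # weekly, Sunday 02:00 UTC
    AssociationName="weekly-ssm-agent-update",
)
```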
Benefits of Centralizing Configuration Management
Centralizing configuration management with AWS Systems Manager has a number of benefits. It helps customers reduce complexity and time associated with managing configurations. Additionally, it allows customers to automate configuration changes and security updates, helping them ensure that their applications are secure and compliant. Finally, it helps customers reduce cost by optimizing the Amazon EC2 instance utilization.
Improve Your Security Posture with AWS Control Tower and AWS Security Hub Integration
AWS Control Tower is a service that provides customers with a secure and compliant landing zone for their applications. With the integration of AWS Control Tower and AWS Security Hub, customers can now enable and operate Security Hub detective controls directly from AWS Control Tower, and monitor and track the status of those controls in a single, unified dashboard.
Detecting Control Operations with AWS Control Tower
AWS Control Tower detects control operations performed on the Security Hub detective controls through the following steps (a short sketch of checking enabled controls follows the list):
- Enable AWS Control Tower: Customers can enable AWS Control Tower in their AWS account to start monitoring the Security Hub detective controls.
- Configure AWS Security Hub: Customers enable the Security Hub detective controls they want AWS Control Tower to govern.
- Monitor the Security Hub detective controls: AWS Control Tower will detect and report the status of the Security Hub detective controls.
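As the short sketch referenced above, the snippet below lists which controls are enabled on an organizational unit via the AWS Control Tower API; the OU ARN is a placeholder, and the API surface should be verified against the current SDK.

```python
import boto3

ct = boto3.client("controltower")

# List controls enabled on an organizational unit (OU ARN is a placeholder).
resp = ct.list_enabled_controls(
    targetIdentifier="arn:aws:organizations::123456789012:ou/o-example/ou-xxxx-yyyyyyyy"
)
for control in resp["enabledControls"]:
    print(control["controlIdentifier"])
```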
Benefits of the Integration
The integration between AWS Control Tower and AWS Security Hub provides a number of benefits. It allows customers to monitor the status of their Security Hub detective controls in a single, unified dashboard. Additionally, it helps customers to quickly detect security issues and respond to them in a timely manner. Finally, it helps customers reduce complexity and cost associated with monitoring the security posture of their applications.
At KeyCore, our certified AWS consultants have the experience and expertise to help you migrate to the AWS Cloud and optimize your security posture. Our team can assist you with setting up the automated patching process, migrating your Spring Cloud applications, creating Amazon CloudWatch dashboards, and centralizing your configuration management. We can also help you integrate AWS Control Tower and AWS Security Hub to monitor the status of your detective controls. Contact us today to learn more.
Read the full blog posts from AWS
- Migrating and automating patching at scale with AWS Application Migration Service
- Approach to migrate Spring Cloud microservices applications to Amazon EKS
- Creating a near-realtime dashboard on Amazon CloudWatch for a Migration use case
- Centralizing configuration management using AWS Systems Manager
- Improve your security posture with AWS Control Tower and AWS Security Hub integration
AWS for Industries
AWS for Industries: Generative AI, ConnectedFresh, Manufacturing, EnergySys Software, and Patient Safety Intelligence
Generative AI for Financial Services
Generative artificial intelligence (AI) is a type of AI that can create new content and ideas, such as conversations, stories, images, videos, and music. It is powered by very large machine learning (ML) models, including large language models (LLMs) such as GPT-3. AWS is an ideal platform for training these models because of its scalability and agility, and it provides tools like Amazon SageMaker and Amazon Elastic Compute Cloud (Amazon EC2) to help developers build and deploy generative AI applications. With AWS, developers can quickly build and deploy generative AI applications for financial services, such as chatbots and fraud detection.
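As a rough sketch of the deployment side, once a generative model is hosted on a SageMaker endpoint an application can invoke it as below; the endpoint name and payload format depend entirely on the model deployed and are assumptions here.

```python
import json
import boto3

runtime = boto3.client("sagemaker-runtime")

# Endpoint name and request/response format depend on the deployed model;
# both are placeholders in this sketch.
response = runtime.invoke_endpoint(
    EndpointName="my-text-generation-endpoint",
    ContentType="application/json",
    Body=json.dumps({"inputs": "Summarize this quarterly report:"}),
)
print(json.loads(response["Body"].read()))
```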
ConnectedFresh
ConnectedFresh is a restaurant technology solution built on AWS. It simplifies the process of managing equipment outages and food safety concerns by providing a cloud-native, low-code environment. ConnectedFresh uses AWS services such as Amazon SageMaker and Amazon Elastic Compute Cloud (Amazon EC2), and by leveraging the scalability and agility of the AWS platform it delivers an automated, end-to-end solution that lets restaurant operators monitor their equipment and food safety in real time.
Generative AI in Manufacturing
Generative AI is a powerful tool for manufacturing companies that are looking to improve their operations. It enables manufacturers to generate new ideas, products, and processes, which can help them gain a competitive edge. AWS provides tools such as Amazon SageMaker and Amazon EC2 to help developers quickly build and deploy generative AI applications for manufacturing.
EnergySys Software
EnergySys is a cloud-native, low-code platform that simplifies data management for oil and gas companies. It uses AWS services such as Amazon SageMaker and Amazon Elastic Compute Cloud (Amazon EC2) to help operators manage their asset data and comply with complex commercial agreements. EnergySys enables operators to quickly and easily deploy its software and monitor their asset data in real time while keeping costs low.
Patient Safety Intelligence
Healthcare organizations can use AWS AI/ML services to improve patient safety intelligence. AWS provides tools such as Amazon SageMaker and Amazon Elastic Compute Cloud (EC2) to help healthcare organizations build and deploy patient safety intelligence applications. These tools can help healthcare organizations quickly develop and deploy applications that can collect, review, and classify patient safety reports in real time. AWS also provides tools such as Amazon Comprehend and Amazon Translate to help healthcare organizations better analyze and interpret patient safety reports.
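As an illustrative sketch of the classification step, Amazon Comprehend can extract entities and sentiment from a free-text safety report; the report text below is invented, and a production pipeline would add de-identification and human review.

```python
import boto3

comprehend = boto3.client("comprehend")

# Invented example text; real reports would be de-identified first.
report = "Patient received the incorrect dosage due to a mislabeled vial."

entities = comprehend.detect_entities(Text=report, LanguageCode="en")
sentiment = comprehend.detect_sentiment(Text=report, LanguageCode="en")

for entity in entities["Entities"]:
    print(entity["Type"], entity["Text"], round(entity["Score"], 2))
print("Sentiment:", sentiment["Sentiment"])
```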
With AWS for Industries, businesses of all sizes and industries can leverage the scalability and agility of the AWS platform to quickly develop and deploy applications for their operations. KeyCore, the leading Danish AWS consultancy, can help businesses take full advantage of AWS for Industries. Our team of experts can help businesses develop and deploy AI and ML applications that improve their operations, and we provide both professional and managed services to help businesses get the most out of AWS for Industries.
Read the full blog posts from AWS
- The Next Frontier: Generative AI for Financial Services
- How We Built This on AWS: ConnectedFresh
- How Generative AI will transform manufacturing
- Ancala Midstream Unlocks Scalability and Agility for Its Oil and Gas Operations Using EnergySys Software
- Improve Patient Safety Intelligence Using AWS AI/ML Services
AWS Messaging & Targeting Blog
Manage Email Deliverability and Incoming Messages at Scale with Amazon SES
Amazon Simple Email Service (SES) is a cloud-based email service that enables businesses to build large-scale email solutions and host multiple domains from one account. With SES, businesses can make sure their emails reach the right inboxes with advanced deliverability tracking, and manage incoming emails at scale. In this blog post, we will walk you through the different features of Amazon SES and how KeyCore can help you get the most out of SES.
How to Track Email Deliverability to Domain Level with Amazon SES
It is important to track email deliverability per domain to ensure your emails are reaching the right inboxes. Amazon SES provides advanced deliverability tracking tools that can help you monitor your emails at the domain level. This is useful for understanding email engagement and troubleshooting email delivery issues.
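A sketch of the underlying setup follows: SES events are published to CloudWatch through a configuration set, with the sending domain as a metric dimension. The names and dimension choices are assumptions for this sketch; the post itself walks through the exact recipe.

```python
import boto3

sesv2 = boto3.client("sesv2")

# Names and dimension choices below are placeholders for this sketch.
sesv2.create_configuration_set(ConfigurationSetName="deliverability-tracking")

sesv2.create_configuration_set_event_destination(
    ConfigurationSetName="deliverability-tracking",
    EventDestinationName="cloudwatch-by-domain",
    EventDestination={
        "Enabled": True,
        "MatchingEventTypes": ["SEND", "DELIVERY", "BOUNCE", "COMPLAINT"],
        "CloudWatchDestination": {
            "DimensionConfigurations": [{
                "DimensionName": "ses:from-domain",
                "DimensionValueSource": "MESSAGE_TAG",
                "DefaultDimensionValue": "unknown",
            }],
        },
    },
)
```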
How to Send Your First Email on SES
Sending your first email on SES can seem complicated. This blog post walks you through sending your first email on SES using the SES console, and provides examples of how you can use the AWS SDK to send emails. AWS’s public documentation provides resources and instructions on how to get started.
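For reference, a minimal first send with the AWS SDK for Python might look like the sketch below; the addresses are placeholders, and both must be verified while the account is in the SES sandbox.

```python
import boto3

sesv2 = boto3.client("sesv2")

# Both addresses are placeholders and must be verified in the SES sandbox.
sesv2.send_email(
    FromEmailAddress="sender@example.com",
    Destination={"ToAddresses": ["recipient@example.com"]},
    Content={
        "Simple": {
            "Subject": {"Data": "Hello from Amazon SES"},
            "Body": {"Text": {"Data": "This is a first test email."}},
        },
    },
)
```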
How to Send Messages to Multiple Recipients with Amazon SES
Customers often ask what the best way is to send messages to multiple recipients at once using SES. This blog post shows you how to determine the best approach for sending messages to multiple recipients while staying within the maximum recipient limit, and discusses best practices for deliverability.
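A common pattern is to send one message per recipient rather than one message addressed to everyone, which keeps recipients’ addresses private and stays well below the per-call recipient limit. A minimal sketch, with placeholder addresses:

```python
import boto3

sesv2 = boto3.client("sesv2")

recipients = ["a@example.com", "b@example.com", "c@example.com"]  # placeholders

# One call per recipient: addresses stay private and each call stays far
# below the per-message recipient limit.
for address in recipients:
    sesv2.send_email(
        FromEmailAddress="sender@example.com",
        Destination={"ToAddresses": [address]},
        Content={"Simple": {
            "Subject": {"Data": "Monthly update"},
            "Body": {"Text": {"Data": "Hello! Here is this month's update."}},
        }},
    )
```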
Manage Incoming Emails at Scale with Amazon SES
Are you looking for an efficient way to handle incoming emails and streamline your email processing workflows? This blog post guides you through setting up SES for incoming emails, including setting up, monitoring, and using receipt rules to optimize email handling.
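As a sketch of the receipt-rule setup, the snippet below stores inbound mail for one address in S3; the rule set, rule, address, and bucket names are placeholders, the bucket must grant SES write access, and email receiving is only available in a subset of SES Regions.

```python
import boto3

ses = boto3.client("ses")  # email receiving lives in the SES v1 API

# Rule set, rule, address, and bucket names are placeholders; the bucket
# needs a policy that lets SES write to it.
ses.create_receipt_rule_set(RuleSetName="inbound-rules")
ses.create_receipt_rule(
    RuleSetName="inbound-rules",
    Rule={
        "Name": "store-support-mail",
        "Enabled": True,
        "Recipients": ["support@example.com"],
        "Actions": [{"S3Action": {"BucketName": "example-inbound-mail"}}],
        "ScanEnabled": True,  # spam/virus scanning
    },
)
ses.set_active_receipt_rule_set(RuleSetName="inbound-rules")
```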
At KeyCore, our AWS experts can help you get the most out of Amazon SES. We provide both professional and managed services for SES with a focus on optimizing deliverability, managing incoming emails and streamlining email processing workflows. Contact us to learn more about how KeyCore can help you make the most of Amazon SES.
Read the full blog posts from AWS
- Amazon SES – How to track email deliverability to domain level with CloudWatch
- How to send your first email on SES
- How to send messages to multiple recipients with Amazon Simple Email Service (SES)
- Manage Incoming Emails at Scale with Amazon SES
AWS Robotics Blog
Overview of Running an SSH Server on AWS RoboMaker
AWS RoboMaker is a fully managed service that enables robotics developers to build, run, scale, and automate simulations without managing any infrastructure. It is an ideal choice for customers looking to reduce the time and cost associated with managing and scaling robotics applications.
During the development cycle, roboticists may need to inspect more deeply what is going on inside a running simulation container. CloudWatch offers important metrics and logs related to simulation jobs, but customers may also want to connect to the container and interact with it directly. To do that, customers can set up an SSH server on AWS RoboMaker.
Setting Up an SSH Server on AWS RoboMaker
To set up an SSH server, customers must first create an Amazon Elastic Compute Cloud (EC2) key pair. The key pair will be used to authenticate the connection and consists of a public key and a private key. Customers can store the private key securely as AWS RoboMaker does not have access to it. The public key will be used to authenticate the SSH connection.
Once the key pair is created, customers can set up the SSH server by providing the public key to the AWS RoboMaker console. Customers can then access the SSH server using the private key. AWS RoboMaker will authenticate the connection using the public key provided.
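As a small sketch of the key pair step, the snippet below creates the pair and stores the private key locally; the key name and file path are assumptions. Providing the public key to RoboMaker then follows the console flow the post describes.

```python
import os
import boto3

ec2 = boto3.client("ec2")

# Create the key pair and keep the private key locally; AWS only stores
# the public half. The key name is a placeholder.
key = ec2.create_key_pair(KeyName="robomaker-ssh")

pem_path = "robomaker-ssh.pem"
with open(pem_path, "w") as f:
    f.write(key["KeyMaterial"])
os.chmod(pem_path, 0o600)  # ssh refuses keys with loose permissions
```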
Benefits of Running an SSH Server on AWS RoboMaker
One of the primary benefits of running an SSH server on AWS RoboMaker is that customers can access the container and interact directly with it. This allows customers to perform deeper inspection and troubleshoot any issues without needing to manage and scale the infrastructure.
Running an SSH server on AWS RoboMaker also provides customers with enhanced security, as customers are able to control who has access to the SSH server. Additionally, customers can customize the SSH server to only allow specific IP addresses.
How KeyCore Can Help With Running an SSH Server on AWS RoboMaker
KeyCore is a leading Danish AWS consultancy that provides both professional services and managed services. Our team of experienced AWS engineers can help you set up and manage an SSH server on AWS RoboMaker, as well as provide more general guidance on setting up and running robotics applications with AWS RoboMaker. To learn more about our offerings, please visit https://www.keycore.dk/.
Read the full blog posts from AWS
AWS Marketplace
Self-Service AWS Marketplace Updates for SaaS-Based Products
AWS Marketplace now provides a self-service experience for sellers, independent software vendors (ISVs), and consulting partners (CPs) to update their software-as-a-service (SaaS) products. In this blog, we’ll cover how to use this capability to update the different features of SaaS-based products in AWS Marketplace.
Using AWS Marketplace to Create Quality Fan Viewing Experiences
In this blog, we’ll summarize the contents of the “Elevate viewer experiences with broadcast innovation” webinar, in which NBCUniversal, a multinational mass media and entertainment conglomerate, and Harmonic, a provider of video streaming solutions, joined to discuss how to innovate broadcasting in the cloud.
To take full advantage of cloud capabilities, NBCUniversal turned to AWS Marketplace. With AWS Marketplace, they were able to increase their agility and speed to market while reducing their operational overhead. NBCUniversal was also able to leverage their existing infrastructure and resources more efficiently, allowing them to scale rapidly as viewership demands increased.
Through their partnership with Harmonic, NBCUniversal was able to take advantage of video-streaming solutions such as encoding, transcoding, and content delivery networks. This allowed them to create a high-quality, seamless fan viewing experience, with low latency and high quality audio and visual components.
At KeyCore, we help our customers to leverage the power of AWS Marketplace. Our team of AWS Certified Solutions Architects have years of experience building and deploying cloud-based solutions that help our customers optimize their business operations. We also offer managed services, allowing us to provide ongoing support and maintenance for our customers’ cloud infrastructure.
If you’re looking to take advantage of the self-service features of AWS Marketplace, or to leverage the power of cloud computing to create a high-quality viewer experience, contact KeyCore today. Our team of experts can help you get the most out of your cloud computing investments.
Read the full blog posts from AWS
- How to self-serve AWS Marketplace updates for SaaS based products
- Peacock creates quality fan viewing experience for iconic sporting events
The latest AWS security, identity, and compliance launches, announcements, and how-to posts.
AWS Security, Identity, and Compliance: New Launches, Announcements, and How-To Posts
AWS is continuously introducing new services, features, and programs to help customers comply with security, identity, and compliance regulations. In this article, we will discuss four of the latest AWS security, identity, and compliance launches, announcements, and how-to posts.
Customer Compliance Guides Now Available on AWS Artifact
Amazon Web Services (AWS) has released Customer Compliance Guides (CCGs) to support customers, partners, and auditors in understanding how compliance requirements from leading frameworks map to AWS service security recommendations. The CCGs contain security guidance mapped to 10 different compliance frameworks and cover over 100 services and features. Customers can select any of the available frameworks and services to get the guidance they need to stay compliant.
AWS Completes Police-Assured Secure Facilities (PASF) Audit in Europe (London) Region
AWS has recently renewed its accreditation for United Kingdom (UK) Police-Assured Secure Facilities (PASF) for Official-Sensitive data in its Europe (London) Region. This demonstrates AWS’s commitment to meeting the heightened expectations of its customers with regards to security and compliance.
Use AWS Private Certificate Authority to Issue Device Attestation Certificates for Matter
This blog post explains how to use AWS Private Certificate Authority (CA) to create Matter device attestation CAs and issue device attestation certificates (DAC). This solution allows device makers to operate their own secure device attestation CAs, leveraging the solid security foundation of AWS Private CA.
CISPE Code of Conduct Public Register Now Has 107 Compliant AWS Services
AWS is continuing to grow the scope of its assurance programs and has announced that 107 services are now certified as compliant with the Cloud Infrastructure Services Providers in Europe (CISPE) Data Protection Code of Conduct. This demonstrates AWS’s commitment to adhere to CISPE requirements and ensures that customers can continue to trust their data will be secure.
At KeyCore, we provide professional and managed services to help customers meet their security, identity, and compliance needs. Our team of AWS Certified Solutions Architects and DevOps engineers can provide expertise in designing and deploying the necessary solutions. Additionally, our highly secure Data Center can help customers comply with regulatory requirements. Contact us today for more information.
Read the full blog posts from AWS
- Customer Compliance Guides now available on AWS Artifact
- AWS completes Police-Assured Secure Facilities (PASF) audit in Europe (London) Region
- Use AWS Private Certificate Authority to issue device attestation certificates for Matter
- CISPE Code of Conduct Public Register now has 107 compliant AWS services
AWS Startups Blog
Selecting the Right Foundation Model for Startups on AWS
When startups build generative artificial intelligence (AI) into their products, selecting the right Foundation Model (FM) is an important step. It impacts user experience, go-to-market strategy, hiring decisions, and profitability. This article will discuss the most impactful aspects to consider when selecting a Foundation Model to meet a startup’s needs.
What is a Foundation Model?
A Foundation Model (FM) is a large machine learning model pre-trained on broad data, which the startup can build upon to add product-specific features and functionality. It provides the underlying capabilities of the product and dictates the capabilities and limitations of the final result. Understanding the FM is essential for startups to create a product in line with their goals and objectives.
Factors to Consider in Selecting an FM
Startups must consider several factors when selecting an FM, such as scalability, performance, cost, security, and compliance.
Scalability
Scalability is a measure of how easily the product can scale up to accommodate more users. Scalability is important to consider, as it determines the potential size of the customer base the product can support. When evaluating an FM, startups should consider the complexity of the scalability model and the scalability limitations of the architecture.
Performance
Performance is a measure of how quickly the product can process requests and respond to user inputs. This is especially important for AI-driven products, as users expect fast response times. Startups should consider the performance of the architecture, as well as the size and complexity of the codebase.
Cost
Cost is another important factor to consider when selecting an FM. Startups must consider the cost of the underlying infrastructure, as well as the cost of hiring the necessary personnel to develop and maintain the product.
Security
Security is a key factor when selecting an FM, as it determines the trustworthiness of the product. Startups should consider the security features built into the architecture, as well as the security protocols in place for access control and data protection.
Compliance
Compliance is an important factor to consider when selecting an FM, as it determines the regulatory requirements and industry standards that must be met. Startups should consider the compliance requirements for the specific industry they are operating in, as well as the compliance tools available to ensure that the product meets the necessary requirements.
How CFOs Can Integrate the Cloud into Their Long-Term Success Strategy
This is the first post in a series focused on the evolving role of the startup CFO. This series will tackle questions such as: What does the role of today’s startup CFO entail and how will it evolve over the lifecycle of a startup? How can CFOs most effectively support the cloud’s increasing dominance within the organization and balance sheet? Can the CFO better navigate—and ultimately enable—the relationship between technical leaders, CTOs, and engineering teams? This post focuses on how startup CFOs can integrate the cloud into their long-term success strategy.
The Impact of the Cloud on the CFO Role
The cloud is changing the way startups operate and the CFO role is no exception. As the cloud continues to become a larger part of the organization’s operations, CFOs must adapt to the changing landscape in order to stay competitive. This means understanding the different cloud models and options available, as well as the associated costs and benefits.
How to Leverage the Cloud to Support Long-term Success
Startup CFOs can leverage the cloud to support long-term success in several ways.
Scalability
The cloud provides unprecedented scalability for startups. This means that CFOs can easily adjust resources to accommodate changes in demand and scale up or down quickly when needed. This helps CFOs ensure that the organization has the resources it needs to remain competitive in the long-term.
Cost Savings
The cloud also provides cost savings for startups. CFOs can take advantage of the cloud’s pay-as-you-go model to reduce costs and shave off unnecessary expenses. This helps CFOs ensure that the organization is running as efficiently as possible and that resources are being used wisely.
Better Security
The cloud also provides better security for startups. CFOs can take advantage of the cloud’s built-in security features to protect the organization’s data and ensure that all customer and employee information remains secure.
KeyCore Services
At KeyCore, we understand the changing role of the CFO and how the cloud can be leveraged to support long-term success. We provide professional services and managed services to help startups integrate the cloud into their long-term success strategies. Our team of experts is available to provide guidance and support throughout the process. Contact us today to learn more about how we can help you leverage the cloud to support long-term success.
Read the full blog posts from AWS
- Selecting the right foundation model for your startup
- How startup CFOs can integrate the cloud into their long-term success strategy
Business Productivity
Optimize Your Business Communication Infrastructure with Amazon Chime SDK Voice Connector
Amazon Chime SDK Voice Connector is a cloud-based service that provides SIP trunking for voice calling, making it a popular choice for businesses and organizations that need to communicate with customers and partners. This service provides scalability, lower costs, and better reliability than on-premise solutions.
What are Call Detail Records (CDRs)?
Call Detail Records (CDRs) are data records that contain information about voice calls made over a network such as duration, time, caller and callee numbers, and other details. These records are used to analyze call patterns and monitor network performance and usage.
How Can CDRs Improve Communication Infrastructure?
CDRs provide valuable insights into communication infrastructure performance. By analyzing CDRs, businesses can identify trends, identify potential opportunities for improvement, and optimize their communication infrastructure. CDRs can also help businesses detect and prevent fraud by detecting suspicious calling patterns.
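Voice Connector CDRs are delivered to an Amazon S3 bucket that you configure. As a rough sketch of working with them, the snippet below counts the CDR objects delivered for a given day; the bucket name and prefix layout are placeholders to be checked against the Chime SDK documentation.

```python
import boto3

s3 = boto3.client("s3")

# Bucket and prefix are placeholders; Voice Connector delivers CDRs to the
# S3 bucket configured for the Amazon Chime SDK.
bucket = "my-voice-connector-cdrs"
prefix = "Amazon-Chime-Voice-Connector-CDRs/json/2023/06/19/"

paginator = s3.get_paginator("list_objects_v2")
count = 0
for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
    count += len(page.get("Contents", []))
print(f"{count} CDR objects delivered under {prefix}")
```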
How Can Amazon Chime SDK Voice Connector Help?
Amazon Chime SDK Voice Connector enables businesses to easily integrate their existing phone systems with the cloud. This integration allows businesses to benefit from the scalability, cost-effectiveness, and reliability of the cloud while taking advantage of features such as real-time analytics and fraud prevention. With Amazon Chime SDK Voice Connector, businesses can analyze their CDRs to identify call trends, optimize their communication infrastructure, and prevent fraud.
KeyCore – Your Partner for Amazon Chime SDK Voice Connector
KeyCore is the leading Danish AWS consulting firm and can help you take advantage of Amazon Chime SDK Voice Connector to optimize your communication infrastructure. Our experienced AWS professionals will provide the expertise and guidance to get the most out of the service. We also offer both professional and managed services to ensure the successful integration of your existing phone system to the cloud.
KeyCore is here to help you make the most of your Amazon Chime SDK Voice Connector experience. Contact us today to learn more.
Read the full blog posts from AWS
Innovating in the Public Sector
EdIndia Foundation Improves Student Success with AWS
EdIndia Foundation is an Indian nonprofit organization that works to improve the quality of education on a large scale. Research conducted by the organization revealed a significant disparity in learning outcomes between students taught by high-performing teachers and those taught by low-performing teachers. In response to this problem, EdIndia decided to use AWS to upskill teachers and ultimately increase student success.
Creating Real-Time Flood Alerts with Cloud Technology
The Latin America and Caribbean region is the second most disaster-prone region in the world and floods are the most common natural disaster in the region. Recently, flooding has become especially severe in Panama. The AWS Disaster Preparedness and Response Team and AWS Partner Grupo TX determined that cloud technology could be effectively utilized to better understand and prepare for flooding, with the ultimate goal of saving lives.
High Performance Computing Drives Research at UBC
Computer scientists at the University of British Columbia (UBC), Dr. Kevin Leyton-Brown and Neil Newman, use artificial intelligence and microeconomic theory in their research. Their work requires large-scale, high-performance computing, and they needed more computing power than their on-premises infrastructure could provide when they began their research into the auction theory behind the 2016 U.S. wireless spectrum auction. To meet this need, the team turned to RONIN, an AWS Partner, and the virtually unlimited infrastructure of the AWS Cloud.
Alzheimer’s Disease Research Portal on AWS
The National Institute on Aging Genetics of Alzheimer’s Disease Data Storage Site (NIAGADS DSS) is powered by AWS and provides access to publicly available datasets for Alzheimer’s disease and related neuropathologies. Created to make Alzheimer’s genetics knowledge more accessible to researchers, NIAGADS contains genomics data on 172,701 samples from 98 datasets and is now 1.3 petabytes (PB) in total size. NIAGADS uses data sharing to promote scientific discovery with a large group of institutions.
IMAGINE 2023 Conference for Education & Government Leaders
AWS is hosting a two-day, no-cost event for leaders from across state and local government and education. The IMAGINE conference takes place July 11-12, 2023 in Sacramento, CA. Leaders will learn how to transform public services for the benefit of students, constituents, and communities. KeyCore can help with attending the conference and informing leaders of the most relevant AWS solutions that can be used to transform public services.
Hartnell College Recovers Critical Systems with AWS After a Cyber Attack
Hartnell College experienced a cyber event in 2022 and used AWS and AWS Partner Ferrilli to recover and rebuild their systems. Leaders in higher education can use the knowledge gained from the Hartnell College experience to better prepare for and prevent future cyber events. KeyCore can provide consulting services around preparing for and recovering from cyber events.
Read the full blog posts from AWS
- How EdIndia Foundation uses AWS to upskill teachers and increase student success
- Creating real-time flood alerts with the cloud
- Accelerating economic research at UBC with high performance computing using RONIN and AWS
- Alzheimer’s disease research portal enables data sharing and scientific discovery at scale
- Register now for the IMAGINE 2023 conference for education, state, and local leaders
- How Hartnell College recovered critical systems using AWS after a cyber attack
The Internet of Things on AWS – Official Blog
Deploy Applications to IoT Devices using AWS IoT Device Management
The Internet of Things (IoT) industry is trending towards devices that are increasingly compatible with the latest standards, interfaces, and protocols. In order for device manufacturers to remain competitive, they must be able to deploy new features, system updates, and security patches to their products in an organized and timely manner.
Using Automated CI/CD Pipelines
Software applications have long been using automated continuous integration and delivery (CI/CD) pipelines to manage this process. Now, it’s possible to use the same pipelines to deploy applications on IoT devices. AWS IoT Device Management helps device manufacturers and system integrators automate the deployment of applications on their IoT devices.
Deploy and Manage IoT Applications with AWS IoT Device Management
The service enables customers to onboard, organize, monitor, and remotely manage IoT devices throughout their lifecycle, even when they are geographically distributed. It also helps manage device fleets and their associated applications.
Using AWS IoT Device Management, customers can define application layers to represent different versions or features of an application, and then track those layers across device fleets. This makes it easier to configure and deploy applications to devices, ensuring that the correct application is installed on each device and that the applications remain up-to-date.
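As a loose sketch of such a deployment, the snippet below creates an AWS IoT job that targets a thing group with an application-deployment document; the thing group ARN, document schema, and package URL are assumptions, and a device-side agent must know how to act on the document and report status.

```python
import json
import boto3

iot = boto3.client("iot")

# Thing group ARN and the job document schema are placeholders; devices
# must run an agent that interprets the document and reports progress.
iot.create_job(
    jobId="deploy-app-v1-2-0",
    targets=["arn:aws:iot:eu-west-1:123456789012:thinggroup/fleet-a"],
    document=json.dumps({
        "operation": "deploy",
        "packageUrl": "https://example.com/app-1.2.0.tar.gz",
        "version": "1.2.0",
    }),
    targetSelection="CONTINUOUS",  # also applies to devices added later
)
```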
How KeyCore Can Help
At KeyCore, our experienced AWS consultants are experts in integrating AWS services into your existing IoT infrastructure. We can help your organization build and maintain secure, automated CI/CD pipelines to deploy applications to your IoT devices. Contact us today to learn more about how we can help you get the most out of AWS IoT Device Management.
Read the full blog posts from AWS
AWS Open Source Blog
Leveraging Open Source Tooling for Kubernetes Multi-Cluster Service Discovery using AWS Cloud Map MCS Controller
What is Kubernetes Multi-Cluster Service Discovery?
Multi-cluster service discovery refers to the process of finding and connecting services across Kubernetes clusters. This can be useful for connecting workloads from different clusters together in order to share data, resources, and services.
Overview of the Open Source Tooling
The open source tooling for multi-cluster service discovery includes the upstream Kubernetes Multi-Cluster Services API (mcs-api) and the open source Amazon Web Services (AWS) Cloud Map MCS Controller (MCS-Controller).
The mcs-api is a Kubernetes API extension that provides a multi-cluster service discovery API. It allows services to be discovered across multiple clusters, connected, and managed in a consistent manner.
The MCS-Controller is a controller that runs on Kubernetes clusters and interacts with the mcs-api to enable service discovery across multiple Kubernetes clusters. The MCS-Controller watches for changes in the mcs-api and takes appropriate action.
Implementing Multi-Cluster Service Discovery using mcs-api and MCS-Controller
To implement multi-cluster service discovery with the open source tooling, you must first install the mcs-api custom resource definitions on the Kubernetes clusters you wish to connect. These define the ServiceExport and ServiceImport resources that mark services for export and represent imported services.
With the mcs-api resources in place, you can deploy the MCS-Controller on the clusters you wish to connect. The controller watches for changes to these resources and takes action accordingly: for example, when a service is exported in one cluster, the MCS-Controller registers it with AWS Cloud Map and makes the corresponding service discoverable in the other clusters.
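As a sketch of the export step, marking a service for export is a matter of creating a ServiceExport resource whose name and namespace match the Service. The manifest below follows the upstream mcs-api group and version and is applied through kubectl from Python for consistency with the other examples; the service name and namespace are placeholders.

```python
import subprocess

# ServiceExport marks an existing Service for export; name and namespace
# must match the Service. Group/version follow the upstream mcs-api.
SERVICE_EXPORT = """\
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceExport
metadata:
  name: demo-service        # must match the Service name
  namespace: demo           # must match the Service namespace
"""

subprocess.run(
    ["kubectl", "apply", "-f", "-"],
    input=SERVICE_EXPORT.encode(),
    check=True,
)
```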
Benefits of Multi-Cluster Service Discovery
Multi-cluster service discovery provides several benefits, including increased reliability, scalability, and availability of services across multiple clusters. With service discovery across multiple clusters, services can be quickly and easily discovered, connected, and managed in a consistent manner. This enables workloads to be distributed across multiple clusters for increased scalability and availability.
KeyCore Can Help
At KeyCore, we provide professional and managed services to help customers with their multi-cluster service discovery needs. Our experienced team of AWS experts can help you deploy mcs-api and the MCS-Controller, as well as other open source tooling, to ensure that your services are distributed across multiple clusters for maximum scalability, reliability, and availability. We also provide support and maintenance services to ensure your services remain available and performant. Contact us today to learn more about how we can help with your multi-cluster service discovery needs.