Summary of AWS blogs for the week of Monday, April 24, 2023

In the week of Monday, April 24, 2023, AWS published 83 blog posts – here is an overview of what happened.

Topics Covered

AWS DevOps Blog

Utilizing DevSecOps for Faster Application Builds with Amazon CodeGuru Reviewer and Bitbucket Pipelines

Integrating Security Controls into CI/CD Workflows with DevSecOps

DevSecOps is a set of practices that integrates security controls into continuous integration and continuous delivery (CI/CD) workflows. The first step in this process is using Static Application Security Testing (SAST) tools, which scan source code for potential security vulnerabilities without executing it. Catching vulnerable code early in the development process helps prevent costly security breaches later.

Using Amazon CodeGuru Reviewer and Bitbucket Pipelines to Implement DevSecOps

Amazon CodeGuru Reviewer is a managed service that supports code-level security testing. It uses machine learning to identify security vulnerabilities and code quality issues in source code. It can be used with Bitbucket Pipelines, a cloud-based CI/CD tool, and this combination makes it straightforward to implement DevSecOps and verify that code is secure before it is deployed.
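
As a hedged illustration of what the integration automates, a full repository analysis can also be triggered directly through the CodeGuru Reviewer API, for example from a pipeline step. In this boto3 sketch, the association ARN, review name, and branch are placeholder assumptions:

```python
import boto3

# Minimal sketch: trigger a full repository analysis with CodeGuru Reviewer.
# The association ARN and branch name below are placeholders for your own values.
codeguru = boto3.client("codegurureviewer")

response = codeguru.create_code_review(
    Name="pipeline-security-scan-001",  # must be unique per code review
    RepositoryAssociationArn="arn:aws:codeguru-reviewer:eu-west-1:123456789012:association/example",
    Type={
        "RepositoryAnalysis": {
            "RepositoryHead": {"BranchName": "main"}
        }
    },
)
print(response["CodeReview"]["State"])  # e.g. "Pending" while the analysis runs
```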

Accelerating Application Builds with Amazon CodeWhisperer

Amazon CodeWhisperer is a powerful generative AI tool that helps users build applications faster by automating common coding tasks. By incorporating CodeWhisperer into their workflow, developers can drastically reduce their development time and produce better results. However, effectively using CodeWhisperer requires a beginner’s mindset and willingness to adopt new methods.

How KeyCore Can Help

At KeyCore, we specialize in providing professional and managed services for AWS. Our team of AWS experts is experienced in working with Amazon CodeGuru Reviewer and Bitbucket Pipelines to build secure, reliable applications quickly. We can also provide guidance on how to use Amazon CodeWhisperer to accelerate application builds. Contact us today to learn more about how we can help you get the most out of DevSecOps and Amazon’s generative AI tools.

Read the full blog posts from AWS

Official Machine Learning Blog of Amazon Web Services

Recent Advances In Machine Learning To Improve Multi-Hop Reasoning And Extend Functionality Of AWS Trainium

Large language models (LLMs) are making tremendous progress in natural language understanding, but they are prone to generating confident but nonsensical explanations, posing a significant obstacle to establishing trust with users. This post introduces a method to incorporate human feedback on incorrect reasoning chains for multi-hop reasoning to improve performance. Additionally, the post covers how to extend the functionality of AWS Trainium with custom operators.

Incorporating Human Feedback for Improved Multi-Hop Reasoning

Incorporating human feedback on incorrect reasoning chains can bridge the gap between what LLMs produce and consistent, trustworthy results. To do this, a framework is needed that can take in feedback from a human evaluator and use it to update a language model. By using feedback from a human evaluator, the model is able to adjust its weights and improve its performance. The framework must also account for the fact that human feedback may be noisy: some feedback may be incorrect or incomplete, so the framework must be able to identify false feedback and discard it.

This process enables the model to learn from rich human feedback, becoming more accurate and reliable. The improved performance can then be used to provide more accurate and trustworthy explanations.

Extending The Functionality Of AWS Trainium With Custom Operators

Deep learning is constantly evolving, and practitioners are continuously creating new models and ways to speed up existing models. Custom operators are one of the methods used to extend the functionality of existing ML frameworks such as PyTorch. An operator is a function that defines how a model should perform a certain operation, such as an activation function or a convolution operation. By using custom operators, developers can add their own custom logic to a model, allowing them to create more powerful and accurate models.
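
To make the idea of an operator concrete, here is a minimal sketch of a custom operator in plain PyTorch, implemented as an autograd function with its own forward and backward pass. This illustrates the general concept only; Trainium-specific custom operators are built with the AWS Neuron SDK rather than this exact mechanism:

```python
import torch

class ClippedReLU(torch.autograd.Function):
    """A custom activation: ReLU clipped at an upper bound of 6."""

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)          # keep the input for the backward pass
        return x.clamp(min=0.0, max=6.0)

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        # Gradient is 1 inside the active range (0, 6), 0 outside it.
        mask = (x > 0) & (x < 6)
        return grad_output * mask.to(grad_output.dtype)

x = torch.randn(4, requires_grad=True)
y = ClippedReLU.apply(x).sum()
y.backward()
print(x.grad)
```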

Using custom operators can significantly improve the performance of a model, and the technique is becoming increasingly common in the ML space. With AWS Trainium and the AWS Neuron SDK, developers can implement their own custom operators and use them in their training workloads, extending the accelerator beyond the operations supported out of the box.

Delivering Your First ML Use Case In 8–12 Weeks

Many executives believe that ML can be applied to any business decision; however, only about half of ML projects make it to production. To help customers move their ML journey from pilot to production, AWS offers support for implementing a first ML use case. The post covers how to implement the use case with Amazon SageMaker and lays out a timeline of 8–12 weeks.

The implementation process involves four steps: data preparation, training, inference, and deployment. During the data preparation stage, the dataset is cleaned, formatted, and split into training and test sets. During the training stage, the ML model is trained on the training dataset. The inference stage is used to evaluate the performance of the model on the test dataset, and the deployment stage puts the model into production.
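
As a hedged sketch of the training and deployment steps, the SageMaker Python SDK lets you express them in a few lines. The entry point script, role ARN, and S3 paths below are placeholder assumptions:

```python
from sagemaker.sklearn.estimator import SKLearn

# Minimal sketch: train a scikit-learn model as a SageMaker training job,
# then deploy it behind a real-time endpoint. All names and paths are placeholders.
estimator = SKLearn(
    entry_point="train.py",                 # your training script
    framework_version="1.0-1",
    instance_type="ml.m5.xlarge",
    instance_count=1,
    role="arn:aws:iam::123456789012:role/SageMakerExecutionRole",
)
estimator.fit({"train": "s3://my-bucket/train", "test": "s3://my-bucket/test"})

predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.m5.large")
print(predictor.endpoint_name)
```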

Running Local Machine Learning Code As Amazon SageMaker Training Jobs With Minimal Code Changes

The Amazon SageMaker Python SDK enables data scientists to run their ML code on Amazon SageMaker training jobs with minimal changes. This helps data scientists to quickly deploy their models to production. The process involves adding a few lines of code to their existing code and making the necessary changes to support the SageMaker environment.

This feature allows data scientists to take advantage of the SageMaker environment without having to rewrite their code. By doing this, they can quickly move their ML projects from development to production.
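
The "few lines of code" the post describes is the SageMaker Python SDK's @remote decorator. The sketch below shows the general shape; the instance type, dependencies file, and function body are placeholder assumptions:

```python
from sagemaker.remote_function import remote

# Minimal sketch: the @remote decorator runs this function as a SageMaker
# training job instead of on the local machine. Dependencies and instance
# type are configurable; values here are placeholders.
@remote(instance_type="ml.m5.xlarge", dependencies="./requirements.txt")
def train_model(learning_rate: float) -> float:
    # ... existing local training code goes here, unchanged ...
    validation_accuracy = 0.93  # placeholder result
    return validation_accuracy

# Calling the function submits a training job and returns its result.
accuracy = train_model(learning_rate=0.01)
print(accuracy)
```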

Performing Intelligent Search Across Emails In Your Google Workspace Using The Gmail Connector For Amazon Kendra

Google Workspace is a set of productivity and collaboration tools including Gmail for Business, Google Drive, Google Docs, Google Sheets, and more. Emails contain a wealth of information, which can be difficult to access and use. The Gmail connector for Amazon Kendra is a feature that makes it easy to search through emails and make use of the information they contain.

The Gmail connector for Amazon Kendra enables organizations to quickly find the exact email they are looking for by searching through the subject and body text. Furthermore, it can also be used to search through attachments such as documents, spreadsheets, and PDFs. This feature simplifies the process of finding the right information in emails and makes it easier for organizations to make use of the information they contain.
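
Once the Gmail data source has been synced into an index, searching is an ordinary Kendra query. A minimal boto3 sketch, with the index ID and query text as placeholders:

```python
import boto3

# Minimal sketch: run a natural-language query against a Kendra index that
# includes a Gmail data source. The index ID is a placeholder.
kendra = boto3.client("kendra")

result = kendra.query(
    IndexId="11111111-2222-3333-4444-555555555555",
    QueryText="Q3 pricing proposal sent to Acme",
)
for item in result["ResultItems"]:
    print(item["Type"], item.get("DocumentTitle", {}).get("Text"))
```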

Dimensionality Reduction Using Amazon SageMaker Data Wrangler

The quality of a dataset is of great importance to the predictive performance of a model. However, large datasets with many features can lead to suboptimal model performance due to the curse of dimensionality. To improve the performance of a model, analysts often have to spend a significant amount of time transforming, cleaning, and engineering features.

Amazon SageMaker Data Wrangler simplifies this process by providing built-in transforms that help analysts quickly reduce the dimensionality of datasets, such as Principal Component Analysis (PCA), alongside tools for transforming and selecting the most predictive features.
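
Data Wrangler exposes this as a point-and-click transform, but the underlying idea is ordinary dimensionality reduction. Here is the same concept in a plain scikit-learn sketch, with synthetic data standing in for a real dataset:

```python
import numpy as np
from sklearn.decomposition import PCA

# Minimal sketch of the concept behind the Data Wrangler transform:
# project a wide feature matrix down to a handful of principal components.
rng = np.random.default_rng(seed=0)
X = rng.normal(size=(1000, 50))   # 1,000 rows, 50 features

pca = PCA(n_components=10)        # keep the 10 most informative directions
X_reduced = pca.fit_transform(X)

print(X_reduced.shape)                         # (1000, 10)
print(pca.explained_variance_ratio_.sum())     # share of variance retained
```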

Identifying Objections In Customer Conversations Using Amazon Comprehend To Enhance Customer Experience Without ML Expertise

To improve customer experience, it is important to identify customer objections in pre- and post-sales conversations. Various channels such as email, live chat, bots, and phone calls are used to communicate with customers, but manually identifying objections in these conversations takes significant time and effort. Amazon Comprehend, a natural language processing (NLP) service that uses ML to find insights in text, simplifies this process by automatically identifying customer objections.

Amazon Comprehend enables organizations to enhance customer experience without the need for ML expertise. It can quickly analyze customer conversations and detect customer objections, allowing customer care teams to respond quickly and effectively. By leveraging Amazon Comprehend, organizations can improve customer satisfaction levels and reduce customer churn.
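
The setup in the post centers on training a custom classifier for objections, but the flavor of the API can be seen in a simpler built-in call. A hedged boto3 sketch using sentiment analysis as a stand-in:

```python
import boto3

# Minimal sketch: analyze a customer utterance with Amazon Comprehend.
# Objection detection in the post uses a custom classifier; detect_sentiment
# shown here is a simpler built-in call with the same request/response style.
comprehend = boto3.client("comprehend")

utterance = "I like the product, but the price is much too high for us."
result = comprehend.detect_sentiment(Text=utterance, LanguageCode="en")

print(result["Sentiment"])        # e.g. "MIXED" or "NEGATIVE"
print(result["SentimentScore"])   # per-class confidence scores
```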

KeyCore Can Help

At KeyCore, we offer professional and managed services to help organizations move their ML journey from pilot to production. Our experienced team of ML experts can help you implement your first ML use case in 8–12 weeks. We also offer services to help you extend the functionality of AWS Trainium with custom operators, identify customer objections with Amazon Comprehend, and reduce the dimensionality of datasets with Amazon SageMaker Data Wrangler.

Whether you are just starting your ML journey or you need help with an existing ML project, our team of ML experts can help you get the most out of your ML projects. Contact us today to learn more about how we can help.

Read the full blog posts from AWS

Announcements, Updates, and Launches

Athena Provisioned Capacity and Step Functions Distributed Map Launches

Athena is a query service that makes it simple to analyze data in Amazon Simple Storage Service (Amazon S3) data lakes and 30 different data sources, including on-premises data sources or other cloud systems, using standard SQL queries. Athena is serverless, and now Amazon has launched the ability to provision capacity to run your Athena queries. This means that customers have more control over the performance of their queries and can improve query execution times.
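
Athena capacity is provisioned in Data Processing Units (DPUs). A hedged boto3 sketch of creating a reservation; the reservation name and DPU count are placeholder assumptions:

```python
import boto3

# Minimal sketch: provision dedicated capacity for Athena queries.
# Reservation name and DPU count are placeholder values; at launch,
# new reservations start at a minimum of 24 DPUs.
athena = boto3.client("athena")

athena.create_capacity_reservation(
    Name="analytics-team-reservation",
    TargetDpus=24,
)
```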

AWS Step Functions, which allows customers to orchestrate large-scale parallel workloads in the cloud, has also been updated with the addition of a distributed map state. Charles Burton, a data systems engineer for the company CyberGRX, was able to refactor his workflow using the new feature, reducing the processing time for Machine Learning (ML) jobs from 8 days to just 56 minutes.

AWS Week in Review

This past week, Amazon CodeCatalyst, a unified software development service, became generally available; Amazon S3 became available on Snowball Edge devices; and version 1.0.0 of AWS Amplify Flutter was released. These new tools help customers more easily develop, manage, and deploy their cloud applications.

AWS Support has also added Korean as a preferred language, in addition to English, Japanese, and Chinese. This allows customers to communicate with AWS Support engineers and agents in Korean for a more tailored support experience.

How KeyCore Can Help

At KeyCore, we provide both professional and managed services to help customers take full advantage of the latest AWS developments. Our team of experienced AWS professionals can help you implement the new features, optimize your infrastructure and resources, and ensure that your applications are running securely and reliably.

We also offer our Managed Services to help you maintain an optimal cloud environment. Our team can help you monitor, troubleshoot, and manage your AWS environment, while providing regular feedback and customized reporting to ensure your applications are running optimally.

If you would like to learn more about how KeyCore can help you with your AWS development and infrastructure needs, please visit our website.

Read the full blog posts from AWS

AWS Smart Business Blog

Is the Cloud Safe for Small and Medium Businesses? Debunking Security Myths

The global workforce is rapidly digitizing. Thanks in part to a pandemic that accelerated the shift to remote work, and to consumers demanding an optimized supply chain, companies of all sizes are adopting cloud computing to modernize their business and compete in our digital world. However, the question on the minds of many is whether the cloud is safe for small and medium businesses.

Cloud Is More Secure Than On-Premises

When it comes to security, cloud computing can be more secure than on-premises solutions: cloud providers constantly update their systems to meet the latest security requirements, and their infrastructure is monitored around the clock by experts. AWS follows the shared responsibility model, in which AWS is responsible for the security of the cloud (the underlying infrastructure), while customers are responsible for security in the cloud, including their own data and access management.

Data Privacy and Regulations

When it comes to data privacy, companies must ensure they comply with the applicable laws and regulations. AWS services support compliance with GDPR, HIPAA, and other applicable regulations, and AWS provides additional services that help customers meet their own compliance obligations. Furthermore, customers can encrypt their data, use multi-factor authentication, and set up access controls to govern who can access their data.

KeyCore Can Help

At KeyCore, our team of certified AWS experts can help small and medium businesses understand cloud security and ensure that their systems are secure and compliant with applicable laws. Our professional services team can help design and implement secure cloud solutions, and our managed services team can help provide ongoing security monitoring and maintenance of cloud systems.

Cloud computing has become an essential part of business operations, and with the help of KeyCore, small and medium businesses can ensure that their cloud solutions are secure and compliant. Contact KeyCore today to learn more about how we can help.

Read the full blog posts from AWS

Official Database Blog of Amazon Web Services

Using Amazon Web Services for Time Series Security and Data Migration

Amazon Web Services (AWS) offers a range of services and products designed to help organizations meet their security and data migration needs. In this roundup, we look at how VMware Carbon Black improves and scales security observability with Amazon Timestream; how to achieve high-performance migrations to Amazon RDS for Oracle; how to optimize costs by scheduling provisioned capacity for Amazon DynamoDB; and how to deploy schema changes in Amazon Aurora MySQL databases with minimal downtime. We also cover migrating billions of records from Oracle data warehouses to Amazon Redshift; best practices and parameter configurations for enhanced performance on Amazon RDS Custom for SQL Server; reducing data archiving costs for compliance by automating Amazon RDS snapshot exports to Amazon S3; migrating data from partitioned tables in PostgreSQL using AWS DMS; deploying Amazon RDS Proxy for SQL Server with IAM authentication; and the new features in AWS DMS 3.5.0. Finally, we look at how DevOcean used Amazon Neptune to build a vulnerability remediation platform for cloud-native applications, and how to set up Always Encrypted with Amazon RDS for SQL Server.

Security Observability with Amazon Timestream

Organizations need to quickly and effectively address potential security threats, but are often limited by the amount of security data generated from logs and events. Amazon Timestream, a fast, secure, and serverless time series database and analytics service, is designed to help with this. It provides the scalability to process trillions of time series events per day. VMware Carbon Black’s integration with Amazon Timestream provides organizations with improved security observability and the scalability to meet their current and future needs.
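
To give a feel for the ingestion side, here is a hedged boto3 sketch of writing one security event to Timestream; the database, table, dimensions, and measure are placeholder assumptions:

```python
import time
import boto3

# Minimal sketch: write one security event as a time series record.
# Database and table names are placeholders.
tsw = boto3.client("timestream-write")

tsw.write_records(
    DatabaseName="security_observability",
    TableName="endpoint_events",
    Records=[{
        "Dimensions": [
            {"Name": "host", "Value": "web-01"},
            {"Name": "event_type", "Value": "process_start"},
        ],
        "MeasureName": "event_count",
        "MeasureValue": "1",
        "MeasureValueType": "BIGINT",
        "Time": str(int(time.time() * 1000)),  # milliseconds since epoch
    }],
)
```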

Migrating to Amazon RDS for Oracle

Many companies are looking to migrate their existing Oracle databases to Amazon Relational Database Service (Amazon RDS) for Oracle. This service is a fully managed commercial database, which makes it easy to set up, operate, and scale databases in the cloud. AWS Database Migration Service (AWS DMS), AWS Schema Conversion Tool (AWS SCT), and Amazon Simple Storage Service (Amazon S3) can be used to help with the migration process.

Optimizing Costs with Amazon DynamoDB

Amazon DynamoDB is a fully managed, serverless, key-value NoSQL database designed to run high-performance applications at any scale. When creating a DynamoDB table, users choose between two capacity modes, on-demand and provisioned, each with its own billing model. To optimize costs, customers can schedule provisioned capacity for predictable peak loads and scale it back down when it is no longer needed.
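
Scheduling provisioned capacity is typically done with Application Auto Scaling scheduled actions. A hedged boto3 sketch; the table name, schedule, and capacity values are placeholder assumptions:

```python
import boto3

# Minimal sketch: scale a table's provisioned write capacity up for a known
# daily peak, using an Application Auto Scaling scheduled action.
autoscaling = boto3.client("application-autoscaling")

autoscaling.register_scalable_target(
    ServiceNamespace="dynamodb",
    ResourceId="table/orders",
    ScalableDimension="dynamodb:table:WriteCapacityUnits",
    MinCapacity=5,
    MaxCapacity=500,
)

autoscaling.put_scheduled_action(
    ServiceNamespace="dynamodb",
    ScheduledActionName="orders-evening-peak",
    ResourceId="table/orders",
    ScalableDimension="dynamodb:table:WriteCapacityUnits",
    Schedule="cron(0 18 * * ? *)",  # every day at 18:00 UTC
    ScalableTargetAction={"MinCapacity": 200, "MaxCapacity": 500},
)
```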

Deploying Schema Changes with Minimal Downtime

Modifying the schema of a SQL database can often be time-consuming and resource-intensive. It can also require long periods of downtime that negatively affect the user experience. To avoid this, customers can use Amazon RDS Blue/Green Deployments for Amazon Aurora MySQL-Compatible Edition with Instant DDL to reduce downtime and make the process of deploying schema changes smoother.

Migrating Billions of Records to Amazon Redshift

Customers are migrating to Amazon Redshift to modernize their data warehouse solutions and save on license, support, operation, and maintenance costs. To ease the migration process, customers can utilize AWS Database Migration Service (AWS DMS), AWS Schema Conversion Tool (AWS SCT), and Amazon Simple Storage Service (Amazon S3).

Configuring Parameters for Enhanced Performance on Amazon RDS for SQL Server

Amazon RDS Custom for SQL Server is a managed database service for legacy, custom, and packaged applications that require access to the underlying operating system and database (DB) environment. To get the most out of the service, customers should consider setting up parameters such as memory, IOPS, and storage to ensure they have the resources they need to achieve enhanced performance.

Reducing Data Archiving Costs for Compliance

Many customers use AWS Backup to automatically create Amazon Relational Database Service (Amazon RDS) and Amazon Aurora database snapshots for long-term archival and to meet compliance requirements. To reduce the cost of archiving this data, customers can automate the export of RDS snapshots to Amazon S3.
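
The core of such an automation is the RDS snapshot export API. A hedged boto3 sketch; all ARNs, bucket, and identifier values are placeholders:

```python
import boto3

# Minimal sketch: export an RDS snapshot to S3 for low-cost archival.
# In practice this call is typically automated, for example with EventBridge
# and Lambda, whenever a new snapshot is created.
rds = boto3.client("rds")

rds.start_export_task(
    ExportTaskIdentifier="orders-db-snapshot-2023-04-24",
    SourceArn="arn:aws:rds:eu-west-1:123456789012:snapshot:orders-db-2023-04-24",
    S3BucketName="my-archive-bucket",
    IamRoleArn="arn:aws:iam::123456789012:role/RdsSnapshotExportRole",
    KmsKeyId="arn:aws:kms:eu-west-1:123456789012:key/example-key-id",
)
```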

Migrating Data from Partitioned Tables in PostgreSQL

Migrating data from PostgreSQL to a data warehouse like Amazon Redshift can pose challenges. AWS Database Migration Service (AWS DMS) can be used to migrate data from PostgreSQL partitioned tables to a single table on the target database.

Deploying Amazon RDS Proxy for SQL Server

Amazon RDS Proxy is a fully managed, highly available database proxy service. Customers can require IAM authentication for connections to an RDS Proxy for SQL Server, so that applications authenticate with short-lived credentials instead of embedding database passwords in code.

New Features in AWS DMS 3.5.0

AWS Database Migration Service (AWS DMS) replication engine version 3.5.0 has been released, and it provides improvements to task logging, new data type support, and AWS service integrations.

Vulnerability Remediation Platform with Amazon Neptune

DevOcean used Amazon Neptune to build a vulnerability remediation platform for cloud-native applications. The platform provides a unified dashboard to manage security events across all the layers of a customer’s cloud applications.

Always Encrypted with Amazon RDS for SQL Server

Customers in a variety of industries (healthcare and life sciences, financial services, and retail, among others) may require a stronger security posture and a solution that allows them to encrypt sensitive data. With Always Encrypted on Amazon RDS for SQL Server, customers can achieve this securely.

KeyCore AWS Professional Services

At KeyCore, we provide professional services and managed services for Amazon Web Services. We specialize in helping customers efficiently migrate their data and optimize their security observability. Our team of experienced consultants can help you navigate the complexities of setting up and maintaining an AWS environment. We provide end-to-end solutions designed to help you protect sensitive data, ensure compliance, and reduce costs. Contact us today to learn more about our services.

Read the full blog posts from AWS

AWS for Games Blog

Using Amazon CloudWatch Internet Monitor for a Better Gaming Experience

Playing video games online can be a lot of fun, but keeping game performance and availability up to par can be difficult. If a game is suffering from poor performance or intermittent availability, gamers may become frustrated. The good news is, with Amazon CloudWatch Internet Monitor, gaming customers can more easily track the health of their online-gaming applications and ensure their players have the best experience.

Overview of Online Gaming Application Architecture

Typically, an online gaming application architecture consists of various components, such as web and application servers, databases, and load balancers. Applications are hosted on Amazon EC2 instances, and the data is stored in a database such as Amazon RDS. The application can be fronted by Amazon CloudFront and Amazon API Gateway, and caching can be implemented with Amazon ElastiCache to reduce latency and improve performance.

Common Issues and Challenges in Monitoring Performance and Availability

Developers and administrators of online gaming applications often face challenges when monitoring their application's performance and availability. These issues include a lack of visibility into user experience, difficulty tracking performance and availability metrics, and difficulty determining whether a problem is related to the application or the network.

Using Amazon CloudWatch Internet Monitor for Improved Performance

Using Amazon CloudWatch Internet Monitor, gaming customers can more easily monitor the performance and availability of their online gaming applications. Internet Monitor uses AWS's global connectivity data to provide near-real-time visibility into internet performance and availability between a customer's AWS-hosted application resources and its end users, without requiring agents or synthetic probes. It provides insights into the user experience, such as total response time, latency, and availability.
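
Setting up a monitor is a single API call against the resources that serve players. A hedged boto3 sketch; the monitor name, VPC ARN, and city-network limit are placeholder assumptions:

```python
import boto3

# Minimal sketch: create an Internet Monitor for the VPC hosting game servers.
# All values are placeholders.
im = boto3.client("internetmonitor")

im.create_monitor(
    MonitorName="game-backend-monitor",
    Resources=[
        "arn:aws:ec2:eu-west-1:123456789012:vpc/vpc-0123456789abcdef0",
    ],
    MaxCityNetworksToMonitor=500,  # cap on client city-network pairs to track
)
```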

Using Amazon CloudWatch Internet Monitor, gaming customers can create alarms to be notified when performance or availability metrics become an issue. Alarms can be configured to alert administrators when the response time, latency, or availability of the application falls below a certain threshold. By monitoring the performance and availability of their applications in real time, organizations can quickly identify issues and take steps to address them.

How KeyCore Can Help

At KeyCore, we have years of experience helping customers develop, manage, and monitor their online gaming applications. Our AWS-certified engineers and consultants can help with everything from setting up Amazon CloudWatch Internet Monitor to adding alarms and performance tracking. We can also provide guidance on how to address any performance or availability issues that might arise. Contact us today to learn more about how we can help you improve the performance and availability of your online gaming applications.

Read the full blog posts from AWS

Microsoft Workloads on AWS

Batch Processing and .NET Development on AWS

Batch processing is a key requirement for many scale-out computing solutions in various industries and domains. To meet these needs, customers are turning to cloud computing to access large amounts of computing resources that are both technically flexible and economically efficient. AWS offers a suite of services to help customers get the most out of their batch processing workloads.

Using Amazon EC2, customers can package their batch-processing workloads into a custom Amazon Machine Image (AMI) and get information about their usage and billing. Additionally, AWS CodeCatalyst helps customers build and deploy their .NET serverless and web applications with ease. With its project blueprints, customers can collaborate on the coding, building, testing, and deployment of their applications in their AWS environments.

KeyCore can help customers maximize their batch-processing workloads on AWS. Our professional services and managed services can help customers get the most out of their workloads, enabling them to scale quickly and efficiently. Our team of AWS-certified experts provides personalized solutions and dedicated support to ensure that customers are getting the most out of their AWS solutions.

Read the full blog posts from AWS

Official Big Data Blog of Amazon Web Services

Connecting Kafka Client Applications Securely to Amazon MSK Clusters From Different VPCs and AWS Accounts

You can now use Amazon Managed Streaming for Apache Kafka (Amazon MSK) to simplify connectivity of your Kafka clients to your brokers, thanks to multi-VPC private connectivity (powered by AWS PrivateLink) and cluster policy support for MSK clusters. Amazon MSK is a fully-managed service that makes it easy for you to build and run applications that use Apache Kafka to process streaming data. With this new feature, you can securely connect your Kafka client applications to an Amazon MSK cluster from different VPCs and AWS accounts.
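
Cross-account access is granted through a cluster policy on the MSK cluster. Below is a hedged boto3 sketch of attaching such a policy; the ARNs, account ID, and action list are placeholder assumptions modeled on the feature's launch documentation:

```python
import json
import boto3

# Minimal sketch: attach a cluster policy that lets another AWS account
# connect to this MSK cluster over multi-VPC private connectivity.
kafka = boto3.client("kafka")

cluster_arn = "arn:aws:kafka:eu-west-1:123456789012:cluster/my-cluster/abcd1234"
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::999999999999:root"},
        "Action": [
            "kafka:CreateVpcConnection",
            "kafka:GetBootstrapBrokers",
            "kafka:DescribeCluster",
            "kafka:DescribeClusterV2",
        ],
        "Resource": cluster_arn,
    }],
}

kafka.put_cluster_policy(ClusterArn=cluster_arn, Policy=json.dumps(policy))
```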

Interacting With Amazon Redshift Serverless Using the Data API

Amazon Redshift is a fast, secure, and fully managed data warehouse that makes it simple and cost-effective to analyze large amounts of data. Tens of thousands of customers use Amazon Redshift to process exabytes of data per day. With the Amazon Redshift Data API, customers can now connect to and interact with Amazon Redshift Serverless clusters using popular programming languages and frameworks such as Java, Node.js, Go, and .NET. This new API makes it easy for customers to build applications that access data stored in Amazon Redshift Serverless clusters.
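
A hedged sketch of the Data API against a serverless workgroup; the workgroup, database, and SQL are placeholder assumptions. Note that the Data API is asynchronous, so results are fetched after the statement finishes:

```python
import boto3

# Minimal sketch: query a Redshift Serverless workgroup via the Data API,
# with no persistent connection or database driver.
data = boto3.client("redshift-data")

run = data.execute_statement(
    WorkgroupName="analytics-wg",
    Database="dev",
    Sql="SELECT event_date, count(*) FROM events GROUP BY 1 ORDER BY 1;",
)

# Poll describe_statement until Status is FINISHED, then fetch results.
desc = data.describe_statement(Id=run["Id"])
if desc["Status"] == "FINISHED":
    rows = data.get_statement_result(Id=run["Id"])["Records"]
    print(rows)
```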

Building a Distributed Data Governance and Control Platform at Scale with Novo Nordisk

This is a guest post co-written with Jonatan Selsing and Moses Arthur from Novo Nordisk. Novo Nordisk, a large pharmaceutical enterprise, partnered with AWS Professional Services to build a scalable and secure data and analytics platform. The platform included distributed data governance and control components and enabled Novo Nordisk to achieve the scalability and compliance requirements needed for their data platform.

Monitoring and Optimizing Cost on AWS Glue for Apache Spark

AWS Glue is a serverless data integration service that makes it simple to discover, prepare, and combine data for analytics, machine learning (ML), and application development. You can use AWS Glue to create, run, and monitor data integration and ETL (extract, transform, and load) pipelines, as well as catalog your assets across multiple data stores. This post shares best practices for monitoring and optimizing the cost of AWS Glue for Apache Spark workloads.

Top Strategies for High Volume Tracing With Amazon OpenSearch Ingestion

Amazon OpenSearch Ingestion is a serverless, auto-scaled, managed data collector that receives, transforms, and delivers data to Amazon OpenSearch Service domains or Amazon OpenSearch Serverless collections. OpenSearch Ingestion is powered by Data Prepper, an open-source, streaming ETL (extract, transform, and load) solution that’s part of the OpenSearch project. When you use OpenSearch Ingestion, you can leverage a number of strategies to enable efficient and effective tracing of high-volume data.

Performing Upserts in a Data Lake Using Amazon Athena and Apache Iceberg

Amazon Athena now supports the MERGE command on Apache Iceberg tables, which allows you to perform inserts, updates, and deletes in your data lake at scale using ACID (Atomic, Consistent, Isolated, Durable)-compliant SQL statements. Apache Iceberg is an open table format for data lakes that manages large collections of files as a single table. With the MERGE command, you can easily perform upserts in your data lake using Amazon Athena and Apache Iceberg.
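
A hedged sketch of an upsert submitted through the Athena API; the database, tables, columns, and output location are placeholder assumptions:

```python
import boto3

# Minimal sketch: run a MERGE (upsert) against an Iceberg table via Athena.
athena = boto3.client("athena")

merge_sql = """
MERGE INTO lake.customers AS t
USING lake.customers_updates AS s
  ON t.customer_id = s.customer_id
WHEN MATCHED THEN UPDATE SET email = s.email, updated_at = s.updated_at
WHEN NOT MATCHED THEN INSERT (customer_id, email, updated_at)
  VALUES (s.customer_id, s.email, s.updated_at)
"""

athena.start_query_execution(
    QueryString=merge_sql,
    QueryExecutionContext={"Database": "lake"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)
```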

Working With Percolators in Amazon OpenSearch Service

Amazon OpenSearch Service is a managed service that makes it easy to deploy, secure, and operate OpenSearch and legacy Elasticsearch clusters at scale in the AWS Cloud. The service eliminates the overhead of self-managed infrastructure by provisioning all the resources for your cluster, launching it, and automatically detecting and replacing failed nodes. With the percolator feature, you can invert the usual search model: instead of matching a query against stored documents, you store queries in an index and match incoming documents against them.
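
A hedged sketch of the percolator flow against a domain endpoint; the endpoint is a placeholder and request signing (e.g. SigV4) is omitted for brevity:

```python
import json
import requests  # authentication/signing omitted for brevity

# Minimal sketch: store a query in an index, then ask which stored
# queries match an incoming document.
endpoint = "https://my-domain.eu-west-1.es.amazonaws.com"
headers = {"Content-Type": "application/json"}

# 1. An index whose "query" field holds stored queries.
requests.put(f"{endpoint}/alerts", headers=headers, data=json.dumps({
    "mappings": {"properties": {
        "query": {"type": "percolator"},
        "message": {"type": "text"},
    }}
}))

# 2. Store a query: "notify me about messages mentioning outages".
requests.put(f"{endpoint}/alerts/_doc/1", headers=headers, data=json.dumps({
    "query": {"match": {"message": "outage"}}
}))

# 3. Percolate a new document to find which stored queries it matches.
resp = requests.post(f"{endpoint}/alerts/_search", headers=headers, data=json.dumps({
    "query": {"percolate": {"field": "query",
                            "document": {"message": "Major outage in eu-west-1"}}}
}))
print(resp.json()["hits"]["total"])
```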

Analyzing Semiconductor Demand with AWS Glue at BMW Group

This is a guest post co-written by Maik Leuthold and Nick Harmening from BMW Group. BMW Group is a multinational car and motorcycle manufacturer that oversees 149,000 employees in over 30 production sites across 15 countries. To process the data from their international supplier network, the BMW Group turned to AWS Glue. AWS Glue is a managed ETL service that makes it easy to build, run, and monitor data integration pipelines.

Building an Amazon QuickSight Asset Catalogue With AWS CDK Based Deployment Pipeline

This is a guest blog post co-written with Corey Johnson from Huron. Having an accurate and up-to-date inventory of technical assets helps ensure an organization can keep track of all their resources. However, creating and maintaining an asset catalogue requires a great deal of engineering effort. To simplify this process, Huron created a solution using AWS Cloud Development Kit (AWS CDK) to deploy an asset catalogue using Amazon QuickSight.

Using Amazon QuickSight as the Primary Data Visualization Tool at Dafiti

This is a guest post by Valdiney Gomes, Hélio Leal, and Flávia Lima from Dafiti. At Dafiti, they wanted to standardize the data visualization tools across their organization, while also taking into account the preferences of their professionals. To achieve this, they decided to use Amazon QuickSight as their primary data visualization tool. Amazon QuickSight is a fast, serverless BI service that makes it easy to create and share interactive dashboards and reports.

Cross-Account Integration Between SaaS Platforms Using Amazon AppFlow

Implementing an effective data sharing strategy that satisfies compliance and regulatory requirements can be complex. To make the process easier, customers can leverage Amazon AppFlow to securely share data between disparate SaaS platforms within their organization or across organizations. With Amazon AppFlow, customers can apply business logic to the data received from the source SaaS platform before pushing it to the destination SaaS platform. Amazon AppFlow also provides a number of features to help customers manage their integration processes.

AWS Recognized As a Challenger in the 2023 Gartner Magic Quadrant for Analytics and Business Intelligence Platforms

AWS has been named a Challenger in the 2023 Gartner Magic Quadrant for Analytics and Business Intelligence (ABI) Platforms. This is a significant improvement from their prior position as a Niche player in the Magic Quadrant for ABI platforms. Gartner evaluated 20 ABI companies based on their Ability to Execute and Completeness of Vision. AWS’s position in the Magic Quadrant is a testament to their success in helping customers build an effective data and analytics platform.

Building a Transactional Data Lake Using Apache Iceberg, AWS Glue, and Cross-Account Data Shares Using AWS Lake Formation and Amazon Athena

Building a data lake on Amazon Simple Storage Service (Amazon S3) provides numerous benefits for an organization. Now, you can use Apache Iceberg, AWS Glue, and AWS Lake Formation to build a transactional data lake. Apache Iceberg is an open table format for data lakes that manages large collections of files as a single table. AWS Glue is a serverless data integration service that makes it simple to discover, prepare, and combine data at scale. AWS Lake Formation allows you to set up and enforce data security, data access control, and data auditing policies across data stored in Amazon S3.

Conclusion

AWS provides a number of services and tools to help customers build secure, scalable, and compliant data and analytics platforms. From multi-VPC private connectivity and cluster policy support for Amazon MSK clusters to Amazon AppFlow for cross-account integration between SaaS platforms, AWS makes it easy for customers to build and maintain effective data and analytics infrastructures. With AWS recognized as a Challenger in the 2023 Gartner Magic Quadrant for Analytics and Business Intelligence Platforms, customers can rest assured that they are making the right choice when choosing AWS for their data and analytics needs.

KeyCore Can Help

At KeyCore, our team of expert AWS consultants can help you design and build an effective data and analytics platform. We provide professional and managed services to help you get the most out of your data platform and ensure it meets your performance, scalability, and compliance requirements. Contact us today to learn more about how we can help you get the most out of your data and analytics platform.

Read the full blog posts from AWS

Networking & Content Delivery

AWS Verified Access and Transit Gateway: Best Practices for Migration

AWS Verified Access is now generally available (GA), giving customers the ability to provide VPN-less, secure access to their corporate applications. The service is built on zero trust principles and allows customers to reduce the risks associated with granting access to their corporate applications.

Migrating From VPC Peering to AWS Transit Gateway

When migrating from Amazon Virtual Private Cloud (VPC) Peering to AWS Transit Gateway, there are certain best practices and considerations to keep in mind. This blog post will provide recommendations and best practices for a seamless migration. It will also detail common networking testing and benchmarking tools such as iPerf, and provide an example walkthrough of the process.

When considering a migration from VPC Peering to AWS Transit Gateway, the first step is to double-check that all of the requirements for the new architecture are met. It is important to make sure that any security groups and network access control lists are properly configured and up to date. Monitoring and auditing tools such as VPC Flow Logs, Amazon Inspector, and AWS Trusted Advisor can all be invaluable in ensuring the security of your network.

Next, it is important to review the routing tables in order to identify any potential issues with the migration. It is also a good idea to run a series of network tests such as latency, throughput, or jitter tests in order to benchmark the performance of the new architecture. Common tools for this purpose are iPerf, curl, and tcptraceroute.

After the tests have been completed and the new architecture is deemed satisfactory, the next step is to deploy the AWS Transit Gateway. This can be done using the AWS CLI or an AWS CloudFormation template. Once the Transit Gateway has been deployed, the VPCs and subnets can be attached to it. After the connections have been established, it is recommended to test the network traffic to ensure that the new architecture is performing as expected.
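
For orientation, here is a hedged boto3 sketch of the two core calls; VPC and subnet IDs are placeholders, and route table updates are not shown:

```python
import boto3

# Minimal sketch: create a Transit Gateway and attach one VPC to it.
ec2 = boto3.client("ec2")

tgw = ec2.create_transit_gateway(
    Description="Replacement for VPC peering mesh",
)
tgw_id = tgw["TransitGateway"]["TransitGatewayId"]

# In practice, wait until the gateway's state is "available" before attaching.
ec2.create_transit_gateway_vpc_attachment(
    TransitGatewayId=tgw_id,
    VpcId="vpc-0123456789abcdef0",
    SubnetIds=["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"],
)
```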

When migrating from VPC Peering to AWS Transit Gateway, KeyCore can provide experienced guidance to help make the process easier. Our team of AWS experts can help with the planning and execution of the migration, as well as testing and validation of the new architecture. Our experts also provide ongoing support and management to ensure that the new system is running optimally.

Read the full blog posts from AWS

AWS Compute Blog

How to Leverage AWS Control Tower and AWS Organizations to Manage Data Residency in AWS Local Zones

Introduction

Data residency is a key factor when it comes to managing and deploying workloads in AWS. In this blog post, we discuss best practices for managing data residency in AWS Local Zones using the capabilities of AWS Control Tower and AWS Organizations. We’ll also discuss the new features of AWS Lambda, such as Java 17 support, and how you can optimize Amazon EC2 Spot Instances with Spot Placement Scores. Finally, we’ll look at how to build private serverless APIs with AWS Lambda and Amazon VPC Lattice, and how to implement error handling for AWS Lambda asynchronous invocations.

AWS Control Tower Landing Zone and AWS Organizations

AWS Control Tower provides preventive guardrails, implemented as Service Control Policies (SCPs), to ensure that all accounts created in the AWS Organization adhere to the policies you define. With SCPs, you can set up rules that specify which services can be used, who can use them, and which Regions they can be used in. This enables you to control data residency across all accounts in your AWS Organization.
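
As a hedged sketch of what such a guardrail looks like, here is a region-deny SCP created via the Organizations API. The Region list, exempted global services, and policy name are placeholder assumptions; Control Tower can manage equivalent guardrails for you:

```python
import json
import boto3

# Minimal sketch: an SCP that denies all actions outside approved Regions,
# one common way to enforce data residency.
organizations = boto3.client("organizations")

scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyOutsideApprovedRegions",
        "Effect": "Deny",
        "NotAction": ["iam:*", "organizations:*", "sts:*"],  # global services
        "Resource": "*",
        "Condition": {
            "StringNotEquals": {"aws:RequestedRegion": ["eu-west-1", "eu-central-1"]}
        },
    }],
}

organizations.create_policy(
    Name="data-residency-guardrail",
    Description="Deny actions outside approved Regions",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)
```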

New Features of AWS Lambda

AWS Lambda now supports Java 17, a long-term support (LTS) release. This provides stability and reliability to developers building and running applications on AWS Lambda. The runtime is based on Amazon Corretto, an Amazon distribution of OpenJDK.

Optimizing Amazon EC2 Spot Instances

Amazon EC2 Spot Instances offer cost-effective compute capacity that can be used for a range of workloads. To optimize these Spot Instances, you can use Spot Placement Scores. Spot Placement Scores provide insight into EC2 Spot Instance placements, enabling you to make informed decisions about when and where to use Spot Instances.
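
A hedged boto3 sketch of requesting placement scores for a candidate workload; the instance types, target capacity, and Regions are placeholder assumptions:

```python
import boto3

# Minimal sketch: ask EC2 how likely a Spot request of this shape is to
# succeed in each candidate Region.
ec2 = boto3.client("ec2")

scores = ec2.get_spot_placement_scores(
    InstanceTypes=["c5.large", "c6i.large", "m5.large"],
    TargetCapacity=100,
    TargetCapacityUnitType="units",
    RegionNames=["eu-west-1", "eu-central-1", "us-east-1"],
)

for s in scores["SpotPlacementScores"]:
    print(s["Region"], s["Score"])  # score from 1 (low) to 10 (high)
```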

Building Private Serverless APIs with AWS Lambda and Amazon VPC Lattice

Amazon VPC Lattice is an application networking service that makes it easy to connect, secure, and monitor services, including private serverless APIs built with AWS Lambda. VPC Lattice allows developers to focus on creating customer value and differentiated features instead of complex networking.

Error Handling for AWS Lambda Asynchronous Invocations

AWS Lambda functions allow both synchronous and asynchronous invocations. Synchronous invocations return any unhandled errors in the function code back to the caller, allowing for easier error handling. Asynchronous invocations, on the other hand, require a different approach to error handling. In this blog post, we discuss best practices for handling errors in asynchronous invocations.
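
One common pattern is to configure retries and an on-failure destination for the function. A hedged boto3 sketch; the function name and queue ARN are placeholder assumptions:

```python
import boto3

# Minimal sketch: configure retry behavior and an on-failure destination
# for asynchronous invocations of a function.
lam = boto3.client("lambda")

lam.put_function_event_invoke_config(
    FunctionName="order-processor",
    MaximumRetryAttempts=2,          # retries after the initial attempt
    MaximumEventAgeInSeconds=3600,   # drop events older than one hour
    DestinationConfig={
        "OnFailure": {"Destination": "arn:aws:sqs:eu-west-1:123456789012:failed-events"}
    },
)
```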

KeyCore Can Help

At KeyCore, our team of AWS experts can help you leverage the power of the AWS cloud to get the most out of your data residency strategies. We provide managed services and professional services, including architectural best practices advice, development, and deployment services. Our experienced team can help you manage your data residency requirements on AWS, so you can focus on delivering value to your customers. If you would like to learn more about how KeyCore can help, please get in touch.

Read the full blog posts from AWS

AWS for M&E Blog

Unlock Data-Driven Insights with AWS – How the Seattle Seahawks Use Technology to Dominate Football

Introduction

The National Football League (NFL) is one of the most popular sports leagues in the world and requires teams to be data-driven when making decisions. The Seattle Seahawks have been at the forefront of using data-driven insights to gain an edge and have become one of the most successful NFL teams in recent years. In this blog post we will be exploring how the Seattle Seahawks apply data-driven insights across their franchise using AWS to unlock game-winning strategies.

Using Data to Make Decisions

The Seattle Seahawks have a dedicated Research and Analytics Team that uses data to make decisions about players, coaches, and the entire organization. This team is headed by Patrick Ward, Head of Research and Analytics, and uses data from game footage, player performance metrics, statistical analysis, and more to make decisions about the team. With the help of AWS, the Seahawks have been able to quickly and accurately analyze data from multiple sources and use it to make informed decisions about their roster, game strategy, and overall team performance.

Unlocking Game-Winning Strategies

The Seattle Seahawks use data-driven insights to make the best decisions during the NFL draft. The team has access to vast amounts of data from multiple sources which they use to evaluate potential players and decide who to draft. With the help of AWS, the team is able to quickly and accurately analyze this data and make informed decisions about who to select in the draft. By using data-driven insights, the Seahawks have been able to build a team of successful players and unlock game-winning strategies year-round.

KeyCore’s Role

At KeyCore, we are experienced in leveraging AWS to help organizations make the best decisions when it comes to their data. We can help you unlock game-winning strategies and maximize performance with data-driven insights. Our team of AWS experts has the expertise to help you create the perfect data-driven strategy for your business. With our help, you will have access to the right data and the right tools to make informed decisions and maximize your success. Contact us today to learn more.

Read the full blog posts from AWS

AWS Storage Blog

Simplifying Operations with VMware Cloud on AWS and AWS Backup

As businesses adopt the cloud, they leverage VMware Cloud on AWS to migrate their applications without refactoring them. However, customers still need to protect their applications' data and provide disaster recovery and data migration capabilities in case of disruptive events. To assist with large-scale migrations and limited resources, customers can use AWS Backup in conjunction with VMware Cloud on AWS.

Migrating Mixed File Sizes with the Snow-Transfer-Tool

When moving applications and infrastructure to AWS, organizations often need to migrate data from existing file share environments. This data contains a variety of file sizes, often with a significant percentage of files under 1 MB. In this situation, migration performance can be improved considerably with the Snow-Transfer-Tool (STT) and an AWS Snowball Edge device. STT can take file sizes into account, use multiple parallel network streams, and handle preprocessing and postprocessing of the transfer.

Creating an ETL Pipeline Trigger for Existing AWS DataSync Tasks

Organizations often look to the cloud to analyze their data, produce reports that inform business decisions, and load data sets into extract-transform-load (ETL) pipelines for data processing. In order to ensure that decision makers have accurate reports, it is necessary to have an efficient way to trigger an ETL pipeline. AWS DataSync offers organizations the ability to create an ETL pipeline trigger for existing tasks through the use of AWS Lambda.
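
As a hedged sketch of the event-driven glue this describes, the following creates an EventBridge rule that invokes a Lambda function when a DataSync task execution completes. The event pattern follows the documented "aws.datasync" event source but should be verified against your own events; the rule name, Lambda ARN, and detail fields are placeholder assumptions:

```python
import json
import boto3

# Sketch: trigger a Lambda-based ETL step when a DataSync task execution
# reaches a terminal success state. Names and ARNs are placeholders.
events = boto3.client("events")

events.put_rule(
    Name="datasync-task-complete",
    EventPattern=json.dumps({
        "source": ["aws.datasync"],
        "detail-type": ["DataSync Task Execution State Change"],  # verify for your account
        "detail": {"State": ["SUCCESS"]},
    }),
)

events.put_targets(
    Rule="datasync-task-complete",
    Targets=[{
        "Id": "start-etl",
        "Arn": "arn:aws:lambda:eu-west-1:123456789012:function:start-etl-pipeline",
    }],
)
```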

At KeyCore, we specialize in helping organizations use AWS to simplify operations and reduce costs. Our team of experts can help you leverage AWS Backup and VMware Cloud on AWS, as well as Snow-Transfer-Tool and AWS DataSync, to create an efficient ETL pipeline trigger. Contact us to learn more.

Read the full blog posts from AWS

AWS Developer Tools Blog

Bringing You the Smithy CLI: Introducing the Smithy Team and Their Open-Source Tool

The Smithy team is excited to announce the official release of the Smithy CLI, a command-line tool for working with Smithy, the open-source Interface Definition Language (IDL) for web services created by AWS. Smithy enables developers to collaborate on APIs through its intuitive language syntax and customizable features. Ultimately, Smithy helps developers create, manage, and publish APIs more quickly.

What Does Smithy Do?

Smithy helps developers model services, generate server scaffolding and rich clients in multiple languages, and generate the AWS SDKs. For example, you can define specifications for your API, such as input and output formats, and Smithy will automate the conversion of that into the necessary code. This code can then be used to quickly create a service.

How Does Smithy Help AWS Developers?

Smithy makes it easier and faster for AWS developers to build and maintain APIs. By using Smithy, developers can save time and effort when creating APIs. Additionally, Smithy’s intuitive syntax and customizability features make it easier to collaborate on an API and keep it up-to-date. Finally, Smithy’s automated tooling helps developers generate the AWS SDKs more quickly. Ultimately, Smithy helps AWS developers create, manage, and publish APIs more quickly.

How Can KeyCore Help?

At KeyCore, we understand the challenges of developing and deploying APIs. We have worked with AWS developers to create and maintain APIs quickly and efficiently. Using Smithy, we can help you define specifications for your API, automate the conversion of those specifications into code, and generate the AWS SDKs. We are experienced in taking advantage of the customizability and automation features of Smithy to quickly create services. If you are looking to take advantage of Smithy to create and deploy APIs quickly, please contact us.

Read the full blog posts from AWS

AWS Architecture Blog

Automating Intelligent Document Processing and Containers with AWS

Unlock Insights from Unstructured Data with Automation

Many organizations struggle to effectively manage and derive insights from the large amount of unstructured data locked in emails, PDFs, images, scanned documents, and more. The variety of formats, document layouts, and text makes it difficult for standard Optical Character Recognition (OCR) tools to extract key insights. To help organizations overcome this challenge, AWS provides automated intelligent document processing (AIDP) solutions.

AIDP uses machine learning models to process these documents and extract important data points. It can then transform this information into a structured data format for easier analytics. AWS provides several services to help customers with AIDP, such as Amazon Textract and Amazon Comprehend Medical. Additionally, KeyCore’s team of experts can help customers with the setup and implementation of AIDP solutions.
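
To illustrate the extraction step, here is a hedged boto3 sketch using Amazon Textract on a scanned document in S3; the bucket and object names are placeholders:

```python
import boto3

# Minimal sketch: extract raw text lines from a scanned document in S3
# with Amazon Textract.
textract = boto3.client("textract")

result = textract.detect_document_text(
    Document={"S3Object": {"Bucket": "my-documents", "Name": "invoices/scan-001.png"}}
)

for block in result["Blocks"]:
    if block["BlockType"] == "LINE":
        print(block["Text"])
```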

Harness the Power of Containers in AWS

As more organizations move towards cloud-native applications or modernizing applications, containers are becoming an increasingly popular choice to run microservices applications. AWS provides a wide range of services to help customers harness the power of containers.

Using containers in AWS offers customers several benefits, such as increased portability, scalability, and flexibility. Additionally, AWS services such as Amazon ECS and Amazon EKS provide customers with the tools they need to deploy, manage, and scale containerized applications.

The combination of containers and AWS services provide customers with the tools to deploy applications quickly, securely, and with high availability. AWS and KeyCore can help customers take advantage of the benefits containers offer by providing the necessary services and expertise.

Using AWS and KeyCore’s expertise, customers can take advantage of automated intelligent document processing and containers to unlock the insights from unstructured documents, quickly deploy and scale their applications, and reduce their total cost of ownership.

Read the full blog posts from AWS

AWS Partner Network (APN) Blog

Manage Access, Build and Launch SaaS Solutions, and Take Advantage of APN with AWS

Just-in-Time Least Privileged Access to AWS Administrative Roles with Okta and AWS Identity Center

AWS IAM Identity Center makes it easy to manage access across an organization. Customers can leverage Okta Access Requests and AWS IAM Identity Center to provide just-in-time access to cloud resources. This allows for granting access to developers for a limited time upon approval, thus limiting the active time frame for assignments to AWS resources.

Introducing the Journey to SaaS Guide to Help You Build, Launch, and Operate SaaS Solutions on AWS

The new Journey to SaaS guide is designed to help build, migrate, secure, and optimize SaaS solutions on AWS. It provides a roadmap to identify which stage one is in and the corresponding actions, motivations, questions, pain points, and AWS SaaS Factory resources.

Discover and Re-Imagine Success with the AWS Partner Network

AWS Partner Network provides programs and benefits to help partners of all sizes transform and improve their customer experience. AWS Partners can take advantage of APN programs to help grow their business, reach their target customers, and make the most of the AWS resources and programs available to support continued growth and success.

Designing High-Performance Applications Using Serverless TiDB Cloud and AWS Lambda

TiDB Cloud and AWS Lambda can be used to build scalable, cost-effective, serverless microservices. TiDB Cloud is a cloud-native, open-source distributed SQL database with built-in hybrid transactional and analytical processing (HTAP). Pairing TiDB Cloud with AWS Lambda enables the building of serverless, event-driven microservices, further enhancing the scalability and cost-effectiveness of the overall architecture.

Benefits of Running Virtual Machines on Red Hat OpenShift for AWS Customers

IBM Consulting and AWS experts share the benefits of running VMs on top of Red Hat OpenShift Container Platform on AWS. These benefits include integrated management, migration and modernization, and improved developer productivity. Red Hat OpenShift Virtualization makes it easy to deploy and manage VMs on OpenShift.

At KeyCore, we provide both professional services and managed services to help our customers take advantage of the various features of AWS. We can help our customers to manage access using tools like Okta Access Requests and AWS IAM Identity Center, build and launch SaaS solutions using the Journey to SaaS Guide, discover and re-imagine success with the AWS Partner Network, and design high-performance applications using serverless TiDB Cloud and AWS Lambda. Our team of experienced AWS Certified consultants can help customers build, migrate, secure, and optimize their SaaS solutions on AWS. Contact us today to find out how we can help you get the most out of AWS.

Read the full blog posts from AWS

AWS HPC Blog

How Evolvere Biosciences Performs Macromolecule Design on AWS

Evolvere Biosciences uses a customized architecture consisting of AWS Batch and Nextflow to efficiently and quickly run their macromolecule design pipeline. This blog post explores how this is done and how KeyCore can help.

The Evolution of Drug Discovery

The landscape of drug discovery is constantly changing, and in the age of data science and machine learning, the process is becoming more efficient and cost-effective. Evolvere Biosciences relies on AWS to enable their macromolecule design workflows.

Evolvere Biosciences’ Architecture

Evolvere Biosciences created a custom architecture that combines AWS Batch and Nextflow for their macromolecule design pipeline. AWS Batch enables them to efficiently run thousands of simulations by creating compute environments that can scale on demand. Nextflow allows them to manage their pipelines and set up complex workflows, which simplifies their development and deployment processes.
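
For a sense of the building block underneath, here is a hedged boto3 sketch of submitting one simulation to an AWS Batch queue, the kind of call a workflow engine such as Nextflow makes on your behalf. Queue, job definition, and command are placeholder assumptions:

```python
import boto3

# Minimal sketch: submit one simulation job to an AWS Batch queue.
batch = boto3.client("batch")

job = batch.submit_job(
    jobName="macromolecule-design-run-001",
    jobQueue="protein-design-queue",
    jobDefinition="fold-simulation:3",
    containerOverrides={
        "command": ["python", "simulate.py", "--candidate", "seq_001"],
        "resourceRequirements": [
            {"type": "VCPU", "value": "4"},
            {"type": "MEMORY", "value": "16384"},  # MiB
        ],
    },
)
print(job["jobId"])
```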

The Benefits of AWS

Using AWS for drug discovery has a number of benefits. By leveraging AWS Batch, Evolvere Biosciences is able to scale their compute environments to tackle the challenges of drug discovery. Additionally, Nextflow enables them to create complex workflows that greatly simplify the development and deployment process.

How KeyCore Can Help

At KeyCore, we understand the importance of leveraging the cloud to accelerate drug discovery. Our team of experienced AWS experts can help customers like Evolvere Biosciences create customized architectures to run their workloads efficiently and cost-effectively. We can provide professional services and managed services to help you get the most out of AWS and accelerate your drug discovery.

Read the full blog posts from AWS

AWS Cloud Operations & Migrations Blog

The Latest and Greatest in AWS Cloud Operations & Migrations

Amazon Managed Grafana Version Selection with 9.4 Support

Amazon Managed Grafana has added version selection, now offering support for version 9.4. This gives customers the latest product features, including improvements to navigation, dashboards, and visualizations, with support for a variety of data sources, including AWS services, Prometheus, InfluxDB, and more.

Scale AWS Well-Architected Framework Reviews with the New Consolidated Report

AWS Well-Architected Framework Reviews help identify risks and areas of improvement in customer workloads. With the new Consolidated Report feature, customers can quickly prioritize risks and create an improvement strategy, allowing them to identify and fix problems in their environment faster and improve security, performance, and cost management.

Tracking and Remediating Non-Compliant Resources with AWS Config and Atlassian Jira Service Management

Organizations require their cloud environment to be secure and compliant with governance policies. AWS Config provides customers with resource configuration details and can be used with AWS Config managed rules, custom rules, and conformance packs. AWS has recently added the ability to integrate AWS Config with Atlassian Jira Service Management through automated webhooks, allowing customers to track and remediate non-compliant resources quickly and easily.
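
The raw material for such tickets is Config's compliance data. A hedged boto3 sketch of listing the resources that currently fail a rule; the rule name below is one of AWS's managed rules, used here as a placeholder:

```python
import boto3

# Minimal sketch: list resources that fail a Config rule, the same data a
# Jira integration would use to open remediation tickets.
config = boto3.client("config")

page = config.get_compliance_details_by_config_rule(
    ConfigRuleName="s3-bucket-public-read-prohibited",
    ComplianceTypes=["NON_COMPLIANT"],
)

for result in page["EvaluationResults"]:
    qualifier = result["EvaluationResultIdentifier"]["EvaluationResultQualifier"]
    print(qualifier["ResourceType"], qualifier["ResourceId"])
```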

How KeyCore Can Help

KeyCore provides a full range of AWS professional services and managed services that can help customers with the latest cloud operations and migrations. Our team of AWS Certified solutions architects and DevOps engineers have the experience and expertise to help customers optimize their cloud environment, take advantage of the latest features, and maintain compliance. Contact us today to learn more.

Read the full blog posts from AWS

AWS for Industries

AWS for Industries: Geo-based Real Time Marketing for Financial Services and Increased Scalability for Epic Database Performance

Financial Services Institutions (FSIs) are looking for ways to increase their transactions through their channels, partners’ channels, and payment methods. Amazon Web Services, Inc. (AWS) provides an opportunity to do so. Forrester asserts that aggregators and intermediaries that create value across industries will become important agents in a diverse collaborative ecosystem. However, not all organizations have the capability to do so.

AWS has also increased the scalability of Epic database performance. AWS now supports operational database workloads of up to 42 million GRefs/s – a 61% increase from the previous AWS GRefs/s sizing announcement. This step-change in scalability is delivered on the R6in instance, allowing customers to deliver improved performance and scalability to their applications.

In Europe, AWS has collaborated with WindEurope and Accenture to streamline wind permitting. Wind energy is essential to Europe's energy security strategy, and it already supplies 17% of the continent's electricity. The European Commission wants wind to provide half of Europe's electricity by 2050, but the backlog in wind permitting has complicated the move to clean and renewable energy.

Thanks to AWS, WindEurope, and Accenture, a new permitting platform has been developed that helps stakeholders across Europe speed up permits. The platform uses AWS services such as Amazon Elastic Kubernetes Service (Amazon EKS), Amazon Elastic Compute Cloud (Amazon EC2), and AWS Lambda. This allows for improved communication and coordination between stakeholders, as well as faster and more consistent permitting decisions.

At KeyCore, we can provide professional services to help FSIs leverage the capabilities of AWS to increase their transactions, or provide managed services to help customers take advantage of the increased scalability of Epic database performance. In addition, we can help customers implement the new permitting platform in Europe to speed up wind permitting decisions and the move to clean energy.

Read the full blog posts from AWS

The latest AWS security, identity, and compliance launches, announcements, and how-to posts.

The Latest AWS Security, Identity, and Compliance Launches, Announcements, and How-To Posts

AWS has achieved an AAA Pinakes rating for Spanish financial entities, a prestigious certification that covers 166 services across 25 global AWS Regions. Along with this achievement, we interviewed Tatyana Yatskevich, Principal Solutions Architect for AWS Identity, to discuss her experience at AWS and the important role of identity in security.

Tatyana Yatskevich – Principal Solutions Architect for AWS Identity

Tatyana has worked at AWS for many years; in her current role she builds identity solutions for customers while helping to keep them secure. On the significance of identity in security, Tatyana explains that identity is foundational for any organization because it is the mechanism by which users access the resources they need. User identities must be managed securely, as malicious activity on these accounts can have serious implications for the organization.

Tatyana suggests that organizations use a multi-factor authentication (MFA) solution to provide an extra layer of security for user accounts. Organizations should also consider automated access management controls to ensure that only authorized users can reach the resources they need, which reduces the risk of unauthorized access to sensitive data or systems. Lastly, Tatyana recommends role-based access control (RBAC) so that users can access the resources they need, and nothing more.
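As an illustration of the access hygiene Tatyana describes, the following sketch (ours, not from the interview) uses standard boto3 IAM calls to flag users that have no MFA device registered:

```python
# Audit sketch: list IAM users with no registered MFA device.
import boto3

iam = boto3.client("iam")


def users_without_mfa() -> list[str]:
    """Return the names of IAM users with zero registered MFA devices."""
    missing = []
    for page in iam.get_paginator("list_users").paginate():
        for user in page["Users"]:
            devices = iam.list_mfa_devices(UserName=user["UserName"])
            if not devices["MFADevices"]:
                missing.append(user["UserName"])
    return missing


for name in users_without_mfa():
    print(f"User without MFA: {name}")
```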

How KeyCore Can Help

At KeyCore, our AWS experts are highly experienced in implementing identity solutions and access controls to help ensure your organization’s security. We can help you implement a multi-factor authentication (MFA) solution, automated access management controls, and role-based access control (RBAC) to protect your organization from unauthorized access. Contact us today to learn more about our AWS services and how we can help you protect your organization from security threats.

Read the full blog posts from AWS

AWS Startups Blog

Navigating Uncertain Times with Advice from Bessemer’s Jeff Epstein

The Evolving Role of the Startup CFO

In this article, Jeff Epstein, Operating Partner at Bessemer Venture Partners, shares his perspective to help CFOs navigate and strengthen their relationships with technical leaders, CTOs, and engineering teams.

Epstein believes that the CFO’s role has become increasingly strategic and complex. CFOs are required to have a deep understanding of the business, technology and the market environment in order to optimize their companies’ operations. They need to be able to navigate turbulent times and understand the implications of financial decisions on a company’s long-term success.

To this end, Epstein recommends that CFOs ask their leadership team the right questions. They should understand the company’s business model and make sure that the right metrics are being tracked. CFOs should also be aware of new technologies and how they could be used to drive efficiency and growth. Additionally, they should understand how their decisions could affect the long-term vision of the business.

Epstein also recommends that CFOs build strong relationships with the technical teams. They should understand the engineers’ priorities and ask the right questions to make sure that their investments in technology are well thought out. CFOs should also provide clear guidance to their CTOs on the financial expectations for each project.

Finally, Epstein emphasizes the importance of communication. CFOs should communicate their goals to the leadership team and communicate effectively with their engineers. They should use data to illustrate the impact of decisions made and discuss the trade-offs between investing in technology and pursuing other projects.

At KeyCore, our team of experienced AWS consultants can help CFOs and technical leaders in their journey to optimize their company’s operations. We provide professional services such as cloud architecture design and cost optimization, in addition to managed services such as serverless backups and database security. Our team can also develop custom solutions to help you and your team navigate through today’s digital landscape. Contact us to learn more.

Read the full blog posts from AWS

AWS Contact Center

How to Manage Agent Quality with Contact Lens for Amazon Connect and Investigate API Activity with AWS CloudTrail and Athena

Introduction

Organizations often struggle to get a complete view of their agents’ performance because of the large volume of interactions and the many communication channels customers use, which makes it difficult to analyze the relevant data points. Fortunately, Contact Lens for Amazon Connect lets customers evaluate agent performance and customer satisfaction with the help of AI-powered analytics.

Contact Lens for Amazon Connect

Contact Lens for Amazon Connect is an AI-powered analytics solution that helps improve customer experience and agent performance by analyzing customer interactions in real time. It provides a comprehensive view of both customer and agent sentiment, helping organizations measure customer satisfaction. With Contact Lens for Amazon Connect, customers can easily identify patterns in interactions, categorize events, and uncover trends in customer behavior, then use that data to train agents, improve customer service, and adjust their contact center strategy.
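As a rough illustration, the sketch below calls the Contact Lens real-time analysis API through boto3 and prints per-utterance sentiment for an in-progress contact. The instance and contact IDs are placeholders, and real-time analysis must already be enabled on the contact flow:

```python
# Sketch: print per-utterance sentiment from Contact Lens real-time analysis.
# Instance and contact IDs are placeholders.
import boto3

contact_lens = boto3.client("connect-contact-lens")


def print_sentiment(instance_id: str, contact_id: str) -> None:
    response = contact_lens.list_realtime_contact_analysis_segments(
        InstanceId=instance_id, ContactId=contact_id
    )
    for segment in response["Segments"]:
        transcript = segment.get("Transcript")
        if transcript:  # segments may instead carry category matches
            print(
                f"[{transcript['ParticipantRole']}] "
                f"{transcript['Sentiment']}: {transcript['Content']}"
            )


print_sentiment(
    "11111111-2222-3333-4444-555555555555",  # placeholder instance ID
    "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee",  # placeholder contact ID
)
```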

Investigate API Activity with AWS CloudTrail & Athena

To follow best practices on multi-account strategy, customers often launch and manage their Amazon Connect instances across multiple accounts and Regions, based on product lines, teams, departments, and so on. This allows individual business owners, developers, and engineers to make changes to their own Amazon Connect environments, but it also means customers need a central mechanism to monitor API activity across the organization.

AWS CloudTrail and Amazon Athena give customers the ability to investigate Amazon Connect API activity across their organization. CloudTrail records API activity in a customer’s account, including Amazon Connect calls, and delivers log files to an Amazon S3 bucket. Customers can then use Athena to run SQL queries against the CloudTrail logs and gain insight into API activity across the organization.
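The sketch below shows what such an investigation might look like with boto3: it runs an Athena query that filters CloudTrail events down to Amazon Connect API calls. The database, table name, and S3 output location are placeholders that depend on how your CloudTrail-to-Athena setup is configured:

```python
# Sketch: query CloudTrail logs in Athena for Amazon Connect API calls.
# Database, table, and output location are placeholders.
import time

import boto3

athena = boto3.client("athena")

QUERY = """
SELECT eventtime, eventname, useridentity.arn AS caller, awsregion
FROM cloudtrail_logs
WHERE eventsource = 'connect.amazonaws.com'
ORDER BY eventtime DESC
LIMIT 50
"""

execution = athena.start_query_execution(
    QueryString=QUERY,
    QueryExecutionContext={"Database": "default"},  # placeholder database
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},  # placeholder
)
query_id = execution["QueryExecutionId"]

# Poll until the query finishes, then print the result rows.
while True:
    status = athena.get_query_execution(QueryExecutionId=query_id)
    state = status["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    results = athena.get_query_results(QueryExecutionId=query_id)
    for row in results["ResultSet"]["Rows"][1:]:  # first row is the header
        print([col.get("VarCharValue") for col in row["Data"]])
```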

How KeyCore Can Help

At KeyCore, we provide professional and managed services for customers in all industries. Our team of AWS certified experts can help customers leverage AWS solutions to get the most out of their Amazon Connect contact center. We can assist customers with setting up Contact Lens for Amazon Connect, and provide guidance on how to use Athena and CloudTrail to monitor API activity. To learn more about our services, visit us at KeyCore.dk.

Read the full blog posts from AWS

Innovating in the Public Sector

Customizing Nonprofit Mailings, Georgia DHS’s Modernization Journey, and Preparing CIOs for Digital Assets

Public sector organizations are increasingly turning to cloud-based services to help drive their digital transformation efforts. In this blog post, we explore three examples of how public sector organizations are utilizing the power of the cloud to innovate and make an impact.

Using Machine Learning to Customize Nonprofit Direct Mailings

Direct mailings are an essential tool for many nonprofits, helping to support fundraising efforts or other initiatives that further the organization’s mission. Traditionally, organizations have used everything from Microsoft Word mail merges to third-party mailing providers for their direct mailings. With the cloud, however, organizations gain capabilities such as personalization at scale: through machine learning (ML) techniques on AWS, nonprofits can tailor their direct mailings to drive better outcomes.

Georgia DHS Establishes a Cloud Strategy Through Multiyear Modernization Journey

The Georgia Department of Human Services (GDHS), the largest of the state’s agencies, rapidly scaled its AWS Cloud adoption and digitally transformed its business with AWS Managed Services (AMS). AMS enabled GDHS to modernize its legacy technology onto a hosted platform that meets rigorous security guidelines. At re:Invent 2022, GDHS Chief Information Officer Sreeji Vijayan spoke about the agency’s cloud migration journey; learn key takeaways from GDHS’s experience with AWS and watch the on-demand session to dive deeper.

Research for Public Sector CIOs as they Prepare for Digital Assets

Regulatory agencies in the public sector are dealing with a surge of digital assets in the private sector, such as cryptocurrencies, stablecoins, non-fungible tokens (NFTs), and central bank digital currencies (CBDCs). To provide guidance on this new development, AWS collaborated with industry analyst firm Constellation Research to write a research report on the topic. “The CIO Imperative for Digital Assets in the Public Sector” offers a comprehensive exploration of the topics necessary for CIOs and their teams to understand in preparing for this journey.

Public sector organizations are adopting cloud-based services to take advantage of the many benefits they bring. In this blog post, we explored three examples of how public sector organizations are innovating with the cloud: using machine learning to customize direct mailings, establishing a cloud strategy through a multiyear modernization journey, and researching how CIOs should prepare for digital assets. At KeyCore, we can help public sector organizations understand the opportunities available in the cloud and develop strategies to best leverage cloud-based services.

Read the full blog posts from AWS

The Internet of Things on AWS – Official Blog

Streaming 3D Models with glTF and 3D Tiles, TLS 1.3 Support, and Getting Started with MicroPython

Streaming Large 3D Models in AWS IoT TwinMaker Using glTF and 3D Tiles

Customers who have experienced lengthy wait times when loading 3D scenes in AWS IoT TwinMaker and faced poor rendering performance when navigating complex 3D models can now convert their models into the 3D Tiles standard for efficient streaming in a scene. This blog will discuss how to use glTF and 3D Tiles to reduce wait times and improve performance.

The glTF (GL Transmission Format) file format is an open, royalty-free standard that enables the interchange of 3D assets between applications. glTF is JSON-based and optimized for transmitting 3D models directly over the web with minimal loading time, which makes it a natural input format for streaming large models in AWS IoT TwinMaker.

The 3D Tiles standard is an open specification designed for streaming large 3D content over the internet. It enables 3D models to be streamed efficiently and quickly from the cloud to the visualizer, which renders only the data needed for the current view.

Using glTF and 3D Tiles enables customers to reduce wait times and improve rendering performance when navigating a 3D model in AWS IoT TwinMaker. Additionally, customers can benefit from the open-source nature of the glTF and 3D Tiles formats, allowing more flexibility in the streaming of large models.
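As a rough sketch of the workflow, once a model has been converted to 3D Tiles, the tileset can be uploaded to the TwinMaker workspace bucket and a scene registered against it with boto3. The bucket, workspace, and file names below are placeholders, and the scene document that references the tileset is omitted for brevity:

```python
# Hedged sketch: upload a converted 3D Tiles tileset and register a
# TwinMaker scene. Bucket, workspace, and file names are placeholders.
import boto3

s3 = boto3.client("s3")
twinmaker = boto3.client("iottwinmaker")

WORKSPACE_BUCKET = "my-twinmaker-workspace-bucket"  # placeholder

# Upload the tileset manifest (tile payloads would be uploaded alongside it).
s3.upload_file("out/tileset.json", WORKSPACE_BUCKET, "tiles/factory/tileset.json")

# Register a scene whose content document points at the tileset.
twinmaker.create_scene(
    workspaceId="my-workspace",  # placeholder
    sceneId="factory-floor",
    contentLocation=f"s3://{WORKSPACE_BUCKET}/scenes/factory-floor.json",
)
```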

Introducing TLS 1.3 Support in AWS IoT Core

AWS IoT Core now supports Transport Layer Security (TLS) version 1.3 amongst its transport security options. TLS 1.3 offers customers enhanced security and performance as compared to TLS 1.2. Customers can configure the TLS version for their default Amazon Trust Services (ATS) data plane endpoint and for their custom domain endpoints. This blog will discuss the features of TLS 1.3 and the benefits of using it in AWS IoT Core.

TLS 1.3 offers customers improved security through modern cryptographic protocols. It mandates perfect forward secrecy, which keeps past communications secure even if the server’s private key is later compromised. TLS 1.3 also uses an improved handshake that completes in fewer round trips than TLS 1.2, so encrypted communication starts sooner and the application protocol spends less time waiting on the network.

TLS 1.3 also simplifies the configuration and deployment of the protocol. AWS IoT Core allows customers to configure the TLS version for their default ATS data plane endpoint and for their custom domain endpoints. This makes it easier for customers to use TLS 1.3 to secure their applications and data.
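A minimal sketch of selecting a TLS 1.3 security policy on a custom domain endpoint with boto3 might look like the following. The domain configuration name is a placeholder, and the exact security policy identifier should be verified against the AWS IoT Core documentation:

```python
# Sketch: apply a TLS 1.3 security policy to an AWS IoT Core domain
# configuration. The domain name is a placeholder; verify the security
# policy identifier against the current AWS IoT documentation.
import boto3

iot = boto3.client("iot")

iot.update_domain_configuration(
    domainConfigurationName="my-custom-domain",  # placeholder
    tlsConfig={"securityPolicy": "IoTSecurityPolicy_TLS13_1_3_2022_10"},
)

# Read the configuration back to confirm the applied policy.
response = iot.describe_domain_configuration(
    domainConfigurationName="my-custom-domain"
)
print(response["tlsConfig"]["securityPolicy"])
```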

Using MicroPython to Get Started with AWS IoT Core

Customers who are looking to get started with AWS IoT using the devices and languages they are familiar with can benefit from MicroPython. This blog will discuss how MicroPython can be used to connect a device to AWS IoT Core and create a virtual device.

MicroPython is a lightweight implementation of the Python 3 programming language specifically designed for embedded systems. It enables customers to use the programming language they are familiar with to program their devices for use in the IoT. MicroPython is designed to be easy to use, allowing customers to quickly get started with programming their devices.

To get started with MicroPython and AWS IoT Core, customers can use the tutorials published in the AWS IoT Core Developer Guide. The guide walks through connecting a Raspberry Pi to AWS IoT Core and creating a virtual device, and the same tutorials are a starting point for programming other devices for use with AWS IoT Core.
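As a hedged illustration (not taken from the tutorials themselves), a MicroPython device might connect to the ATS endpoint over mutual TLS and publish a single message as below. The endpoint, client ID, topic, and certificate paths are placeholders, and certificate handling varies between MicroPython ports and versions:

```python
# MicroPython sketch: connect to AWS IoT Core over mutual TLS and publish
# one message. Endpoint, client ID, and file names are placeholders.
from umqtt.simple import MQTTClient

ENDPOINT = "xxxxxxxxxxxxxx-ats.iot.eu-west-1.amazonaws.com"  # placeholder

# Load the device certificate and private key (DER-encoded on many ports).
with open("device.crt.der", "rb") as f:
    cert = f.read()
with open("device.key.der", "rb") as f:
    key = f.read()

client = MQTTClient(
    client_id="my-micropython-device",  # must be allowed by the IoT policy
    server=ENDPOINT,
    port=8883,
    ssl=True,
    ssl_params={"cert": cert, "key": key},
)
client.connect()
client.publish(b"devices/my-micropython-device/telemetry", b'{"temp": 21.5}')
client.disconnect()
```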

By using MicroPython to program their devices, customers can quickly get started with AWS IoT Core and develop their applications faster.

KeyCore: Advanced AWS Consulting

Customers who are looking to take advantage of the latest features in IoT on AWS can benefit from the expertise of KeyCore, the leading Danish AWS consultancy. KeyCore provides professional and managed services to customers looking to develop, deploy, and maintain their applications on AWS.

Our team has deep AWS expertise and stays current with the latest features on the platform, such as TLS 1.3 support in AWS IoT Core, MicroPython, and glTF & 3D Tiles. We can provide customers with the advanced technical solutions, advice, and insights they need to make the most of their applications.

At KeyCore, our team has extensive experience with the Internet of Things and can help customers develop, deploy, and maintain applications on AWS. Our team is available to provide customers with the advice they need and help them take advantage of the latest technologies.

To learn more about KeyCore and our services, visit our website at https://www.keycore.dk.

Read the full blog posts from AWS

AWS Open Source Blog

Zomato Boosts Performance and Cuts Compute Cost with AWS Graviton

Zomato, an online food delivery company, recently migrated their Apache Druid and Trino workloads to AWS Graviton-based instances to boost performance and reduce compute costs. In this blog post, we’ll outline the price/performance benefits of adopting AWS Graviton-based instances for high throughput, near real-time big data analytics workloads.

AWS Graviton: Price/Performance Benefits for Big Data Analytics

AWS Graviton processors are AWS-designed processors built on 64-bit Arm Neoverse cores, and they power a family of Amazon EC2 instances. Graviton-based instances provide up to 40% better price-performance than current-generation x86-based instances for a broad set of workloads, including big data analytics.

Graviton-based instances are well-suited to memory-bound workloads, such as Java-based applications like Apache Druid and Trino. Apache Druid is an open source analytics data store used for fast, interactive queries over large datasets. Trino is an open source distributed SQL query engine, originally forked from Presto, for running interactive analytic queries against data sources of all sizes.
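As a simple illustration of what migrating to Graviton means in practice, launching an arm64 instance is largely a matter of choosing an arm64 AMI and a Graviton instance type. The AMI ID below is a placeholder, and Java workloads such as Druid and Trino additionally need an arm64 build of the JVM:

```python
# Illustrative sketch: launch a Graviton-based (arm64) EC2 instance.
# The AMI ID is a placeholder and must reference an arm64 image.
import boto3

ec2 = boto3.client("ec2")

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder arm64 AMI
    InstanceType="r6g.2xlarge",       # Graviton2, memory-optimized
    MinCount=1,
    MaxCount=1,
)
```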

Zomato Adopts AWS Graviton

Zomato, which runs Apache Druid and Trino, adopted AWS Graviton-based instances for its big data analytics workloads and saw a 25% improvement in performance alongside a 30% decrease in compute costs. These gains stem from the cost-effectiveness of Graviton-based instances, which can cost up to 40% less than x86-based instances for the same performance.

KeyCore Can Help

Businesses that are looking to take advantage of the price/performance benefits of AWS Graviton-based instances can benefit from the expertise of KeyCore. Our AWS consultants have the experience and knowledge to help you migrate your workloads to Graviton-based instances and ensure that your workloads are configured and optimized to take full advantage of the cost savings and performance improvements offered by these instances. Contact us today to get started.

Read the full blog posts from AWS
