Summary of AWS blogs for the week of Monday, Aug 21
In the week of Monday, Aug 21, 2023, AWS published 49 blog posts – here is an overview of what happened.
Topics Covered
- Desktop and Application Streaming
- AWS for SAP
- Official Machine Learning Blog of AWS
- Announcements, Updates, and Launches
- Containers
- AWS Quantum Technologies Blog
- AWS Smart Business Blog
- Official Database Blog of AWS
- AWS Cloud Financial Management
- AWS Training and Certification Blog
- Official Big Data Blog of AWS
- Networking & Content Delivery
- AWS Compute Blog
- AWS for M&E Blog
- AWS Storage Blog
- AWS Partner Network (APN) Blog
- AWS Cloud Enterprise Strategy Blog
- AWS HPC Blog
- The latest AWS security, identity, and compliance launches, announcements, and how-to posts.
- Innovating in the Public Sector
- The Internet of Things on AWS – Official Blog
Desktop and Application Streaming
Track User Processes and Get More Insight into Amazon AppStream 2.0 Sessions
Many customers of Amazon AppStream 2.0 want to track employee usage of specific applications, both to understand the frequency and duration of application use and to optimize licensing costs. Out of the box, AppStream 2.0 provides usage reports that record applications launched from the application catalog. However, it does not track applications launched from desktop shortcuts or from other applications.
Introducing User Process Tracking with AppStream 2.0
To track user processes with AppStream 2.0, you need to make use of the streaming session APIs for your AppStream 2.0 Fleet. These APIs can give you access to the processes running in a streaming session, as well as details about them. You can use this data to track individual processes, as well as to store and analyze the usage data for each employee.
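As a sketch of what the analysis step might look like once process data has been collected (the record format and application names below are hypothetical; in a real fleet the data would come from the streaming session APIs or instrumentation on the fleet instances):

```python
from collections import defaultdict
from datetime import datetime

def summarize_app_usage(process_records):
    """Aggregate total usage duration and launch count per application.

    Each record is assumed to look like:
      {"user": "...", "app": "...", "start": iso8601, "end": iso8601}
    """
    summary = defaultdict(lambda: {"launches": 0, "seconds": 0.0})
    for rec in process_records:
        start = datetime.fromisoformat(rec["start"])
        end = datetime.fromisoformat(rec["end"])
        entry = summary[rec["app"]]
        entry["launches"] += 1
        entry["seconds"] += (end - start).total_seconds()
    return dict(summary)

# Hypothetical process records captured from streaming sessions
records = [
    {"user": "alice", "app": "excel.exe", "start": "2023-08-21T09:00:00", "end": "2023-08-21T09:30:00"},
    {"user": "bob", "app": "excel.exe", "start": "2023-08-21T10:00:00", "end": "2023-08-21T10:15:00"},
    {"user": "alice", "app": "photoshop.exe", "start": "2023-08-21T09:05:00", "end": "2023-08-21T09:20:00"},
]
usage = summarize_app_usage(records)
```

From a summary like this, per-application launch counts and total durations can feed directly into license-cost reviews.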
Deeper Insights into AppStream 2.0 Usage
With user process tracking, you can gain insights into how your users are interacting with the applications available through AppStream 2.0. You can track the number of users accessing each application, how long they are using the application, and the frequency of usage. This data can be used to identify areas for improvement, or to determine the most effective way to deploy new applications. Additionally, access to this data enables you to determine if users are using applications as intended, or if they are running unauthorized applications.
The Benefits of User Process Tracking with AppStream 2.0
User process tracking with AppStream 2.0 enables customers to better understand their usage patterns and optimize their usage costs. Additionally, this data provides visibility into the behavior of users and can inform decisions about how to best deploy applications.
KeyCore Solutions for Amazon AppStream 2.0
KeyCore is the leading Danish AWS consultancy, providing both professional services and managed services. Our team of AWS experts can help you implement user process tracking with AppStream 2.0, configure your fleet, and advise on the best way to store and analyze the usage data. Contact us today to learn more about how we can help you get the most out of Amazon AppStream 2.0.
Read the full blog posts from AWS
AWS for SAP
How to Implement an Event-Driven Architecture with the AWS SDK for SAP ABAP
Omnichannel strategies have become the norm in organizations across industries, creating the challenge of integrating different systems to ensure seamless data flow and near real-time updates. This is especially critical in e-commerce businesses, where customers expect up-to-date inventory information for products they wish to purchase. The lack of access to accurate inventory data can create a suboptimal customer experience and may hurt a business’s bottom line.
Integrating ABAP with AWS
The AWS SDK for SAP ABAP enables ABAP developers to integrate with AWS services. This provides developers with a single development environment that can access AWS services, enabling them to build their own custom solutions. For example, an ABAP developer could create an application that uses a combination of Amazon S3 and Amazon SQS to trigger events when inventory levels drop below a certain threshold.
Advantages of the AWS SDK for SAP ABAP
Using the AWS SDK for SAP ABAP provides a number of advantages. It eliminates the need to learn a separate language to access AWS services, and allows developers to stay in the familiar ABAP environment. Additionally, the SDK provides a way to quickly access data stored in AWS, without the need to manually set up connections. This reduces development time and allows for faster time-to-market.
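To illustrate the kind of event-driven logic described above, here is a minimal Python sketch of building a low-stock event message; the threshold, SKU format, and payload shape are assumptions, and in the SAP scenario the message would be sent to Amazon SQS from ABAP through the AWS SDK for SAP ABAP:

```python
import json

LOW_STOCK_THRESHOLD = 10  # hypothetical threshold for illustration

def build_low_stock_event(sku, quantity, threshold=LOW_STOCK_THRESHOLD):
    """Return an SQS-ready message body when inventory drops below the
    threshold, or None if stock is still healthy."""
    if quantity >= threshold:
        return None
    return json.dumps({
        "eventType": "LOW_STOCK",
        "sku": sku,
        "quantity": quantity,
        "threshold": threshold,
    })

# Inventory drops to 3 units, which is below the threshold of 10
msg = build_low_stock_event("SKU-1234", 3)
```

Downstream consumers (replenishment, e-commerce front ends) can then react to these events asynchronously rather than polling the SAP system.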
How KeyCore Can Help
At KeyCore, our team of AWS experts is well-equipped to help businesses implement AWS SDK for SAP ABAP. We can provide guidance on how to utilize the SDK to achieve the desired outcome, as well as ensure that the integration process is seamless. Whether you’re looking to create an event-driven architecture or need help with any other AWS-related task, we can help you achieve your goals.
Read the full blog posts from AWS
Official Machine Learning Blog of Amazon Web Services
Amazon SageMaker Profiler, Amazon CodeWhisperer, S3 Access Points, Federated Learning, Explainability in Clinical Settings, and Fine-Grained Data Controls
In this blog post, we will be discussing the latest developments in Amazon Web Services Machine Learning. This includes the preview of Amazon SageMaker Profiler, Persistent Systems’ experiments with Amazon CodeWhisperer, Amazon S3 access point support for Amazon SageMaker Data Wrangler, machine learning with decentralized training data using federated learning on Amazon SageMaker, explainability of machine learning models used in medical settings with Amazon SageMaker Clarify, and applying fine-grained data access controls with AWS Lake Formation in Amazon SageMaker Data Wrangler.
Amazon SageMaker Profiler
The Amazon SageMaker Profiler preview provides a detailed view into the AWS compute resources provisioned during training of deep learning models on SageMaker. It tracks all activity on CPUs and GPUs, such as CPU and GPU utilization, kernel runs on GPUs, kernel launches on CPUs, sync operations, memory operations across GPUs, latencies between kernel launches and corresponding runs, and data transfer between CPUs and GPUs.
Persistent Systems & Amazon CodeWhisperer
Persistent Systems, a global digital engineering provider, has been running several pilots and formal studies with Amazon CodeWhisperer. These experiments point to shifts in software engineering, generative AI-led modernization, responsible innovation, and more. This could potentially change software engineering as we know it.
Amazon S3 Access Point Support for Amazon SageMaker Data Wrangler
SageMaker Data Wrangler now supports importing and exporting data from an S3 access point. This allows users to easily import, export, and transform data stored in S3, enabling a more seamless and secure integration of their data pipelines with the rest of their infrastructure.
Federated Learning on Amazon SageMaker
Federated Learning is a technique for running ML with decentralized training data. Amazon SageMaker now allows users to implement this technique, allowing them to build ML models by training on decentralized data sets, without needing to move the data to a centralized location.
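The core idea can be sketched with the classic federated averaging (FedAvg) step, in which each client trains locally and only model parameters, never raw data, are aggregated centrally. This is an illustrative stand-in, not SageMaker's implementation:

```python
def federated_average(client_weights, client_sizes):
    """Weighted average of model parameters (FedAvg).

    client_weights: list of parameter vectors, one per client
    client_sizes:   number of local training samples per client
    """
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    averaged = [0.0] * n_params
    for weights, size in zip(client_weights, client_sizes):
        # Clients with more local data contribute proportionally more
        for i, w in enumerate(weights):
            averaged[i] += w * (size / total)
    return averaged

# Two hypothetical clients: one trained on 100 samples, one on 300
global_model = federated_average([[1.0, 2.0], [3.0, 4.0]], [100, 300])
```

The aggregated parameters form the next global model, which is sent back to the clients for another local training round.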
Explainability in Clinical Settings with Amazon SageMaker Clarify
It’s becoming increasingly important to explain models used in the medical domain from a number of perspectives: medical, technological, legal, and, most importantly, the patient’s. Amazon SageMaker Clarify can be used to improve model explainability in these clinical settings, making it easier for clinicians to make the best choices for individual patients.
Fine-Grained Data Access Controls with AWS Lake Formation in Amazon SageMaker Data Wrangler
SageMaker Data Wrangler now supports using Lake Formation with Amazon EMR to provide fine-grained data access restriction. This allows users to easily control access to data stored in S3 with an additional layer of security, making sure that only authorized users can access the data.
At KeyCore, our AWS consultants can help you make the most of the latest developments in Amazon Web Services Machine Learning. We provide both professional and managed services, and our team of expert consultants can help you identify your use cases, develop and implement custom solutions, and optimize your workflow for maximum efficiency. Contact us today to get started.
Read the full blog posts from AWS
- Announcing the Preview of Amazon SageMaker Profiler: Track and visualize detailed hardware performance data for your model training workloads
- Persistent Systems shapes the future of software engineering with Amazon CodeWhisperer
- Announcing Amazon S3 access point support for Amazon SageMaker Data Wrangler
- Machine learning with decentralized training data using federated learning on Amazon SageMaker
- Explain medical decisions in clinical settings using Amazon SageMaker Clarify
- Apply fine-grained data access controls with AWS Lake Formation in Amazon SageMaker Data Wrangler
Announcements, Updates, and Launches
AWS AppSync, AWS CodePipeline, and More: August 21, 2023
AWS Weekly Roundup brings you news about updates, announcements, and events from the AWS community. This week, AWS is proud to be sponsoring several AWS Community Days in Latin America, and will be hosting AWS AppSync and AWS CodePipeline webinars.
AWS Community Days Latin America
AWS is proud to be sponsoring several AWS Community Days taking place across Latin America. These events are free for all participants and will feature speakers from the AWS team, as well as other members of the AWS community. The events will take place in Peru, Argentina, Chile, and Uruguay, with topics ranging from AWS AppSync and AWS CodePipeline to serverless applications and security best practices.
Webinars on AWS AppSync and AWS CodePipeline
In conjunction with the AWS Community Days, AWS is hosting two webinars on the topics of AWS AppSync and AWS CodePipeline. The AWS AppSync webinar will focus on best practices for building applications, while the AWS CodePipeline webinar will discuss how to create and manage continuous delivery pipelines.
How KeyCore Can Help
At KeyCore, we provide professional and managed services to help our customers design, build, and manage AWS-based applications. Our team is highly experienced in AWS, and we can help you create and manage AWS CodePipeline pipelines, build effective serverless applications, and set up secure environments. Additionally, our team can provide guidance and advice on AWS AppSync and serverless applications, so you can get the most out of your AWS investments. Contact us today to learn more about how we can help.
Read the full blog posts from AWS
Containers
How to Measure the Performance Impact of Amazon GuardDuty EKS Agent
Amazon GuardDuty is a powerful threat detection service that continuously monitors your AWS environment for malicious activity and abnormal behavior. Launched in 2017, it has since grown to include the capability to analyze tens of billions of events per minute across multiple AWS data sources, such as AWS CloudTrail, Amazon VPC Flow Logs and more.
Understanding the Performance Impact
When using Amazon GuardDuty, it is important to consider the potential performance impact on the cluster. The agent, which is responsible for sending data to the Amazon GuardDuty service, can potentially affect the overall performance of your EKS cluster. To determine the performance impact of using the Amazon GuardDuty EKS Agent, it is important to measure and analyze the cluster performance metrics.
Analyzing the Performance Metrics
The first step to understanding the performance impact of the Amazon GuardDuty EKS Agent is to set up a monitoring system to capture and measure the performance metrics. This can be done using Amazon CloudWatch, which allows you to collect and analyze the performance metrics of a variety of AWS services. In the case of EKS, you can use CloudWatch to collect and measure the performance metrics of the nodes and the pods in your cluster. These metrics can then be used to analyze the performance impact of the agent.
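A simple way to quantify the agent's impact from the collected metrics is to compare average utilization before and after enabling it; the sample values below are hypothetical stand-ins for CloudWatch data points:

```python
def average(samples):
    return sum(samples) / len(samples)

def agent_overhead_pct(baseline_cpu, with_agent_cpu):
    """Relative increase in average CPU utilization after enabling the agent."""
    base = average(baseline_cpu)
    agent = average(with_agent_cpu)
    return (agent - base) / base * 100.0

# Hypothetical node CPU utilization samples (percent) from two measurement windows
baseline = [40.0, 42.0, 38.0, 40.0]
with_agent = [42.0, 44.0, 40.0, 42.0]
overhead = agent_overhead_pct(baseline, with_agent)
```

The same comparison can be run for memory and network metrics, and per pod rather than per node, to localize where the agent's cost shows up.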
Identifying the Bottlenecks
Once you have collected and analyzed the performance metrics, you can use them to identify any potential bottlenecks that may be caused by the Amazon GuardDuty EKS Agent. The metrics will allow you to pinpoint the exact areas in which the agent is having an impact, as well as to determine if there are any underlying issues that may be causing the performance impact. By understanding the performance impact of the agent, you can optimize the performance of your EKS cluster.
Mitigating Performance Issues
Once the performance issues have been identified, you can take steps to mitigate them. This could involve optimizing the configuration of the agent, or adjusting the way the agent works to reduce the impact it has on the performance of the cluster. Depending on the specifics of the issue, you may also need to adjust the resources allocated to the cluster, or adjust the scaling parameters to ensure that the cluster is able to handle the load.
KeyCore Can Help
At KeyCore, our team of AWS experts can help you measure and analyze the performance impact of the Amazon GuardDuty EKS Agent, and help you identify and mitigate any performance issues. We can also help you optimize the configuration of the agent and make sure that the cluster is able to handle the load.
Read the full blog posts from AWS
AWS Quantum Technologies Blog
Exploring Low-Level Control of OQC’s Superconducting Quantum Computer with Amazon Braket Pulse
Amazon Braket Pulse lets users of OQC’s superconducting quantum computer take control of low-level analog instructions to optimize performance, develop new protocols, and more. In this article, we’ll take a look at how to use Amazon Braket Pulse and provide best practices.
Overview of Amazon Braket Pulse
Amazon Braket Pulse is a set of tools designed to help users take advantage of low-level control for OQC’s superconducting quantum computer. With Amazon Braket Pulse, users can create and deploy low-level analog instructions for optimal performance, or create custom protocols like error suppression and mitigation.
Getting Started with Amazon Braket Pulse
To get started with Amazon Braket Pulse, users first need access to OQC’s superconducting quantum computer through Amazon Braket. The setup involves selecting the OQC device in Amazon Braket and configuring the software environment. Once the setup is complete, users can begin to use Amazon Braket Pulse to control the low-level analog instructions.
Using Amazon Braket Pulse
Amazon Braket Pulse provides a set of tools designed to make it easy to create and deploy low-level analog instructions. Pulse programs are written with the Amazon Braket SDK using constructs such as frames, ports, and waveforms, and a library of predefined waveforms helps users get up and running quickly. Users can also define their own custom waveforms.
Amazon Braket Pulse Best Practices
When using Amazon Braket Pulse, there are some best practices that users should keep in mind. These include:
- Ensure the hardware is correctly configured and the hardware environment is ready for use.
- Avoid creating unnecessary or complex analog instructions.
- Test analog instructions before deploying them.
- Use the Braket SDK’s pulse constructs (frames, ports, and waveforms) to create and deploy analog instructions.
- Use the predefined waveform library to quickly get up and running.
- Define custom waveforms for specific use cases.
Conclusion
Amazon Braket Pulse provides users of OQC’s superconducting quantum computer with the tools and resources they need to take control of low-level analog instructions. With Amazon Braket Pulse, users can optimize performance, develop new protocols, and more. However, it is important to keep in mind the best practices when using Amazon Braket Pulse.
At KeyCore, we specialize in providing professional and managed services related to Amazon Braket Pulse and OQC’s superconducting quantum computers. Our team of experienced AWS consultants can help you take advantage of the full power of Amazon Braket Pulse. To learn more about KeyCore and our services, visit our website.
Read the full blog posts from AWS
AWS Smart Business Blog
Modernizing Your Business Communication and Collaboration with Cloud Services
Small and medium businesses (SMBs) are feeling the strain of using outdated, on-premises collaboration tools that can hinder productivity and cause miscommunication. Cloud services can help to bridge the gap and modernize collaboration and communication in SMBs.
The Benefits of Cloud Services
Cloud services can provide SMBs with a secure, easy-to-use, and low-cost solution that supports collaboration and communication. They can reduce on-premises costs and reliance on IT staff, freeing SMBs to focus on bigger tasks.
Some of the benefits of using cloud services include automated updates, greater scalability, real-time collaboration, increased uptime, and access to a wide range of software and productivity tools. Cloud services can also provide SMBs with access to the same suite of tools that larger enterprises use.
Outpost VFX’s Cloud Storage Strategy
One example of an SMB successfully leveraging the cloud is Outpost VFX. Outpost VFX is a media and entertainment company that creates visual effects for films and television. Outpost VFX has adopted a cloud-based storage strategy to reduce costs and foster collaboration.
Outpost VFX uses Amazon S3 to store data in the cloud and AWS Snowball Edge devices to transfer large amounts of data between on-premises locations and AWS. By leveraging cloud services, Outpost VFX is able to reduce storage costs and ensure its data remains secure and accessible.
How KeyCore Can Help
At KeyCore, we specialize in providing professional and managed services to help SMBs modernize their collaboration and communication with cloud services. Our team of AWS certified experts can provide you with tailored advice to help you securely store, manage, and share data.
We can help you migrate your data to AWS, develop robust data retention and backup policies, and create a secure environment for collaboration and communication. We can also provide ongoing managed services to help you increase your efficiency and reduce costs.
Contact us today to learn more about how KeyCore can help you modernize your collaboration and communication with cloud services.
Read the full blog posts from AWS
- Modernizing Small Business Communication and Collaboration with Cloud Services
- Inside Outpost VFX’s Strategy for Reducing Storage Costs and Fostering Collaboration in the Cloud
Official Database Blog of Amazon Web Services
Choosing the Right Compute and Storage for Ethereum Nodes on AWS
To make the most of an Ethereum node infrastructure on AWS, it’s important to choose the right compute and storage. To find out which configurations let the most popular Ethereum Execution Layer (EL) clients, such as go-ethereum with LevelDB (Geth) and Erigon, run optimally, AWS ran a series of benchmark tests and compared the results.
Secure Data at Rest on Amazon RDS Custom for Oracle with TDE
Transparent Data Encryption (TDE) can be used to protect data at rest for an Oracle non-multi-tenant database running in Amazon Relational Database Service (Amazon RDS) Custom for Oracle. This two-part series outlines the steps necessary to achieve this security.
In Part 1, we discuss implementation for non-CDB environments. In Part 2, we focus on multi-tenant environments.
Using Amazon DynamoDB to Build an Event-Driven and Scalable Remittance Service
The Amazon Finance Technologies (FinTech) payment transmission team developed a suite of services to handle the disbursement process, from invoice generation to payment creation.
This includes a remittance service built on Amazon DynamoDB. The service makes payments to a diverse range of beneficiaries, optimizing for speed, scalability, and cost-effectiveness. It uses an event-driven architecture that responds asynchronously to changes in payment status.
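As an illustration of the event-driven pattern, here is a minimal sketch of a handler step that detects payment status changes in a DynamoDB Streams record; the `paymentStatus` attribute and the NEW_AND_OLD_IMAGES stream view are assumptions for illustration, not the actual FinTech schema:

```python
def extract_status_change(record):
    """Pull old/new payment status from a DynamoDB Streams record.

    Assumes the stream is configured with NEW_AND_OLD_IMAGES and items
    carry a string attribute named 'paymentStatus' (hypothetical schema).
    """
    old_image = record["dynamodb"].get("OldImage", {})
    new_image = record["dynamodb"].get("NewImage", {})
    old_status = old_image.get("paymentStatus", {}).get("S")
    new_status = new_image.get("paymentStatus", {}).get("S")
    if old_status != new_status:
        return (old_status, new_status)
    return None  # no status transition, nothing to react to

# A trimmed-down sample stream record for a payment that was just disbursed
sample = {
    "eventName": "MODIFY",
    "dynamodb": {
        "OldImage": {"paymentStatus": {"S": "PENDING"}},
        "NewImage": {"paymentStatus": {"S": "DISBURSED"}},
    },
}
change = extract_status_change(sample)
```

In a real deployment this logic would run inside a Lambda function subscribed to the table's stream, fanning out downstream actions per status transition.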
Aurora Global Database Failover on AWS
Amazon Aurora is a MySQL and PostgreSQL-compatible relational database built for the cloud. With Aurora Global Database, users can span their relational databases across multiple regions for increased durability and performance.
This feature is ideal for use cases that require access and availability across multiple regions, such as multi-country applications and disaster recovery.
Introducing Amazon RDS for MariaDB 10.11
Amazon Relational Database Service (Amazon RDS) for MariaDB now supports major version 10.11, the latest long-term supported major version from the MariaDB community.
Thanks to this upgrade, users can now benefit from up to 40% higher transaction throughput compared to previous versions. In addition, Amazon RDS for MariaDB 10.11 provides better scalability, security, and reliability.
Combining AWS and KeyCore for Optimal Performance
KeyCore provides professional services and managed services to help customers get the most out of their AWS investments. Our experienced engineers can help you architect, deploy, and manage your Ethereum node infrastructure on AWS, leveraging the right compute and storage to maximize performance.
We can also help you implement TDE on Amazon RDS Custom for Oracle to secure data at rest. Our team can also help you design, build, and manage an event-driven and scalable remittance service using Amazon DynamoDB, as well as configure Aurora Global Database for your use case.
No matter the project, KeyCore can help you take advantage of the latest features in Amazon RDS for MariaDB 10.11 to achieve optimal performance. Contact us today to learn more.
Read the full blog posts from AWS
- Choose AWS Graviton and cloud storage for your Ethereum nodes infrastructure on AWS
- Secure data at rest on Amazon RDS Custom for Oracle with TDE – Part 2: Multi-tenant environments
- Secure data at rest on Amazon RDS Custom for Oracle with TDE – Part 1: non-CDB environments
- How Amazon Finance Technologies built an event-driven and scalable remittance service using Amazon DynamoDB
- Introducing – Aurora Global Database Failover
- Introducing Amazon RDS for MariaDB 10.11 for up to 40% higher transaction throughput
AWS Cloud Financial Management
Establish a Cloud Financial Management Strategy to Improve Transparency, Control, and Optimization
Ensuring your organization’s cloud financial management (CFM) practices are effective is essential for success in the cloud. A CFM “flywheel” provides a continuous cycle of cost transparency, control, forecasting, and optimization. This blog will guide you through the steps you need to take to establish an effective CFM strategy.
Cost Transparency
Cost transparency is the cornerstone of any successful CFM strategy. Identifying cost drivers and understanding the cost of the services you are using in the cloud is essential for governance and optimization. A great way to increase cost transparency is to use AWS Cost Explorer to gain visibility into your AWS costs and identify cost trends, resource usage, and other data points. As part of a CFM strategy, you will also want to ensure that the right tags are applied to each cloud resource so that you can more easily track, manage, and optimize your costs.
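As a small illustration of the tagging side of cost transparency, the sketch below checks resources against a required set of cost-allocation tags; the tag keys and resource IDs are hypothetical:

```python
REQUIRED_TAGS = {"CostCenter", "Project", "Environment"}  # example tag policy

def missing_cost_tags(resource_tags, required=REQUIRED_TAGS):
    """Return the set of required cost-allocation tags a resource lacks."""
    return required - set(resource_tags)

# Hypothetical tag sets, e.g. as returned by a resource tagging API
resources = {
    "i-0abc": {"CostCenter": "1234", "Project": "web", "Environment": "prod"},
    "i-0def": {"Project": "web"},
}
report = {rid: missing_cost_tags(tags) for rid, tags in resources.items()}
```

Running a check like this regularly keeps untagged spend from silently accumulating outside your cost reports.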
Cost Control
Once you have established a reliable source of cost transparency, it is important to take steps to control costs. Utilizing AWS Reserved Instances and AWS Savings Plans can help you save up to 75% on your cloud infrastructure costs. Furthermore, establishing cost budgets and alerts can help you control spending and ensure that your team is aware of any potential overages.
Cost Forecasting
An effective CFM strategy will also include forecasting. Forecasting allows you to predict future costs and plan accordingly. AWS Cost Explorer can be used to forecast your usage and costs for up to 12 months. This will help you identify potential cost savings opportunities and budget accordingly.
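The idea behind forecasting can be illustrated with a naive trend extrapolation; Cost Explorer's forecasting is considerably more sophisticated, so treat this purely as a sketch of the concept:

```python
def forecast_next_month(monthly_costs):
    """Naive linear forecast: extend the average month-over-month change."""
    if len(monthly_costs) < 2:
        return monthly_costs[-1]
    deltas = [b - a for a, b in zip(monthly_costs, monthly_costs[1:])]
    avg_delta = sum(deltas) / len(deltas)
    return monthly_costs[-1] + avg_delta

# Hypothetical monthly AWS spend in USD
history = [1000.0, 1100.0, 1200.0, 1300.0]
projected = forecast_next_month(history)
```

Even a rough projection like this is enough to set budget alerts ahead of time rather than after an overage.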
Cost Optimization
Cost optimization is the final piece of the CFM puzzle. To optimize costs, you will want to identify opportunities to reduce costs, such as right sizing, idle resource management, and eliminating wasteful spending. It is also important to leverage advanced cloud practices such as automation and serverless computing to optimize costs and reduce operational complexity.
How KeyCore Can Help
At KeyCore, we have extensive experience helping companies establish effective cloud financial management strategies. Our team of AWS experts can help you identify cost drivers, forecast costs, and optimize your cloud spending. Contact us today to learn more about how we can help you optimize your cloud costs and maximize your return on investment.
Read the full blog posts from AWS
AWS Training and Certification Blog
Empowering Career Growth with AWS Training and Certification
In times of crisis, it’s important to provide pathways for self-improvement and career growth. This is why AWS Training and Certification is proud to partner with EPAM and ITSkills4U to help 16,000 people who are living in a war zone or have been displaced gain the skills they need for the future.
Accelerating Paths to Cloud Careers with APU and AWS Academy
AWS Academy has partnered with Asia Pacific University of Technology & Innovation (APU) to assist their students in gaining the skills and expertise they need to land rewarding careers in the cloud. This also helps foster continued learning for APU’s instructors.
New Courses and Certification Updates from August 2023
AWS Training and Certification continues to equip individuals and teams with the skills they need to work with AWS services and solutions. During August 2023, there were 13 new digital training products on AWS Skill Builder, including 5 AWS Builder Labs, AWS Jam Journey: Containers, and a new AWS Networking knowledge badge readiness path. Additionally, AWS opened its first international Skills Center in Cape Town, South Africa, as well as providing sales resources to help AWS partners learn how to sell to public sector customers.
KeyCore: Helping You Reach Your AWS Goals
At KeyCore, we understand the importance of staying up-to-date on the latest AWS certifications and skills. As the leading Danish AWS consultancy, we offer both professional and managed services to help you reach your AWS goals. Whether you need help developing custom solutions or ongoing guidance, our team of experienced professionals is here to help. Contact us today to get started!
Read the full blog posts from AWS
- Empowering career growth in challenging times with ITSkills4U
- APU and AWS Academy accelerate students’ path to cloud careers
- New courses and certification updates from AWS Training and Certification in August 2023
Official Big Data Blog of Amazon Web Services
Amazon OpenSearch Service H1 2023 in review
Since its launch in January 2021, Amazon OpenSearch Service has shipped 14 OpenSearch versions, up to 2.7. OpenSearch Service provides two configuration options to deploy and manage the service at scale. The managed domain approach allows you to specify hardware configurations and set up scaling policies, while the serverless option lets you deploy OpenSearch without managing hardware infrastructure at all.
Automating the archive and purge data process using pg_partman, Amazon S3, and AWS Glue
This post outlines how to use pg_partman, Amazon S3, and AWS Glue to archive and purge data for Amazon RDS for PostgreSQL and Amazon Aurora with PostgreSQL compatibility. pg_partman is a PostgreSQL native range partitioning tool that allows you to partition your hot data, while historical cold data can then be archived in Amazon S3. AWS Glue is then used to crawl the S3 data and enable easy querying of the archived data.
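The retention decision at the heart of the archive-and-purge process can be sketched as simple date arithmetic over monthly partition labels; the `YYYY_MM` label format and three-month window are assumptions for illustration:

```python
from datetime import date

def partitions_to_archive(partition_months, retention_months, today):
    """Given monthly partition labels ('YYYY_MM'), return those older than
    the retention window and due for archival to S3."""
    cutoff_index = today.year * 12 + today.month - retention_months
    stale = []
    for label in partition_months:
        year, month = (int(p) for p in label.split("_"))
        # Compare months on a single linear index to handle year boundaries
        if year * 12 + month <= cutoff_index:
            stale.append(label)
    return stale

parts = ["2023_01", "2023_04", "2023_07", "2023_08"]
stale = partitions_to_archive(parts, retention_months=3, today=date(2023, 8, 21))
```

In the post's pipeline, the stale partitions would be exported to S3 and then detached and dropped from the hot database.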
Monitoring OpenSearch Service Storage and Shard Skew with Amazon CloudWatch Metrics
Amazon CloudWatch metrics can be used to monitor the storage and shard skew of an OpenSearch Service domain. This solution uses an AWS Lambda function to extract storage and shard distribution metadata from the domain, calculate the level of skew, and push the results to CloudWatch metrics. This allows users to easily monitor, alert, and respond to changes in the domain.
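The skew calculation itself can be sketched as follows; this uses one common definition (deviation of the most-loaded node from the mean), which may differ from the exact formula in the post's Lambda function:

```python
def shard_skew_pct(shards_per_node):
    """Percent deviation of the most-loaded node from the mean shard count."""
    counts = list(shards_per_node.values())
    mean = sum(counts) / len(counts)
    return (max(counts) - mean) / mean * 100.0

# Hypothetical shard distribution across three data nodes
nodes = {"node-1": 10, "node-2": 10, "node-3": 16}
skew = shard_skew_pct(nodes)
```

Publishing this number as a custom CloudWatch metric lets you set an alarm on a skew threshold and rebalance before one node becomes a hotspot.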
Using the Vector Engine for Semantic Search with Amazon OpenSearch Service
Amazon OpenSearch Service has supported both lexical and vector search since the introduction of its kNN plugin in 2020. Amazon Bedrock-hosted models can now be used in conjunction with OpenSearch Service’s vector database capabilities to implement semantic search, retrieval augmented generation (RAG), recommendation engines, and rich media search. The recent launch of the vector engine for Amazon OpenSearch Serverless makes it even easier to deploy such solutions.
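Under the hood, vector search ranks documents by embedding similarity; a toy cosine-similarity search over hypothetical embeddings illustrates the idea (a real deployment would use OpenSearch's k-NN index and model-generated vectors):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def top_match(query_vec, doc_vecs):
    """Return the document id whose embedding is closest to the query."""
    return max(doc_vecs, key=lambda doc_id: cosine(query_vec, doc_vecs[doc_id]))

# Hypothetical 3-dimensional embeddings; real ones would come from an
# embedding model (e.g. one hosted on Amazon Bedrock) with far more dimensions
docs = {"doc-a": [1.0, 0.0, 0.0], "doc-b": [0.0, 1.0, 0.0], "doc-c": [0.7, 0.7, 0.0]}
best = top_match([0.9, 0.1, 0.0], docs)
```

Because similarity is computed on meaning-bearing embeddings rather than exact terms, a query can match documents that share no keywords with it.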
How KeyCore can help
The Amazon OpenSearch Service and its features can be complex and difficult to implement. At KeyCore, our experienced team of AWS consultants can help you design and implement the right solution for your business. We can also provide you with managed services to ensure that your OpenSearch Service runs smoothly and efficiently. Contact us today to learn more about how we can help.
Read the full blog posts from AWS
- Amazon OpenSearch Service H1 2023 in review
- Automate the archive and purge data process for Amazon RDS for PostgreSQL using pg_partman, Amazon S3, and AWS Glue
- Amazon CloudWatch metrics for Amazon OpenSearch Service storage and shard skew health
- Try semantic search with the Amazon OpenSearch Service vector engine
Networking & Content Delivery
Optimizing AdTech End-User Experiences Using Amazon CloudWatch Internet Monitor
AdTech platforms rely heavily on internet conditions for optimal delivery of ads. In this post, we’ll explain how you can leverage Amazon CloudWatch Internet Monitor for continuous monitoring of internet conditions to keep ad workflows and delivery running smoothly, improving end-user experiences in the world of AdTech.
New AWS Networking Core Digital Knowledge Badge
AWS Training and Certification has released a new Networking Core Knowledge Badge Readiness Path to help you demonstrate your AWS networking knowledge in a public and verifiable way. To earn the badge you’ll need to pass an assessment, and the badge is a great way to show potential employers your technical skills in the AWS world.
Configuring Client IP Address Preservation with a Network Load Balancer in AWS Global Accelerator
AWS Global Accelerator has released support for client IP address preservation with Network Load Balancer endpoints. This feature allows you to maintain the source IP address of the original client for packets arriving at Network Load Balancer endpoints configured as Global Accelerator endpoints. Preserving the client IP lets you apply source-based security rules and keep accurate logs of where traffic originates.
At KeyCore, we offer professional and managed services in the AWS world. We have the expertise to help you leverage the latest services like Amazon CloudWatch Internet Monitor for continuous monitoring, the new Networking Core Digital Knowledge Badge, and configuring client IP address preservation with a Network Load Balancer in AWS Global Accelerator. Our team of experienced AWS consultants can help you throughout the entire process and ensure that you are taking advantage of the best features to optimize your end-user experience. Contact us today to learn more.
Read the full blog posts from AWS
- Optimizing AdTech end-user experiences Using Amazon CloudWatch Internet Monitor
- New AWS Networking Core Digital Knowledge Badge
- Configuring client IP address preservation with a Network Load Balancer in AWS Global Accelerator
AWS Compute Blog
Protecting Data on AWS Snowball Edge and Lambda@Edge
The AWS Snow Family is a set of purpose-built devices that enable petabyte-scale movement of data from on-premises locations to AWS Regions. To keep your data secure in transit and at rest on the Snow device, it is important to configure security groups correctly. In this post, we discuss how to configure and manage security groups on AWS Snowball Edge devices.
Using Security Groups
Security groups act as virtual firewalls with rules that control inbound and outbound traffic for an instance. When you launch an instance on a Snowball Edge device, you assign a security group to its network interface; the security group then controls which ports are open, helping protect the Snowball Edge device from malicious traffic. For example, you can use a security group rule to limit access to the Snowball Edge device to a specific IP address.
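To make the allow-list semantics concrete, here is a minimal, self-contained sketch of how a security group evaluates inbound traffic. This is an illustration of the concept only, not the device API: the rule fields and helper names are invented for this example.

```python
from dataclasses import dataclass
from ipaddress import ip_address, ip_network

@dataclass
class IngressRule:
    protocol: str   # e.g. "tcp"
    from_port: int
    to_port: int
    cidr: str       # e.g. "10.0.0.0/24"

def is_allowed(rules, protocol, port, source_ip):
    """Return True if any ingress rule admits this packet.

    Security groups are allow-lists: traffic is dropped
    unless at least one rule matches it.
    """
    return any(
        r.protocol == protocol
        and r.from_port <= port <= r.to_port
        and ip_address(source_ip) in ip_network(r.cidr)
        for r in rules
    )

# Only allow SSH from one admin workstation, mirroring the
# single-IP example above (/32 matches exactly one address).
rules = [IngressRule("tcp", 22, 22, "203.0.113.10/32")]
print(is_allowed(rules, "tcp", 22, "203.0.113.10"))  # True
print(is_allowed(rules, "tcp", 22, "198.51.100.7"))  # False
```

The important property is the default-deny behavior: removing all rules blocks all inbound traffic, which is why narrowing the CIDR is an effective way to lock a device down.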
Managing Security Groups
You can manage the security groups on Snowball Edge devices using the AWS Snowball console, the AWS CLI, or the AWS Snowball API. For example, you can use the AWS CLI to list the security groups associated with a Snowball Edge device, which helps you quickly spot any rules that need to be modified.
Protecting Lambda Functions with CloudFront and Lambda@Edge
In addition to protecting Snowball Edge devices, Amazon CloudFront and Lambda@Edge can be used to protect Lambda function URLs. CloudFront helps absorb DDoS attacks, while a Lambda@Edge function adds the headers needed to authenticate each request before it reaches the function. This ensures that only authorized requests reach the Lambda function.
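As a sketch of the pattern, the Lambda@Edge function below injects a shared-secret header into each origin request; the function URL's handler would then reject requests lacking it. This is a simplified illustration (the AWS post may use SigV4 signing instead), and the header name and `ORIGIN_SECRET` variable are assumptions for this example.

```python
import os

# Hypothetical shared secret; in practice, store it in the function's
# configuration or AWS Secrets Manager, and verify it again inside
# the Lambda function that serves the function URL.
ORIGIN_SECRET = os.environ.get("ORIGIN_SECRET", "change-me")

def add_origin_verify_header(event, context):
    """Lambda@Edge origin-request handler.

    Injects a custom header so the origin can reject any request
    that did not travel through this CloudFront distribution.
    """
    request = event["Records"][0]["cf"]["request"]
    request["headers"]["x-origin-verify"] = [
        {"key": "x-origin-verify", "value": ORIGIN_SECRET}
    ]
    return request
```

The event shape above follows the documented CloudFront origin-request event structure, where headers are lowercase keys mapping to lists of key/value pairs.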
Using KeyCore To Secure Your Data
Although managing security groups and setting up CloudFront and Lambda@Edge can help secure your data, making sure that your data is secure can be complex. At KeyCore, our team of AWS experts can help you protect your data and ensure that it’s secure. We provide both professional services and managed services, so you don’t have to worry about managing your security groups or setting up CloudFront and Lambda@Edge. Contact us today to learn more about how we can help you keep your data secure.
Read the full blog posts from AWS
- Using and Managing Security Groups on AWS Snowball Edge devices
- Protecting an AWS Lambda function URL with Amazon CloudFront and Lambda@Edge
AWS for M&E Blog
Dynamically Mapping MediaPackage Origins with Amazon CloudFront and AWS Lambda@Edge
AWS Media Services are designed to be elastic and on-demand, allowing customers to spin up and tear down entire media pipelines for single live events. This is made possible by utilizing Amazon CloudFront and AWS Lambda@Edge, technologies that enable customers to create dynamic origin mappings for their media streaming pipelines.
With CloudFront, customers can route viewer requests to the best origin for their content, and Lambda@Edge lets them map dynamic origins onto a single CloudFront distribution. This enables customers to stand up streaming pipelines quickly, respond to changes in the environment, and improve the viewer experience.
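The dynamic-origin idea can be sketched as a Lambda@Edge origin-request handler that picks a MediaPackage endpoint based on the request path. The mapping table, hostnames, and function name here are illustrative assumptions; a real deployment might load the mapping from DynamoDB or bundle it with the function.

```python
# Hypothetical mapping from event name (first path segment) to a
# MediaPackage endpoint hostname.
ORIGIN_MAP = {
    "event-a": "abc123.mediapackage.eu-west-1.amazonaws.com",
    "event-b": "def456.mediapackage.us-east-1.amazonaws.com",
}

def select_media_origin(event, context):
    """Lambda@Edge origin-request handler choosing an origin per event."""
    request = event["Records"][0]["cf"]["request"]
    event_name = request["uri"].lstrip("/").split("/", 1)[0]
    domain = ORIGIN_MAP.get(event_name)
    if domain:
        # Point the request at the chosen custom origin and set the
        # Host header to match it.
        request["origin"] = {
            "custom": {
                "domainName": domain,
                "port": 443,
                "protocol": "https",
                "path": "",
                "sslProtocols": ["TLSv1.2"],
                "readTimeout": 30,
                "keepaliveTimeout": 5,
                "customHeaders": {},
            }
        }
        request["headers"]["host"] = [{"key": "host", "value": domain}]
    return request
```

Because the mapping lives in the function rather than the distribution, new events can be added or retired without redeploying CloudFront.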
Deploying Streaming Pipelines with CloudFormation
When deploying streaming pipelines, customers can use CloudFormation templates that define the resources they need. These templates make it straightforward to deploy a pipeline for a single event and tear it down again afterwards.
Using CloudFormation, customers can also define the mapping between their CloudFront distribution and the origin servers they want to use, so they can set up streaming pipelines quickly and switch between different origins for their content.
Ensuring an Optimal Viewer Experience with Lambda@Edge
By leveraging Lambda@Edge, customers can also ensure that their viewers have the most optimal viewing experience. Lambda@Edge functions make sure the right origin is always used for the content, so customers can respond quickly to changes in the environment.
Because these functions map the dynamic origins onto the CloudFront distribution, customers can switch between origins without redeploying the distribution, keeping the viewer experience consistently good.
KeyCore Can Help
At KeyCore, we are experts in AWS and can help you take advantage of the elastic, on-demand nature of AWS Media Services. Our experienced team can help you quickly create and deploy streaming pipelines with CloudFormation, and ensure an optimal viewer experience with Lambda@Edge.
We also offer both professional and managed services to help you get the most out of your AWS Media Services. Contact us to learn more about how we can help you get the most out of your live streaming pipelines.
Read the full blog posts from AWS
AWS Storage Blog
Continental Automotive Edge and Amazon Elastic Block Store
Continental and AWS Automotive Edge
Continental and AWS have collaborated to create the Continental Automotive Edge (CAEdge) framework – a modular hardware and software environment that connects the vehicle to the cloud and provides virtual workbenches to develop, supply, and maintain software-intensive system functions for a wide range of automotive applications. To accelerate simulations, Continental adopted Mountpoint for Amazon S3 to access simulation data stored in Amazon S3, and saw a 20% improvement in simulation performance.
Amazon Elastic Block Store at 15 Years
Amazon Elastic Block Store (EBS) has been an important part of Amazon Web Services (AWS) since its introduction in 2008. In 2009, when the author joined AWS, they had a meeting with Andrew Certain, a senior engineer on EBS, who gave a detailed overview of how EBS was implemented and the plans for its future. EBS provides block-level storage for Amazon Elastic Compute Cloud (EC2) instances and offers customers a wide range of configuration options, such as volume type, storage size, and performance. Furthermore, EBS enables customers to take snapshots of their volumes and store them in Amazon S3, and supports encryption of data at rest and in transit.
How KeyCore Can Help
KeyCore can help customers leverage their existing AWS environment to increase performance and efficiency. Our consultants can provide customers with an in-depth analysis of their current setup and offer customized solutions to enable customers to get the most out of their AWS environment. Additionally, KeyCore can provide assistance with the setup and configuration of Amazon S3 and Amazon EBS, as well as help customers run simulations, take snapshots, and encrypt data.
Read the full blog posts from AWS
- How Continental uses Mountpoint for Amazon S3 in autonomous driving development – accelerating simulation performance by 20%
- Amazon Elastic Block Store at 15 Years
AWS Partner Network (APN) Blog
Gaining Secure Access to Environments in AWS Accounts with OpenID Connect and GitLab
When building out a CI/CD pipeline, there are several ways to ensure secure access to environments in AWS Accounts. Given that pipelines can have create and destroy access to important components of an AWS-based environment, it is important to evaluate how GitLab Runner authenticates and authorizes access to AWS accounts. OpenID Connect (OIDC) for GitLab CI/CD jobs can provide security when accessing AWS services using GitLab.
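The pattern can be sketched in a `.gitlab-ci.yml` fragment like the one below, where GitLab issues a short-lived ID token that the job exchanges for temporary AWS credentials. The role ARN, account ID, audience URL, and job layout are placeholders; the post's exact configuration may differ.

```yaml
deploy:
  image:
    name: amazon/aws-cli:latest
    entrypoint: [""]          # override the image's `aws` entrypoint so scripts run in a shell
  id_tokens:
    GITLAB_OIDC_TOKEN:
      aud: https://gitlab.example.com   # must match the IAM OIDC provider's audience
  script:
    # Exchange the GitLab ID token for temporary AWS credentials.
    - >
      CREDS=$(aws sts assume-role-with-web-identity
      --role-arn "arn:aws:iam::111122223333:role/gitlab-deploy"
      --role-session-name "gitlab-${CI_PIPELINE_ID}"
      --web-identity-token "${GITLAB_OIDC_TOKEN}"
      --duration-seconds 3600
      --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]'
      --output text)
    - export AWS_ACCESS_KEY_ID=$(echo "$CREDS" | cut -f1)
    - export AWS_SECRET_ACCESS_KEY=$(echo "$CREDS" | cut -f2)
    - export AWS_SESSION_TOKEN=$(echo "$CREDS" | cut -f3)
    - aws sts get-caller-identity
```

The advantage over long-lived access keys is that nothing secret is stored in GitLab: the IAM trust policy decides which projects and branches may assume the role.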
Migrating to AWS to Lower Cost and Speed Up Data Modeling
Migrating to a hosting provider to save costs presents an opportunity to improve application performance and optimize DevOps processes. MasterWorks, a Christian ministry near Seattle, needed to migrate an application that uses machine learning models from a SaaS provider to AWS. Avahi Technologies and AWS collaborated to help MasterWorks with the migration, resulting in lower costs and faster data modeling.
Harnessing Generative AI and Semantic Search for Intelligent Knowledge Management
As the digitization of businesses continues to increase, the value of effective knowledge management increases. To use large language models (LLMs) and semantic search effectively, businesses must have a modern data strategy and technical skill sets. Generative AI and semantic search can be used together to improve productivity and operational efficiency by providing enhanced search capabilities and actionable insights.
Automating SAP S/4HANA Migration with AWS and IT-Conductor
Cloud migrations for SAP HANA or S/4HANA can be challenging. Businesses must consider timing, tools, and cost when migrating SAP workloads to the cloud. IT-Conductor and BGP Management Consulting worked together on a real use-case scenario, helping a customer migrate their S/4HANA from on-premises to AWS. Automated solutions and services were used to complete the migration.
Utilizing Aviatrix Secure Networking to Simplify Multi-Cloud Connectivity and Leverage AWS
As businesses expand their cloud infrastructure, they require the ability to connect their AWS environments to other cloud providers. Connecting different cloud networks securely can be difficult due to varying networking architectures, security models, and operational tools. Aviatrix Systems simplifies this process and enables businesses to leverage AWS while connecting to other cloud providers.
At KeyCore, we know that cloud migrations, secure access to environments, and multi-cloud connectivity pose challenges for many businesses. Our team of certified AWS consultants can help you to migrate to the cloud, manage secure access to your environments, and simplify your multi-cloud connectivity. Get in touch with us to find out how we can help you make the most out of your cloud infrastructure.
Read the full blog posts from AWS
- Setting Up OpenID Connect with GitLab CI/CD to Provide Secure Access to Environments in AWS Accounts
- Avahi Migrates MasterWorks’ Machine Learning App to AWS to Lower Cost and Speed Up Data Modeling
- Harnessing Generative AI and Semantic Search to Revolutionize Enterprise Knowledge Management
- Automating SAP S/4HANA Migration with IT-Conductor, BGP Managed Services, and AWS
- Using Aviatrix Secure Networking to Simplify Multi-Cloud Connectivity and Fully Leverage AWS
AWS Cloud Enterprise Strategy Blog
What Are the Principles of Cloud Transformation?
Cloud transformation is a process of optimizing an organization’s resources and processes in order to achieve its goals and objectives more efficiently and effectively. To get there, organizations must adhere to certain principles. In this blog post, we explore ten principles of cloud transformation.
Principle 1: Automate and Automate Again
Automating manual processes and operations is essential to cloud transformation. Removing the manual effort required to complete tasks increases efficiency, reduces costs, and shortens the time needed to deploy and manage resources. Automation can also strengthen a system’s security and compliance posture and help ensure high availability and reliability of services.
Principle 2: Monitor and Measure
Organizations must monitor their systems, processes, and resources to ensure that their cloud transformations are successful. This includes tracking the performance of the system and measuring its effectiveness. By doing this, organizations can identify areas for improvement and adapt their strategies accordingly.
Principle 3: Always Optimize
Cloud transformation involves optimizing resources and processes. Organizations must continuously analyze and optimize their systems and processes to ensure that they are meeting their goals and objectives. This includes optimizing the utilization of resources, streamlining processes, and improving the user experience.
Principle 4: Leverage the Cloud
Organizations must leverage the cloud to optimize their systems and resources. This includes taking advantage of cloud services such as storage, computing, networking, and analytics to ensure that their applications and services are running efficiently and effectively.
Principle 5: Security and Compliance
Organizations must ensure that their systems and resources are secure and compliant with all applicable regulations and standards. This includes developing and implementing security policies and procedures, as well as monitoring systems for potential vulnerabilities.
Principle 6: Engage with All Stakeholders
Organizations must engage with all stakeholders, including customers, partners, and employees, in order to ensure that they are aware of the cloud transformation initiatives and that their needs are being met. This includes providing clear and concise communication about the goals and objectives of the initiative.
Principle 7: Be Agile and Adaptive
Organizations must be agile and adaptive when executing their cloud transformation initiatives. This includes being able to respond quickly to changes in the environment and adapting their strategies and processes as needed.
Principle 8: Have Clear Goals and Objectives
Organizations must have clear goals and objectives for their cloud transformation initiatives. This includes identifying the desired outcomes and measuring the progress towards them.
Principle 9: Focus on Modern Technologies
Organizations must focus on modern technologies when executing their cloud transformation initiatives. This includes utilizing cloud services such as serverless and containerized applications, as well as modern development and deployment tools.
Principle 10: The Human Element
It is important to remember that cloud transformation initiatives involve people as well as technology. This includes providing training and support to ensure that employees are able to effectively utilize the new technologies and processes.
At KeyCore, we are an advanced AWS consultancy providing both professional services and managed services. We understand the complexities of cloud transformation and can help you to navigate the process with ease. Contact us today to learn more about how we can help your organization.
Read the full blog posts from AWS
AWS HPC Blog
Exploring Hpc7a – the newest AMD-powered member of the HPC Instance Family
AWS recently launched a new High Performance Computing (HPC) instance, Hpc7a, powered by AMD processors. The new instance is ideal for running compute-heavy workloads such as computational fluid dynamics (CFD), molecular dynamics simulations, and weather prediction codes. To give readers a better understanding of what it offers, this post takes a deep dive into its performance results.
AMD-Powered Performance
Hpc7a instances are powered by 4th generation AMD EPYC processors, offering up to 192 physical cores and up to 768 GiB of memory, along with 300 Gbps of Elastic Fabric Adapter (EFA) network bandwidth for tightly coupled workloads. This combination enables the instance to handle even the most challenging HPC workloads.
Scalability and Flexibility
One of the key benefits of Hpc7a is its scalability and flexibility. For example, you can launch multiple Hpc7a instances and combine them into a cluster, letting you scale compute capacity quickly and run larger, more complex workloads. You can also use Hpc7a with both AWS Batch and AWS ParallelCluster, making it ideal for applications that require both high performance and flexibility.
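As a sketch of the cluster setup, an AWS ParallelCluster 3 configuration placing Hpc7a instances in a Slurm queue might look like the following; the Region, subnet IDs, and instance counts are placeholders to adapt to your environment.

```yaml
Region: eu-west-1
Image:
  Os: alinux2
HeadNode:
  InstanceType: c5.xlarge
  Networking:
    SubnetId: subnet-0123456789abcdef0   # placeholder
Scheduling:
  Scheduler: slurm
  SlurmQueues:
    - Name: compute
      ComputeResources:
        - Name: hpc7a
          InstanceType: hpc7a.96xlarge
          MinCount: 0          # scale to zero when the queue is idle
          MaxCount: 16
      Networking:
        SubnetIds:
          - subnet-0123456789abcdef0     # placeholder
        PlacementGroup:
          Enabled: true        # keep nodes close together for low-latency MPI
```

With `MinCount: 0`, ParallelCluster launches compute nodes only when jobs are queued, which keeps costs proportional to actual usage.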
How KeyCore Can Help
At KeyCore, we offer both professional and managed services for AWS HPC solutions. Our team of experts can help you take advantage of the new Hpc7a instance to get the most out of your compute-heavy workloads. We provide comprehensive setup and deployment services, including the creation of clusters and configuration of compute, storage, and networking. We also provide ongoing support and maintenance services to ensure that your HPC solutions are always running optimally. To learn more about our services, please visit our website at https://www.keycore.dk.
Read the full blog posts from AWS
The latest AWS security, identity, and compliance launches, announcements, and how-to posts.
AWS Digital Sovereignty Pledge
At AWS, we’re dedicated to helping customers meet their digital sovereignty requirements. We strive to make AWS sovereign-by-design, offering a range of sovereignty controls and features. Last year, we announced the AWS Digital Sovereignty Pledge, our commitment to helping customers meet their digital sovereignty needs. Our pledge includes offering dedicated infrastructure options to enhance sovereignty and security for our customers.
Cedar: An Open Source Language for Writing and Evaluating Authorization Policies
We’ve also released Cedar, an open source language for writing and evaluating authorization policies. Cedar makes it easier to manage access to your application’s resources in a reusable and modular way. You can use Cedar policies to express your permissions, and the authorization engine can be used by your application to decide if a user has the right to access certain resources.
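To give a flavor of the language, a Cedar policy granting a single user view access to a group of resources might look like the following; the entity types and identifiers here are illustrative, not taken from the AWS post.

```
permit(
  principal == User::"alice",
  action == Action::"viewPhoto",
  resource in Album::"vacation"
);
```

At request time, the authorization engine evaluates each policy against the principal, action, and resource of the request and allows it only if some `permit` policy matches and no `forbid` policy does.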
Alignment With BIO Thema-uitwerking Clouddiensten
We are pleased to announce the launch of a Landing Zone for the Baseline Informatiebeveiliging Overheid (BIO) framework to support our Dutch customers in their compliance needs. We have also demonstrated compliance with the BIO Thema-uitwerking Clouddiensten, underlining our continued commitment to its requirements.
KeyCore as Your Partner for AWS Security, Identity, and Compliance Launches
At KeyCore, we provide professional and managed services to help customers meet their AWS security needs. We have a team of experienced AWS professionals who can provide the best possible advice and help you ensure that your systems are secure and compliant. Our team also has the expertise to help you with the latest AWS security, identity, and compliance launches. Contact us today to learn more about how we can help you.
Read the full blog posts from AWS
- AWS Digital Sovereignty Pledge: Announcing new dedicated infrastructure options
- How we designed Cedar to be intuitive to use, fast, and safe
- AWS launched a Landing Zone for the Baseline Informatiebeveiliging Overheid (BIO) and is issued a certificate for the BIO Thema-uitwerking Clouddiensten
Innovating in the Public Sector
Innovating in the Public Sector with AWS
The AWS Well-Architected Framework enables organizations to build secure, high-performing, resilient, and efficient infrastructure for workloads in the cloud. To further help public sector organizations innovate with the framework, AWS has announced the launch of a Government Lens, which provides customer-proven design principles, scenarios, and technology-agnostic best practices tailored to the unique context and requirements of governments.
Unifying Data Access with GraphQL
GraphQL is a powerful query language and server-side runtime system that prioritizes giving clients exactly the information they request and no more. This makes it an ideal tool for streamlining data access and helping public sector organizations focus on their data. AWS provides a reference architecture leveraging serverless technologies in the AWS GovCloud (US) Regions that makes it easy to build GraphQL-enabled solutions and unify data access in real-time.
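To illustrate the "exactly the information they request" property, a client query against a hypothetical public-sector schema (all type and field names here are invented for the example) might look like this:

```graphql
query GetCitizenRecord {
  citizen(id: "12345") {
    name
    permits {
      type
      status
      issuedAt
    }
  }
}
```

The server resolves only the requested fields, so the same unified API can serve many clients without over-fetching from the underlying data sources.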
How KeyCore Can Help
KeyCore is the leading Danish AWS consultancy and our expert team has the experience and knowledge needed to help public sector organizations take advantage of the latest AWS tools and technologies. Our professional services and managed services offerings make it easy to leverage the Well-Architected Framework Government Lens and the GraphQL reference architecture in order to maximize efficiency, flexibility, and security in the cloud. We always keep our clients at the forefront of innovation, helping to ensure that public sector organizations have the tools they need to successfully meet their goals.
Read the full blog posts from AWS
- Introducing the Government Lens for the AWS Well-Architected Framework
- Implement a secure, serverless GraphQL architecture in AWS GovCloud (US) to optimize API flexibility and efficiency
The Internet of Things on AWS – Official Blog
How AWS IoT Can Help Manage and Protect Your Renewable Energy Systems
Organizing Your IoT Software Packages with AWS IoT Software Package Catalog
As the number of connected IoT devices grows rapidly, so does the need for efficient fleet management. A central task of IoT fleet management is deploying software packages to devices, with each version containing the code that governs a device’s behavior and data.
AWS IoT Software Package Catalog provides a service to help manage and organize this software. It lets businesses track the different versions of the software packages they deliver, and makes it straightforward to upgrade or downgrade package versions across a fleet, increasing efficiency and ensuring customer satisfaction.
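The Package Catalog tracks versions server-side; as a toy illustration of the upgrade/downgrade decision the text describes, the sketch below compares each device's reported version against a target version. The function names and dotted-version scheme are assumptions for this example, not the service's API.

```python
def parse_version(v):
    """Parse a dotted version string like '1.4.2' into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

def plan_rollout(device_versions, target):
    """Split a fleet into upgrade / downgrade / up-to-date buckets."""
    plan = {"upgrade": [], "downgrade": [], "current": []}
    t = parse_version(target)
    for device, version in device_versions.items():
        v = parse_version(version)
        if v < t:
            plan["upgrade"].append(device)
        elif v > t:
            plan["downgrade"].append(device)
        else:
            plan["current"].append(device)
    return plan

fleet = {"sensor-1": "1.3.0", "sensor-2": "1.4.2", "sensor-3": "2.0.0"}
print(plan_rollout(fleet, "1.4.2"))
# {'upgrade': ['sensor-1'], 'downgrade': ['sensor-3'], 'current': ['sensor-2']}
```

Tuple comparison handles multi-digit components correctly (e.g. `1.10.0` sorts after `1.9.0`), which naive string comparison would get wrong.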
Protecting IoT Devices with AWS IoT
The Internet of Things (IoT) has become increasingly relevant in many industries. With a growing population of connected devices transmitting sensitive data, security has become a priority. And as demand for energy increases, renewable energy systems have become a vital source of sustainable energy.
AWS IoT can provide security measures to protect renewable energy systems from potential threats. Through AWS IoT Device Defender, businesses can implement and manage security policies that reduce risks and ensure data and device integrity. Additionally, AWS IoT Events can identify patterns in data streams that are indicative of external attacks and send notifications to the user.
KeyCore’s Managed and Professional Services
Organizing and protecting IoT devices can be a daunting task. KeyCore provides professional services to help businesses with their fleet management needs. Our team of AWS certified professionals will assess your current system and provide the best solution for your renewable energy systems. We also offer managed services to maintain your system and check for any potential issues. To learn more about KeyCore’s offerings, visit https://www.keycore.dk.