With the most recent release of the AWS Toolkit for JetBrains, customers can connect to Amazon RDS or Amazon Redshift with only a few clicks. Using the AWS Toolkit for JetBrains, you can use either IAM authentication or credentials stored in AWS Secrets Manager to connect to Amazon Redshift or RDS databases. You no longer need long-lived database credentials or copy-pasted auth tokens from the AWS CLI; credentials are generated by the Toolkit as they are needed instead of being saved to disk.
Amazon ElastiCache is now available in the Los Angeles AWS Local Zones. You can now run latency-sensitive ElastiCache workloads local to end-users and resources in Local Zones.
Today, the AWS Copilot CLI for Amazon Elastic Container Service (Amazon ECS) launched version 0.4.0. Starting with this release, you can enable autoscaling for services based on average CPU and memory utilization and provide a maximum and minimum number of tasks. AWS Copilot also retains the service’s desired count after autoscaling occurs, so that when a deployment starts, your service remains scaled out or in according to resource utilization.
AWS Launch Wizard now allows customers to deploy SAP workloads using Red Hat Enterprise Linux Version 8.1.
Posted by Kanstantsin Sokal, Software Engineer, MediaPipe team
Earlier this year, the MediaPipe Team released the Face Mesh solution, which estimates the approximate 3D face shape via 468 landmarks in real-time on mobile devices. In this blog, we introduce a new face transform estimation module that establishes a researcher- and developer-friendly semantic API useful for determining the 3D face pose and attaching virtual objects (like glasses, hats or masks) to a face.
The new module establishes a metric 3D space and uses the landmark screen positions to estimate common 3D face primitives, including a face pose transformation matrix and a triangular face mesh. Under the hood, a lightweight statistical analysis method called Procrustes Analysis is employed to drive a robust, performant and portable logic. The analysis runs on CPU and has a minimal speed/memory footprint on top of the original Face Mesh solution.
Figure 1: An example of virtual mask and glasses effects, based on the MediaPipe Face Mesh solution.
The MediaPipe Face Landmark Model performs single-camera face landmark detection in the screen coordinate space: the X and Y coordinates are normalized screen coordinates, while the Z coordinate is relative and is scaled like the X coordinate under the weak-perspective projection camera model. While this format is well-suited for some applications, it does not directly enable crucial features like aligning a virtual 3D object with a detected face.
The newly introduced module moves away from the screen coordinate space towards a metric 3D space and provides the necessary primitives to handle a detected face as a regular 3D object. By design, you’ll be able to use a perspective camera to project the final 3D scene back into the screen coordinate space with a guarantee that the face landmark positions are not changed.
Metric 3D Space
The Metric 3D space established within the new module is a right-handed orthonormal metric 3D coordinate space. Within the space, there is a virtual perspective camera located at the space origin and pointed in the negative direction of the Z-axis. It is assumed that the input camera frames are observed by exactly this virtual camera and therefore its parameters are later used to convert the screen landmark coordinates back into the Metric 3D space. The virtual camera parameters can be set freely, however for better results it is advised to set them as close to the real physical camera parameters as possible.
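To make that conversion concrete, here is a minimal sketch, not the module’s actual code, of unprojecting a single screen-space landmark back into the metric camera space under a simple pinhole model; the field of view, frame size, and depth value are illustrative assumptions:

```python
import numpy as np

# Assumed virtual-camera parameters (vertical FOV and frame size).
fov_y_deg, width, height = 63.0, 640, 480
f = (height / 2.0) / np.tan(np.radians(fov_y_deg) / 2.0)  # focal length in pixels

def unproject(x_px, y_px, depth_cm):
    """Map a screen landmark back into the right-handed metric camera space.

    The virtual camera sits at the origin looking down the negative Z-axis,
    so a point depth_cm centimeters in front of it has Z = -depth_cm.
    Square pixels are assumed, so the same focal length serves X and Y.
    """
    x = (x_px - width / 2.0) * depth_cm / f
    y = -(y_px - height / 2.0) * depth_cm / f  # screen Y grows downward
    return np.array([x, y, -depth_cm])

p = unproject(320, 240, 50.0)  # a landmark at the image center, 50 cm away
```

Because the same virtual camera later re-projects the scene, landmarks unprojected this way land back on their original screen positions, which is the guarantee mentioned above.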
Figure 2: A visualization of multiple key elements in the metric 3D space. Created in Cinema 4D.
Canonical Face Model
The Canonical Face Model is a static 3D model of a human face, which follows the 3D face landmark topology of the MediaPipe Face Landmark Model. The model bears two important functions:
- Defines metric units: the scale of the canonical face model defines the metric units of the Metric 3D space. A metric unit used by the default canonical face model is a centimeter;
- Bridges static and runtime spaces: the face pose transformation matrix is – in fact – a linear map from the canonical face model into the runtime face landmark set estimated on each frame. This way, virtual 3D assets modeled around the canonical face model can be aligned with a tracked face by applying the face pose transformation matrix to them.
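As an illustration of the second point, attaching a virtual asset reduces to multiplying its vertices by the 4x4 face pose transformation matrix in homogeneous coordinates. The rotation, translation, and vertex values below are invented for the sketch:

```python
import numpy as np

# A hypothetical face pose: a 20-degree rotation about Y plus a translation
# (in centimeters, matching the default canonical face model's metric unit).
angle = np.radians(20.0)
pose = np.eye(4)
pose[:3, :3] = [[np.cos(angle), 0.0, np.sin(angle)],
                [0.0, 1.0, 0.0],
                [-np.sin(angle), 0.0, np.cos(angle)]]
pose[:3, 3] = [1.0, 2.0, -40.0]

# Vertices of a virtual asset (say, glasses) modeled around the canonical face.
asset = np.array([[0.0, 0.0, 0.0],
                  [5.0, 0.0, 0.0]])

# Append w = 1 and apply the pose to move the asset onto the tracked face.
homogeneous = np.hstack([asset, np.ones((len(asset), 1))])
tracked = (pose @ homogeneous.T).T[:, :3]
```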
Face Transform Estimation
The face transform estimation pipeline is a key component, responsible for estimating face transform data within the Metric 3D space. On each frame, the following steps are executed in the given order:
- Face landmark screen coordinates are converted into the Metric 3D space coordinates;
- Face pose transformation matrix is estimated as a rigid linear mapping from the canonical face metric landmark set onto the runtime face metric landmark set, in a way that minimizes the difference between the two;
- A face mesh is created using the runtime face metric landmarks as the vertex positions (XYZ), while both the vertex texture coordinates (UV) and the triangular topology are inherited from the canonical face model.
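The second step above is essentially an orthogonal Procrustes problem. A minimal numpy sketch of one standard (Kabsch-style) solution follows; the module’s actual analysis may differ in details such as per-landmark weighting:

```python
import numpy as np

def rigid_align(canonical, runtime):
    """Estimate rotation R and translation t such that R @ c_i + t ~ r_i.

    Both inputs are (N, 3) arrays of corresponding metric landmarks.
    """
    mu_c, mu_r = canonical.mean(axis=0), runtime.mean(axis=0)
    H = (canonical - mu_c).T @ (runtime - mu_r)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_r - R @ mu_c
    return R, t
```

Packing the recovered rotation and translation into a 4x4 matrix yields a face pose transform of the kind described above.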
Effect Renderer
The Effect Renderer is a component that serves as a working example of a face effect renderer. It targets the OpenGL ES 2.0 API to enable real-time performance on mobile devices and supports the following rendering modes:
- 3D object rendering mode: a virtual object is aligned with a detected face to emulate an object attached to the face (example: glasses);
- Face mesh rendering mode: a texture is stretched on top of the face mesh surface to emulate a face painting technique.
In both rendering modes, the face mesh is first rendered as an occluder straight into the depth buffer. This step helps create a more believable effect by hiding elements that fall behind the face surface.
Figure 3: An example of face effects rendered by the Face Effect Renderer.
Using Face Transform Module
The face transform estimation module is available as a part of the MediaPipe Face Mesh solution. It comes with face effect application examples, available as graphs and mobile apps on Android or iOS. If you wish to go beyond examples, the module contains generic calculators and subgraphs – those can be flexibly applied to solve specific use cases in any MediaPipe graph. For more information, please visit our documentation.
We look forward to publishing more blog posts related to new MediaPipe pipeline examples and features. Please follow the MediaPipe label on the Google Developers Blog and the Google Developers Twitter account (@googledevs).
We would like to thank Chuo-Ling Chang, Ming Guang Yong, Jiuqiang Tang, Gregory Karpiak, Siarhei Kazakou, Matsvei Zhdanovich and Matthias Grundmann for contributing to this blog post.
Starting today, you can receive anomaly detection alert notifications with root cause analysis, so you can proactively take action and minimize unintentional spend.
Amazon Announces Next-Generation Fire TV Stick, Fire TV Stick Lite, Redesigned User Experience, and Amazon Luna – A Cloud Gaming Service
Today, we announced the next-generation Fire TV Stick, all-new Fire TV Stick Lite, Fire TV Cube expansion in Europe, and a redesigned Fire TV user experience with improved content discovery and engagement features for Fire TV developers. Marc Whitten, VP of Entertainment Devices and Services, revealed the new Fire TV experience for the first time during a presentation today.
The Fire TV team is focused on making the entertainment experience better for customers by continuously improving device performance and making it easier for customers to find and discover great content. We continue to invent on behalf of customers, with sustained growth and more than 100 million devices sold globally. Here’s a look at what we announced today:
1) Our Best-Selling Fire TV Stick Now 50% More Powerful with HDR and Dolby Atmos Support
The all-new Fire TV Stick features an enhanced 1.7 GHz quad-core processor that makes it 50% more powerful than the previous generation. The new Fire TV Stick delivers fast streaming in 1080p at 60fps with HDR compatibility and is running on our latest Fire OS 7. The dual-band, dual-antenna WiFi supports 5 GHz networks for more stable streaming and fewer dropped connections. Fire TV Stick also features Dolby Atmos for immersive sound with compatible content and speakers, and an Alexa Voice Remote with dedicated power, volume, and mute buttons for easy control of TVs, soundbars, and A/V receivers. Shipping begins in select countries next week. Customers in Canada, France, Germany, India, Italy, Japan, Spain, the United Kingdom, and the United States can begin pre-ordering today.
2) Introducing Fire TV Stick Lite for Only $29.99
Fire TV Stick Lite is a new, even more affordable way to begin streaming in full HD. Fire TV Stick Lite is 50% more powerful than the previous-generation Fire TV Stick and includes the most processing power of any streaming media player under $30. It features HDR support and comes with Alexa Voice Remote Lite, a new remote that allows the use of voice to find, launch, and control content. The new Fire TV Stick Lite runs on Fire OS 7. Shipping begins in select countries next week. Customers in Australia, Brazil, Canada, France, Germany, India, Italy, Mexico, Spain, the United Kingdom, and the United States can begin pre-ordering today.
3) An All-New Fire TV Experience to Improve Content Discovery
Fire TV’s most significant experience update is redesigned to offer a more intuitive, simple, and customized experience for your customers. The Main Menu is at the center of the screen and makes it easy to find content. Customers can now jump into their favorite streaming service directly, or scroll over supported apps to quickly peek at what’s inside and begin playback. A brand-new Find experience makes it easier than ever to discover great movies, TV shows, and more, with browsing capabilities that allow for broad and specific searches based on genres (e.g. comedies, action), helpful categories (e.g. free, sports, Live TV), and more. The redesigned Fire TV experience will begin rolling out globally later this year, starting with the new Fire TV Stick and Fire TV Stick Lite. Additional devices will follow through early 2021; exact features will vary by country. The new Fire TV experience will be available in Australia, Brazil, Canada, France, Germany, India, Italy, Japan, Mexico, Spain, the United Kingdom, and the United States.
4) Fire TV Cube is Now Available in France, Italy and Spain
Today, we’re excited to announce the Fire TV Cube is now available in France, Italy and Spain. Fire TV Cube features a hexa-core processor, and is more than twice as powerful as the first generation Fire TV Cube. The Fire TV Cube delivers a fast, fluid experience, with support for Dolby Vision and 4K Ultra HD content at up to 60 frames per second.
5) Introducing Luna – Amazon’s New Cloud Gaming Service
Starting today, customers in the U.S. can request early access to Amazon Luna, a new cloud gaming service designed for instant play. With Luna and the incredible scale and capability of Amazon Web Services (AWS), it’s easy to stream high-quality, immersive games. Players can enjoy Luna games on their favorite devices without lengthy downloads or updates, expensive hardware or complicated configuration. They can even start playing on one screen and seamlessly pick up and continue on another. Luna will launch with native apps on Fire TV, PC, and Mac, and web apps for mobile play on iOS, with Android coming soon. Read more about Luna here.
We’re incredibly excited to bring these new devices and experiences to customers later this year. This is a great time to publish your app on Amazon Fire TV. Learn how you can build apps for Amazon Fire TV Devices here.
The Amazon Appstore Team
Starting today, Amazon Aurora PostgreSQL supports the pglogical extension. pglogical is an open-source PostgreSQL extension that helps customers replicate data between independent Aurora PostgreSQL databases while maintaining consistent read-write access and a mix of private and common data in each database. The extension uses logical replication to copy data changes between independent Aurora PostgreSQL databases, optionally resolving conflicts based on standard algorithms. Customers can enable pglogical from within their Aurora PostgreSQL instances and pay only for the additional clusters and cross-region traffic needed, with no upfront costs or software purchases required. Fully integrated, pglogical requires no triggers or external programs; this alternative to physical replication is a highly efficient way to replicate data using a publish/subscribe model for selective replication.
You can now create Amazon Aurora database clusters with up to 128TB of storage. The new storage limit is available for both the MySQL- and PostgreSQL-compatible editions of Amazon Aurora. Previously, Aurora database instances supported 64TB of storage.
You can now use AWS Lake Formation in the Europe (Milan) region.
You now can restore Amazon DynamoDB table backups as new tables in the Africa (Cape Town), Asia Pacific (Hong Kong), Europe (Milan), and Middle East (Bahrain) Regions
You can use Amazon DynamoDB backup and restore to create on-demand and continuous backups of your DynamoDB tables—and then restore from those backups. You also can restore DynamoDB table backups as new tables in other AWS Regions. Starting today, you can restore table backups as new tables in the Africa (Cape Town), Asia Pacific (Hong Kong), Europe (Milan), and Middle East (Bahrain) Regions.
AWS Training and Certification has launched a new course with three programming language options: Building Modern Java Applications on AWS, Building Modern Node.js Applications on AWS, and Building Modern Python Applications on AWS.