You can now publish logs from your Amazon RDS for SQL Server databases to Amazon CloudWatch Logs. Supported logs include the agent and error logs. Publishing these logs to CloudWatch lets you maintain continuous visibility into the activity and errors of your databases. For example, customers can configure CloudWatch alarms to be notified of frequent restarts recorded in the error log. Similarly, customers can create alarms for errors or warnings recorded in the SQL Server Agent logs that are related to their SQL Agent jobs.
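As a rough illustration of the restart alarm mentioned above, the sketch below uses boto3 to create a metric filter on the exported error log group and an alarm on the resulting metric. The log group name, the filter pattern, and the SNS topic ARN are assumptions for illustration only; adjust them to your own instance and notification setup.

```python
import boto3

logs = boto3.client("logs")
cloudwatch = boto3.client("cloudwatch")

# Assumed log group created when the RDS error log is exported to CloudWatch Logs.
log_group = "/aws/rds/instance/my-sqlserver-db/error"

# Count error-log lines that mention a restart (pattern is illustrative).
logs.put_metric_filter(
    logGroupName=log_group,
    filterName="SqlServerRestarts",
    filterPattern="restart",
    metricTransformations=[{
        "metricName": "SqlServerRestartCount",
        "metricNamespace": "Custom/RDS",
        "metricValue": "1",
    }],
)

# Alarm if more than 3 restarts are logged within 15 minutes.
cloudwatch.put_metric_alarm(
    AlarmName="rds-sqlserver-frequent-restarts",
    Namespace="Custom/RDS",
    MetricName="SqlServerRestartCount",
    Statistic="Sum",
    Period=900,
    EvaluationPeriods=1,
    Threshold=3,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],  # placeholder topic
)
```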
Amazon EC2 Auto Scaling, Application Auto Scaling, and AWS Auto Scaling now support AWS PrivateLink
You can now access Amazon EC2 Auto Scaling, Application Auto Scaling, and AWS Auto Scaling (scaling plans) from within your Amazon Virtual Private Cloud (VPC), as these auto scaling services now support AWS PrivateLink. With AWS PrivateLink, you can privately access the auto scaling services from your VPC without using public IPs and without traffic traversing the internet.
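A minimal sketch of creating an interface VPC endpoint for EC2 Auto Scaling with boto3 follows. The service name shown and the VPC, subnet, and security group IDs are placeholders; Application Auto Scaling and scaling plans have their own service names.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create an interface endpoint so Auto Scaling API calls stay inside the VPC.
response = ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",                        # placeholder VPC
    ServiceName="com.amazonaws.us-east-1.autoscaling",     # placeholder region/service
    SubnetIds=["subnet-0123456789abcdef0"],                # placeholder subnet
    SecurityGroupIds=["sg-0123456789abcdef0"],             # placeholder security group
    PrivateDnsEnabled=True,  # resolve the standard endpoint name to private IPs
)
print(response["VpcEndpoint"]["VpcEndpointId"])
```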
Starting today, Amazon Redshift adds support for materialized views in preview. Materialized views provide significantly faster query performance for repeated and predictable analytical workloads such as dashboarding, queries from business intelligence (BI) tools, and ELT (Extract, Load, Transform) data processing.
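As a quick illustration, and assuming a hypothetical `sales` fact table plus a standard PostgreSQL driver connection to the cluster, a materialized view backing a dashboard query might look like the sketch below.

```python
import psycopg2

# Connection details are placeholders for your own cluster.
conn = psycopg2.connect(
    host="my-cluster.abc123xyz.us-east-1.redshift.amazonaws.com",
    port=5439,
    dbname="analytics",
    user="awsuser",
    password="example-password",
)
conn.autocommit = True

with conn.cursor() as cur:
    # Precompute a daily revenue rollup used by dashboards.
    cur.execute("""
        CREATE MATERIALIZED VIEW daily_revenue AS
        SELECT order_date, SUM(amount) AS revenue
        FROM sales
        GROUP BY order_date;
    """)

    # Refresh on a schedule (or after each ELT load) to pick up new base data.
    cur.execute("REFRESH MATERIALIZED VIEW daily_revenue;")

    # Dashboards query the view instead of scanning the base table.
    cur.execute("SELECT * FROM daily_revenue ORDER BY order_date DESC LIMIT 30;")
    rows = cur.fetchall()
```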
Customers can now connect Azure Active Directory to AWS Single Sign-On (SSO) once, manage permissions to AWS centrally in AWS SSO, and enable users to sign in using Azure AD to access assigned AWS accounts and applications. This makes it easier for administrators to grant access to their existing users and groups, and gives users the convenience of the sign-in experience they know from Office 365, with single-click access to assigned AWS accounts.
Azure Media Services provides a platform with which you can broadcast live events. You can use our APIs to ingest, transcode, and dynamically package and encrypt your live video feeds for delivery via industry-standard protocols like HTTP Live Streaming (HLS) and MPEG-DASH. You can also use our APIs to integrate with CDNs and deliver to millions of concurrent viewers. Customers are using this platform for scenarios ranging from multi-day sporting events and entire seasons of professional sports, to webinars and town-hall meetings.
Live transcription is a new preview feature in our v3 APIs that lets you enhance the streams delivered to your viewers with machine-generated text transcribed from spoken words in the audio feed. You can enable this feature for any type of Live Event that you create in our service, including pass-through Live Events, where you configure a live encoder upstream to generate and push a multi-bitrate live feed into the service (visualized in the diagram below).
Figure 1. Schematic diagram for live transcription
When a live contribution feed is sent to the service, it extracts the audio signal, decodes it, and calls the Azure Cognitive Services speech-to-text APIs to get the speech transcribed. The resulting text is then packaged into formats that are suitable for delivery via streaming protocols. For the HTTP Live Streaming (HLS) protocol with media packaged into MPEG Transport Stream (TS) fragments, the text is packaged into WebVTT fragments. For delivery via MPEG-DASH or HLS with CMAF, the text is wrapped in IMSC1.1-compatible TTML and then packaged into MPEG-4 Part 30 (ISO/IEC 14496-30) fragments.
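To sketch how this option might be enabled, the example below creates a pass-through Live Event with a transcription language via the Azure Resource Manager REST API from Python. The resource identifiers are placeholders, and the exact preview api-version and payload shape should be checked against the current Media Services documentation; treat this as an illustration rather than a definitive call.

```python
import requests
from azure.identity import DefaultAzureCredential

# Placeholder identifiers for illustration.
subscription = "00000000-0000-0000-0000-000000000000"
resource_group = "my-rg"
account = "mymediaaccount"
live_event = "transcribed-event"

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token

url = (
    f"https://management.azure.com/subscriptions/{subscription}"
    f"/resourceGroups/{resource_group}/providers/Microsoft.Media"
    f"/mediaservices/{account}/liveEvents/{live_event}"
    "?api-version=2019-05-01-preview"  # assumed preview api-version; verify in the docs
)

body = {
    "location": "West US 2",
    "properties": {
        "input": {"streamingProtocol": "RTMP"},
        # Assumed shape: one transcription language for the audio feed.
        "transcriptions": [{"language": "en-US"}],
    },
}

resp = requests.put(url, json=body, headers={"Authorization": f"Bearer {token}"})
resp.raise_for_status()
```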
You can use Azure Media Player (version 2.3.3 or newer) to play the video and display the text on a wide variety of browsers and devices. You can also play back the streams on the iOS native player. If you're building an app for Android devices, playback of transcriptions has been verified with NexPlayer; you can contact them to request a demo.
Figure 2. Display of live transcription on Azure Media Player
The live transcription feature is now available in preview in the West US 2 region. Read the full article here to learn how to get started with this preview feature.
Congratulations! You’ve decided to go with Google Kubernetes Engine (GKE) as your managed container orchestration platform. Your first order of business is to familiarize yourself with Kubernetes architecture, functionality and security principles. Then, as you get ready to install and configure your Kubernetes environment (on so-called day one), here are some security questions to ask yourself, to help guide your thinking.
How will you structure your Kubernetes environment?
What is your identity provider service and source of truth for users and permissions?
How will you manage and restrict changes to your environment and deployments?
Are there GKE features that you want to use that can only be enabled at cluster-creation time?
Ask these questions before you begin designing your production cluster, and take them seriously, as it’ll be difficult to change your answers after the fact.
Structuring your environment
As soon as you decide on Kubernetes, you face a big decision: how should you structure your Kubernetes environment? By environment, we mean your workloads and their corresponding clusters and namespaces, and by structure we mean what workload goes in what cluster, and how namespaces map to teams. The answer, not surprisingly, depends on who’s managing that environment.
If you have an infrastructure team to manage Kubernetes (lucky you!), you’ll want to limit the number of clusters to make it easier to manage configurations, updates and consistency. A reasonable approach is to have separate clusters for production, test, and development.
Separate clusters also make sense for sensitive or regulated workloads that have substantially different levels of trust. For example, you may want to use controls in production that would be disruptive in a development environment. If a given control doesn't apply broadly to all your workloads, or would slow down some development teams, segment those workloads into separate clusters and give each dev team or service its own namespace within a cluster.
If there's no central infrastructure team managing Kubernetes, and it's more "every team for itself," then each team will typically run its own cluster. This means more work for each team and more responsibility for enforcing minimum standards, but also much more control over which security measures they implement, including upgrades.
Setting up permissions
Most organizations use an existing identity provider, such as Google Identity or Microsoft Active Directory, consistently across the environment, including for workloads running in GKE. This allows you to manage users and permissions in a single place, avoiding potential mistakes like accidentally over-granting permissions, or forgetting to update permissions as users’ roles and responsibilities change.
What permissions should each user or group have in your Kubernetes environment? How you set up your permission model is strongly tied to how you segmented your workloads. If multiple teams share a cluster, you’ll need to use Role-Based Access Control (RBAC) to give each team permissions in their own namespaces (some services automate this, providing a self-service way for a team to create and get permissions for its namespace). Thankfully, RBAC is built into Kubernetes, which makes it easier to ensure consistency across multiple clusters, including different providers. Here is an overview of access control in Google Cloud.
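As a minimal sketch of namespace-scoped RBAC, the snippet below uses the official Kubernetes Python client to bind a hypothetical team group to the built-in "edit" ClusterRole inside that team's namespace only; the group name and namespace are placeholders.

```python
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod
rbac = client.RbacAuthorizationV1Api()

# Grant the "team-a" group edit rights, but only within the "team-a" namespace.
role_binding = {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "RoleBinding",
    "metadata": {"name": "team-a-edit", "namespace": "team-a"},
    "subjects": [{
        "kind": "Group",
        "name": "team-a@example.com",  # placeholder identity-provider group
        "apiGroup": "rbac.authorization.k8s.io",
    }],
    "roleRef": {
        "kind": "ClusterRole",
        "name": "edit",  # built-in role, scoped to the namespace by this binding
        "apiGroup": "rbac.authorization.k8s.io",
    },
}

rbac.create_namespaced_role_binding(namespace="team-a", body=role_binding)
```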
Deploying to your Kubernetes environment
In some organizations, developers are allowed to deploy directly to production clusters. We don’t recommend this. Giving developers direct access to a cluster is fine in test and dev environments, but for production, you want a more tightly controlled continuous delivery pipeline. With this in place, you can set up steps to run tests, ensure that images meet your policies, scan for vulnerabilities, and finally, deploy your images. And yes, you really should set up these pipelines on day one; it’s hard to convince developers who have always deployed to production to stop doing so later on.
Having a centralized CI/CD pipeline in place lets you put additional controls on which images can be deployed. The first step is to consolidate your container images into a single registry such as Container Registry, typically one per environment. Users can push images to a test registry; once tests pass, the images are promoted to the production registry, from which they can be deployed to production.
We also recommend that you only allow service accounts (not people) to deploy images to production and make changes to cluster configurations. This lets you audit service account usage as part of a well-defined CI/CD pipeline. You can still give someone access if necessary, but in general it’s best to follow the principle of least privilege when granting service account permissions, and ensure that all administrative actions are logged and audited.
Features to turn on from day one
A common day-one misstep is failing to enable, at cluster-creation time, security features that you might need down the road, because you'll have to migrate your cluster once it's up and running to turn them on. Some GKE security features, such as private clusters and Google Groups for GKE, aren't turned on by default and can't be turned on in an existing cluster. Rather than trying to make a cluster you've used to experiment with these different features production-ready, a better plan is to create a test cluster, make sure its features work as intended, resolve issues, and only then create a real cluster with your desired configuration.
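For instance, a private cluster has to be requested when the cluster is created. The sketch below uses the google-cloud-container Python client to do so; the project, location, and CIDR values are placeholders, and a real private cluster typically also needs VPC-native networking and other settings, so treat this as an outline rather than a complete configuration.

```python
from google.cloud import container_v1

gke = container_v1.ClusterManagerClient()

cluster = container_v1.Cluster(
    name="prod-cluster",
    initial_node_count=3,
    # Private clusters can only be enabled at creation time.
    private_cluster_config=container_v1.PrivateClusterConfig(
        enable_private_nodes=True,       # nodes get internal IPs only
        enable_private_endpoint=False,   # keep a public control-plane endpoint
        master_ipv4_cidr_block="172.16.0.0/28",  # placeholder control-plane range
    ),
)

operation = gke.create_cluster(
    parent="projects/my-project/locations/us-central1-a",  # placeholder project/location
    cluster=cluster,
)
print(operation.name)
```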
As you can see, there’s a lot to keep in mind when setting up GKE. Keep up to date with the latest advice and day two to-dos with the GKE hardening guide.
Editor’s note: David Grasty is the Corporate Head of Digital Transformation at two councils located in Southwest London. The councils employ roughly 5,000 council workers across 114 sites to keep local services operating for a combined 400,000 residents. This is how Grasty helped his IT team operate more efficiently, while giving his workers a more modern work experience using G Suite and Chrome Enterprise.
About six years ago, our Kingston and Sutton councils began sharing IT services—a step that helped us save money and reduce our IT workload. The process also provided our employees with a more modern work experience, so they can keep local services for borough residents up and running, like libraries, hospitals, schools, sustainable transportation and environmental health programs.
Using technology to help teams collaborate
We knew we wanted to replace our Windows 7 computers with cloud-based technology that allowed our employees to work together without interruption, including everyone from social workers to hospital staff, so we chose G Suite. We replaced our legacy email solution, which ran on each borough's individual servers, and transitioned to Gmail with help from CloudMigrator, a tool offered by our Google Partner, Cloud Technology Solutions. After our council employees got used to working in Gmail, which many already knew from personal use, they began to work in other G Suite apps. Our work habits started to change.
In one meeting, we discussed areas of responsibility for different council organizations. I created a spreadsheet in Google Sheets to collaborate on ideas, and others jumped into the document to add their own. We'd never have made progress that quickly if we'd shared documents back and forth as email attachments. Similarly, it became second nature for people to connect face-to-face in video meetings in Meet or to send messages about projects in Chat. It sounds like such a simple change, but being able to attend meetings from home or from different administrative buildings has saved people thousands of hours of commuting and walking time.
Providing flexible devices to help “sitters,” “walkers,” and “runners”
Our goal as an IT team is to be seen as more than just "wires and Wi-Fi." Instead, we want to be integral partners for digital transformation. With G Suite technology in place, we were primed to offer flexible work options for employees, like shared desks or the ability to work from home. That said, many employees work in historic buildings that can't easily be renovated, so the devices we chose needed to be flexible: quick to install and usable from anywhere.
We rolled out 3,800 Chromebooks in 80 locations, followed by about 1,700 Chromeboxes that replaced PCs. It was simple to deploy Chrome devices—I don’t think we could have rolled out the same number of Windows laptops in only four months. We knew that Chromebooks and Chromeboxes would work right out of the box in just a few minutes, with the correct policies applied.
To ensure success with the rollout, we carefully matched device types to each worker depending on their role and preference, classifying workers as: “sitters,” “walkers,” or “runners.”
- Sitters have assigned desks, so they received Chromeboxes with large displays that help them get their work done.
- Walkers work at their desks but like the freedom to work from home, so they received Acer Chromebooks, which are more portable.
- Runners frequently travel throughout the boroughs and offices, so they received Acer Spin Chromebooks, which can convert into tablets for presentations. The Acer Spin devices helped people in the field connect more easily with local residents, who might not be able to visit council offices.
With Chromebooks and G Suite, we're not tied down to particular offices and data centers, because just about every application we use is web-based or is a G Suite app. If we need to use legacy apps, such as council tax and planning systems, we can access them through Chrome's Legacy Browser Support. It lets us open, directly from Chrome, just the legacy apps that require an older browser, limiting the time we spend in less secure browsers without holding up the work we need to get done in those applications.
What I also particularly love about Chrome devices is that they require very little administration. With Chrome Enterprise Upgrade they’re secure and manageable right out of the box: we simply set policies within the Google Admin console, and from there we can track device usage, choose network settings, and even lock down devices.
Evolving workspaces to support modern collaboration
In our two main buildings, we’ve turned very traditional offices into flexible spaces where workers can set up their Chromebook at any open space. We have fewer desks now; many people work from home and join video meetings.
When I see employees working efficiently in the cloud, instead of pushing bits of paper around, I’m confident we’ll have greater impact on Kingston and Sutton residents.
Technology has played a key role in retail for decades, from early innovations like barcode scanning and digital point of sale devices, to the global frontier of modern logistics. Through it all, however, the fundamentals remain the same: retailers generate huge quantities of data, face unpredictable environments, and need to continually adapt to the ever-evolving needs of the customer. Throw in the chaos of Black Friday and Cyber Monday, and you’ve got one of the most complex enterprise challenges in the world.
It’s also a challenge tailor-made for AI: a technology that thrives on big data, adapts to change fluidly, and can deliver personalized experiences at scale. With the holiday rush upon us, let’s take a look at how two Cloud AI customers—3PM for online shoppers and Tulip for in-store—are helping make retail more efficient, more personal, and more trustworthy.
Tulip is helping brands across the world bring the flexibility and personalization of e-commerce to their in-store experiences. Online, 3PM continuously tracks millions of sellers across a range of e-commerce marketplaces, helping to turn the tide against predatory practices like counterfeit products and trademark infringement.
3PM: Safeguarding online marketplaces at a global scale
Trust is the foundation of every retail experience, and that’s especially true online. With the proliferation of online marketplaces like Amazon, eBay, and Walmart.com, however, trademarks, copyrighted content, and other brand assets are often spread across too many places to be effectively monitored.
Particularly disconcerting is the fast-growing world of counterfeit products. It's not just knock-off sneakers and handbags, either. Fraudulent supplements, prescription drugs, and even baby food are readily available online, presented in convincing detail intended to fool customers, and they can pose a danger to consumer health. Small merchants and global brands alike have found it difficult to contain counterfeiting, largely due to its decentralized nature, which calls for a solution that operates outside the marketplaces themselves.
3PM Solutions saw an opportunity to help. By combining the power of advanced analytics with data at a global scale, 3PM’s suite of tools can detect counterfeit goods automatically, monitor a brand’s reputation over time, and help the brand understand its customers more deeply.
But getting such an ambitious vision off the ground presented some significant technical challenges for 3PM. Online marketplaces routinely change the format and structure of their listings, quickly confounding hand-written rules and filters. To make matters worse, the content within those listings is notoriously unreliable. For example, counterfeiters often intentionally misspell brand and product names to keep their goods under the radar. It’s a level of complexity that calls for a particularly flexible solution that’s capable of ingesting massive quantities of data, while also evolving as the nature of that data changes.
These challenges prompted 3PM to migrate to Google Cloud Platform, bringing the company’s data and infrastructure—and, more importantly, a state-of-the-art AI toolkit—into a single environment.
Google Cloud’s flexibility helped 3PM implement a creative, agile development process. The company’s developers designed a TensorFlow-based image classifier and trained it on billions of examples, forming the basis of a self-serve tool that lets brands accurately detect improper use of product photography, logos, and other trademarks. They built custom machine-learning models to intelligently analyze product listings. These models can look past the basics like image and title to incorporate a wide range of data points to detect subtle features correlated with fraud that rule-based systems—not to mention humans—would miss. 3PM even used the Cloud Translate API to transcend language barriers automatically.
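To illustrate that last point, a translation call of the kind 3PM might make is sketched below using the Cloud Translation client library for Python; the listing text is made up, and this is not 3PM's actual pipeline.

```python
from google.cloud import translate_v2 as translate

client = translate.Client()

# A hypothetical product-listing snippet in another language.
listing_text = "Zapatillas deportivas 100% originales, envío rápido"

result = client.translate(listing_text, target_language="en")
print(result["detectedSourceLanguage"])  # e.g. "es"
print(result["translatedText"])          # English text for downstream analysis
```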
Tulip: Bringing digital personalization to the in-store experience
Of course, brick-and-mortar remains fundamental to the identity of countless brands, with 80% of all sales still taking place in physical stores. Nevertheless, the speed, flexibility, and extreme personalization of e-commerce are influencing customer expectations everywhere, even when shopping in person, and retailers are scrambling to keep up.
Tulip helps retailers keep up with these demands with a suite of powerful mobile apps that gives retail workers the power of the digital world anywhere in their store, whether they’re looking up products, managing customer information, checking out shoppers, or communicating with customers. Tulip helps physical stores establish deeper relationships with their patrons based on their preferences, behaviors, and purchases—just as they would online—and it’s changing the way global brands do business.
A major challenge in any retail application is forecasting. Whether it’s an unexpected fashion craze or an annual event like Black Friday, retail’s surges and lulls can make traditional allocation of compute resources extremely challenging.
“Because we had to scale for peak demand, we had to buy capacity up front, which sat idle much of the time when sales demand was lower,” explains Jeff Woods, director of software for infrastructure at Tulip. “It became difficult and expensive. We were constantly asking the vendor to waive arbitrary limits. We had to use massive instances, and it was difficult to scale down.”
After migrating to Google Cloud, Tulip could deploy on an infrastructure capable of scaling to any size at a moment's notice, and only pay for what they used. In the process, they also gained access to some of the world's most advanced machine learning technologies. Now, with their data, infrastructure, and AI tools in one place, the stage was set for Tulip to build an entirely new level of intelligence into their solutions.
Tulip’s solutions use a set of custom TensorFlow models running on AI Platform to identify customer insights and sales opportunities based on data from a customer’s in-store mobile applications. This drives recommendations on when to connect with customers and how to engage them with highly personal and relevant communications.
Tulip’s solution is a textbook example of what makes Deployed AI so powerful: using previously unseen patterns in large quantities of data to solve a clearly defined business challenge, all at the speed of retail. “Every day, Tulip collects millions of data points from customer interactions across its channels,” says Ali Asaria, Tulip’s founder and CEO. “By integrating Google machine learning and big data products into our core platform, we can now use that data to provide intelligent insights and recommendations to retail associates.”
Just a few years ago, AI seemed too expensive and complex for companies like 3PM and Tulip. In both cases, however, moving to Google Cloud has demonstrated this technology’s affordability, interoperability, and ease of use. And the results have been transformative.
Whether the crowds are in stores or online, companies like Tulip and 3PM are demonstrating the power—and sometimes, the necessity—of using AI to make every retail interaction safer and more engaging. It’s another example of Deployed AI in action: using state-of-the-art technology to overcome age-old business challenges.
New Amazon CloudWatch Contributor Insights for Amazon DynamoDB (Preview) helps you identify frequently accessed keys and database traffic trends
Amazon CloudWatch Contributor Insights for Amazon DynamoDB (Preview) is a new diagnostic tool that provides an at-a-glance view of the traffic trends of your DynamoDB table and helps you identify the most frequently accessed keys. Now, you can monitor a table’s item access patterns continuously and also use CloudWatch Contributor Insights to provide graphs and visualizations of the table’s activity. You can use this information to better understand the top drivers of your application’s traffic and respond appropriately to unsuccessful requests.
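Assuming a hypothetical "Orders" table, the sketch below shows how enabling and checking Contributor Insights for DynamoDB might look with boto3.

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Enable Contributor Insights for the table (an IndexName can also be given
# to cover a global secondary index).
dynamodb.update_contributor_insights(
    TableName="Orders",
    ContributorInsightsAction="ENABLE",
)

# Check the current status; the graphs themselves appear in the CloudWatch console.
status = dynamodb.describe_contributor_insights(TableName="Orders")
print(status["ContributorInsightsStatus"])
```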
AWS Managed Services (AMS) has launched support for AWS CloudFormation (CFN) stack updates. You can now make changes to your stack's configuration or its resources, such as providing new input parameter values or an updated template, through the AMS request for change (RFC) process. Submitted changes are validated for safety, and only nondestructive changes are executed automatically. For destructive changes, a change set is provided to you for approval before automated execution.
You can now assign AWS resource tags to Amazon Elastic Inference accelerators. Each tag consists of a key and an optional value, both of which you define. You can use tags to easily organize and identify your resources and create cost allocation reports, among other benefits. You can add or remove resource tags from Elastic Inference accelerators using API, CLI, or SDK.
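A minimal sketch of tagging an accelerator with boto3 is shown below; the accelerator ARN is a placeholder, and the parameter names should be confirmed against the current Elastic Inference API reference.

```python
import boto3

ei = boto3.client("elastic-inference")

# Placeholder accelerator ARN; find yours with describe_accelerators().
accelerator_arn = (
    "arn:aws:elastic-inference:us-east-1:123456789012:"
    "elastic-inference-accelerator/eia-0123456789abcdef0"
)

# Attach cost-allocation tags (keys and values are user-defined).
ei.tag_resource(
    resourceArn=accelerator_arn,
    tags={"team": "ml-platform", "cost-center": "1234"},
)

# List (or later remove) the tags on the accelerator.
print(ei.list_tags_for_resource(resourceArn=accelerator_arn)["tags"])
```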