Passionate former DSC lead Irene inspires others to learn Google technologies with her new podcast and more


Posted by Erica Hanson, Global Program Manager, Google Developer Student Clubs

(Irene (left) and her DSC team from the Polytechnic University of Cartagena; photo prior to COVID-19)

Irene Ruiz Pozo is a former Google Developer Student Club (DSC) Lead at the Polytechnic University of Cartagena in Murcia, Spain. As one of the founding members, Irene has seen the club grow from just a few student developers at her university to hosting multiple learning events across Spain. Recently, we spoke with Irene to understand more about the unique ways in which her team helped local university students learn more about Google technologies.

Real world ML and AR learning opportunities

Irene mentioned two fascinating projects that she had the chance to work on through her DSC at the Polytechnic University of Cartagena. The first was a learning lab that helped students understand how to use 360º cameras and 3D scanners for machine learning.

(A DSC member giving a demo of a 360º camera to students at the National Museum of Underwater Archeology in Cartagena)

The second was a partnership with the National Museum of Underwater Archeology, where Irene and her team created an augmented reality game that let students explore a digital rendition of the museum’s exhibitions.

(An image from the augmented reality game created for the National Museum of Underwater Archeology)

In the above AR experience created by Irene’s team, users can create their own character and move throughout the museum and explore different virtual renditions of exhibits in a video game-like setting.

Hash Code competition and experiencing the Google work culture

One particularly memorable experience for Irene and her DSC was participating in Google’s annual programming competition, Hash Code. As Irene explained, the event allowed developers to share their skills and connect in small teams of two to four programmers. They would then come together to tackle engineering problems like how to best design the layout of a Google data center, create the perfect video streaming experience on YouTube, or establish the best practices for compiling code at Google scale.

(Students working on the Hash Code competition; photo taken prior to COVID-19)

To Irene, the experience felt like a live look at being a software engineer at Google. The event taught her and her DSC team that while programming skills are important, communication and collaboration skills are what really help solve problems. For Irene, the experience truly bridged the gap between theory and practice.

Expanding knowledge with a podcast for student developers

(Irene’s team working with other student developers; photo taken before COVID-19)

After the event, Irene felt that if a true mentorship network was established among other DSCs in Europe, students would feel more comfortable partnering with one another to talk about common problems they faced. Inspired, she began to build out her mentorship program which included a podcast where student developers could collaborate on projects together.

The podcast, which just released its second episode, also highlights upcoming opportunities for students. In the most recent episode, Irene and friends dive into how to apply for Google Summer of Code Scholarships and talk about other upcoming open source project opportunities. Organizing these types of learning experiences for the community was one of the most fulfilling parts of working as a DSC Lead, according to Irene. She explained that the podcast has been an exciting space that allows her and other students to get more experience presenting ideas to an audience. Through this podcast, Irene has already seen many new DSC members eager to join the conversation and collaborate on new ideas.

As Irene now looks out on her future, she is excited for all the learning and career development that awaits her from the entire Google Developer community. Having graduated from university, Irene is now a Google Developer Groups (GDG) Lead – a program similar to DSC, but created for the professional developer community. In this role, she is excited to learn new skills and make professional connections that will help her start her career.

Are you also a student with a passion for code? Then join a local Google Developer Student Club near you.

The Go language turns 10: A Look at Go’s Growth in the Enterprise

Posted by Steve Francia, Go Team

(Go’s gopher mascot)

The Go gopher was created by renowned illustrator Renee French. This image is adapted from a drawing by Egon Elbre.

November 10 marked Go’s 10th anniversary—a milestone that we are lucky enough to celebrate with our global developer community.

The Gopher community will be celebrating Go’s 10th anniversary at conferences such as Gopherpalooza in Mountain View and KubeCon in San Diego, as well as at dozens of meetups around the world.

In recognition of this milestone, we’re taking a moment to reflect on the tremendous growth and progress Go (also known as golang) has made: from its creation at Google and open sourcing, to many early adopters and enthusiasts, to the global enterprises that now rely on Go every day for critical workloads.

New to Go?

Go is an open-source programming language designed to help developers build fast, reliable, and efficient software at scale. It was created at Google and is now supported by over 2100 contributors, primarily from the open-source community. Go is syntactically similar to C, but with the added benefits of memory safety, garbage collection, structural typing, and CSP-style concurrency.

Most importantly, Go was purposefully designed to improve productivity for multicore, networked machines and large codebases—allowing programmers to rapidly scale both software development and deployment.
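As a small, illustrative sketch (not from the Go team's materials), the snippet below uses goroutines and channels, Go's CSP-style concurrency primitives, to fan work out to several workers and collect the results:

package main

import (
    "fmt"
    "sync"
)

func main() {
    jobs := make(chan int)
    results := make(chan int)

    // Four workers read jobs from one channel and send answers to another;
    // the runtime schedules these goroutines across available cores.
    var wg sync.WaitGroup
    for w := 0; w < 4; w++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            for n := range jobs {
                results <- n * n
            }
        }()
    }

    // Feed the work, then close the jobs channel so the workers exit.
    go func() {
        for i := 1; i <= 10; i++ {
            jobs <- i
        }
        close(jobs)
    }()

    // Close results once every worker has finished.
    go func() {
        wg.Wait()
        close(results)
    }()

    for r := range results {
        fmt.Println(r)
    }
}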

Millions of Gophers!

Today, Go has more than a million users worldwide, ranging across industries, experience, and engineering disciplines. Go’s simple and expressive syntax, ease-of-use, formatting, and speed have helped it become one of the fastest growing languages—with a thriving open source community.

As Go’s use has grown, more and more foundational services have been built with it. Popular open source applications built on Go include Docker, Hugo, and Kubernetes. Google’s hybrid cloud platform, Anthos, is also built with Go.

Go was first adopted to support large amounts of Google’s services and infrastructure. Today, Go is used by companies including American Express, Dropbox, The New York Times, Salesforce, Target, Capital One, Monzo, Twitch, IBM, Uber, and MercadoLibre. For many enterprises, Go has become their language of choice for building on the cloud.

An Example of Go In the Enterprise

One exciting example of Go in action is at MercadoLibre, which uses Go to scale and modernize its ecommerce ecosystem and to improve cost-efficiency and system response times.

MercadoLibre’s core API team builds and maintains the largest APIs at the center of the company’s microservices solutions. Historically, much of the company’s stack was based on Grails and Groovy backed by relational databases. However, this large framework with its multiple layers soon ran into scalability issues.

Converting that legacy architecture to Go as a new, very thin framework for building APIs streamlined those intermediate layers and yielded great performance benefits. For example, one large Go service is now able to run 70,000 requests per machine with just 20 MB of RAM.

“Go was just marvelous for us,” explains Eric Kohan, Software Engineering Manager at MercadoLibre. “It’s very powerful and very easy to learn, and with backend infrastructure has been great for us in terms of scalability.”

Using Go allowed MercadoLibre to cut the number of servers they use for this service to one-eighth the original number (from 32 servers down to four), and each server can operate with less power (originally four CPU cores, now down to two CPU cores). With Go, the company eliminated 88 percent of their servers and cut CPU on the remaining ones in half—producing tremendous cost savings.

With Go, MercadoLibre’s build times are three times (3x) faster and their test suite runs an amazing 24 times faster. This means the company’s developers can make a change, then build and test that change much faster than they could before.

Today, roughly half of MercadoLibre’s traffic is handled by Go applications.

“We really see eye-to-eye with the larger philosophy of the language,” Kohan explains. “We love Go’s simplicity, and we find that having its very explicit error handling has been a gain for developers because it results in safer, more stable code in production.”

Visit go.dev to Learn More

We’re thrilled by how the Go community continues to grow, through developer usage, enterprise adoption, package contribution, and in many other ways.

Building off of that growth, we’re excited to announce go.dev, a new hub for Go developers.

There you’ll find centralized information for Go packages and modules, a wealth of learning resources to get started with the language, and examples of critical use cases and case studies of companies using Go.

MercadoLibre’s recent experience is just one example of how Go is being used to build fast, reliable, and efficient software at scale.

You can read more about MercadoLibre’s success with Go in the full case study.

Serverless Mullet Architectures

Business in the front, party in the back. Bring on the mullets!

A 1930s bungalow in Sydney that preserved its historical front facade while radically updating the yard-facing rear of the house. Credit: Dwell.

In residential construction, a mullet architecture is a house with a traditional front but with a radically different — often much more modern — backside where it faces the private yard.

Like the mullet haircut after which the architecture is named, it’s conventional business in the front — but a creative party in the back.

I find the mullet architecture metaphor useful in describing software designs that have a similar dichotomy. Amazon API Gateway launched support for serverless websockets at the end of 2018, and using them with AWS Lambda functions is a great example of a software mullet architecture.

In this case, the “front” is a classic websocket — a long-lived, duplex TCP/IP socket between two systems, established via HTTP.

Classic uses for websockets include enabling mobile devices and web browsers to communicate with backend systems and services in real time, and to enable those services to notify clients proactively — without requiring the CPU and network overhead of repeated polling by the client.

In the classic approach, the “server side” of the websocket is indeed a conventional server, such as an EC2 instance in the AWS cloud.

The serverless version of this websocket looks and works the same on the front — to the mobile device or web browser, nothing changes. But the “party in the back” of the mullet is no longer a server — now it’s a Lambda function.

To make this work, API Gateway both hosts the websocket protocol (just as it hosts the HTTP protocol for a REST API) and performs the data framing and dispatch. In a REST API call, the relationship between the call to the API and API Gateway’s call to Lambda (or other backend services) is synchronous and one-to-one.

Both of these assumptions get relaxed in a websocket, which offers independent, asynchronous communication in both directions. API Gateway handles this “impedance mismatch” — providing the long-lived endpoint to the websocket for its client, while handling Lambda invocations (and response callbacks — more on those later) on the backend.
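To make the Lambda side concrete, here is a minimal handler sketch, assuming Go and the aws-lambda-go library. The $connect, $disconnect, and $default route keys are standard API Gateway websocket behavior, but the handler itself is illustrative rather than the article's actual code:

package main

import (
    "context"
    "fmt"

    "github.com/aws/aws-lambda-go/events"
    "github.com/aws/aws-lambda-go/lambda"
)

// handler receives websocket traffic that API Gateway has already framed and
// dispatched: connects, disconnects, and data frames all arrive as ordinary
// Lambda invocations.
func handler(ctx context.Context, req events.APIGatewayWebsocketProxyRequest) (events.APIGatewayProxyResponse, error) {
    switch req.RequestContext.RouteKey {
    case "$connect":
        // A client completed the HTTP upgrade; its connection ID identifies the socket.
        fmt.Println("connected:", req.RequestContext.ConnectionID)
    case "$disconnect":
        fmt.Println("disconnected:", req.RequestContext.ConnectionID)
    default:
        // A data frame; req.Body carries the message payload.
        fmt.Println("message from", req.RequestContext.ConnectionID, ":", req.Body)
    }
    // Returning 200 tells API Gateway the frame was handled; a synchronous
    // reply body, if any, would be sent back over the socket.
    return events.APIGatewayProxyResponse{StatusCode: 200}, nil
}

func main() {
    lambda.Start(handler)
}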

Here’s a conceptual diagram of the relationships and their communication patterns:

A Serverless Websocket Architecture on AWS

When is a serverless mullet a good idea?

When (and why) is a serverless mullet architecture helpful? One simple answer: Anywhere you use a websocket today, you can now consider replacing it with a serverless backend.

Amazon’s documentation uses a chat relay server between mobile and/or web clients to illustrate one case where a serverless approach can now handle a scenario that historically required servers.

However, there are also interesting “server-to-server” (if you’ll forgive the expression) applications of this architectural pattern beyond long-lived client connections. I recently found myself needing to build a NAT puncher rendezvous service — essentially a simplified version of a STUN server.

You can read more about NAT punching here, but for the purposes of this discussion, what matters is that I had the following requirements:

  1. I needed a small amount of configuration information from each of two different Lambda functions. Let’s call this info a “pairing key” — it can be represented by a short string. For discussion purposes, we’ll refer to the two callers as “left” and “right”. Note that the service is multi-tenanted, so there are potentially a lot of left/right pairs constantly coming and going, each using different pairing keys.
  2. I also needed a small amount of metadata that I can get from API Gateway about the connection itself (basically the source IP as it appears to API Gateway, after any NATting has taken place).
  3. I have to exchange the data from (2) between clients who provide the same pairing key in (1); that is, left gets right’s metadata and right gets left’s metadata. There’s a lightweight barrier synchronization here: (3) can’t happen until both left and right have shown up…but once they have shown up, the service has to perform (3) as quickly as possible.

The final requirement above is the reason a simple REST API backed by Lambda isn’t a great solution: It would require the first arriver to sit in a busy loop, continuously polling the database (Amazon DynamoDB in my case) waiting for the other side to show up.

Repeatedly querying DynamoDB would drive up costs, and we’d be subject to the 30-second maximum integration duration of an API call. Using DynamoDB Streams doesn’t work here, either, as the Lambda they would invoke can’t “talk” to the Lambda instance created by invoking the API. It’s also tricky to use Step Functions — “left” and “right” are symmetric peers here, so neither one knows who should kick off a workflow.

Enter…The Mullet

So what can we do that’s better? Well, left and right aren’t mobile or web clients — they’re Lambdas — but they have a very “websockety” problem. They need to coordinate some data and event timing through an intermediary that can “see” both conversations, and they benefit from a communication channel that can implicitly convey the state of the required barrier synchronization.

The protocol is simple and looks like this (shown with left as the first arrival):

Here we take full advantage of the mullet architecture:

  • Clients arrive (and communicate) asynchronously with respect to one another, but we can also track the progression of the workflow and coordinate them from the “server” — here, a Lambda/Dynamo combo — that tracks the state of each pairing.
  • API Gateway does most of the heavy lifting, including detecting the data frames in the websocket communication and turning them into Lambda invocations.
  • API Gateway model validation verifies the syntax of incoming messages, so the Lambda code can assume they’re well formed, making the code even simpler.

The architecture is essentially the equivalent of a classic serverless “CRUD over API Gateway / Lambda / Dynamo” but with the added benefits of asynchronous, bidirectional communication and lightweight cross-call coordination.
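Here is a rough sketch of the pairing-state side of that combo, assuming aws-sdk-go and a hypothetical pairings table keyed on pairing_key and side; the names are illustrative, not the original service's schema:

package rendezvous

import (
    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/dynamodb"
)

// recordArrival stores one caller's connection metadata under its pairing key
// and returns the peer's record if the other side has already arrived.
func recordArrival(table, pairingKey, side, connectionID, sourceIP string) (map[string]*dynamodb.AttributeValue, error) {
    ddb := dynamodb.New(session.Must(session.NewSession()))

    // Register this side of the pairing.
    _, err := ddb.PutItem(&dynamodb.PutItemInput{
        TableName: aws.String(table),
        Item: map[string]*dynamodb.AttributeValue{
            "pairing_key": {S: aws.String(pairingKey)},
            "side":        {S: aws.String(side)}, // "left" or "right"
            "conn_id":     {S: aws.String(connectionID)},
            "source_ip":   {S: aws.String(sourceIP)},
        },
    })
    if err != nil {
        return nil, err
    }

    // Check whether the other side is already waiting.
    other := "right"
    if side == "right" {
        other = "left"
    }
    out, err := ddb.GetItem(&dynamodb.GetItemInput{
        TableName:      aws.String(table),
        ConsistentRead: aws.Bool(true),
        Key: map[string]*dynamodb.AttributeValue{
            "pairing_key": {S: aws.String(pairingKey)},
            "side":        {S: aws.String(other)},
        },
    })
    if err != nil || len(out.Item) == 0 {
        return nil, err // peer not here yet; it will be unblocked later via the callback path
    }
    return out.Item, nil
}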

One important piece of the puzzle is the async callback pathway. There’s an inherent communication asymmetry when we hook up a websocket to a Lambda.

Messages that flow from client to Lambda are easy to model — API Gateway turns them into the arguments to a Lambda invocation. If that Lambda wants to synchronously respond, that’s also easy — API Gateway turns its result into a websocket message and sends it back to the client after the Lambda completes.

But what about our barrier synchronization? In the sequence chart above, it has to happen asynchronously with respect to left’s conversation. To handle this, API Gateway creates a special HTTPS endpoint for each websocket. Calls to this URL get turned into websocket messages that are sent (asynchronously) back to the client.

In our example, the Lambda handling the conversation with right uses this special endpoint to unblock left when the pairing is complete. This represents more “expressive power” than normally exists when a client invokes a Lambda function.
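Here is a sketch of that callback path, again assuming aws-sdk-go. The per-API callback endpoint is real API Gateway functionality (the connections management API), but notifyPeer and its parameters are illustrative:

package rendezvous

import (
    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/apigatewaymanagementapi"
)

// notifyPeer pushes a message to a connected websocket client asynchronously,
// outside any invocation triggered by that client. The endpoint is the
// per-API callback URL, e.g. https://{api-id}.execute-api.{region}.amazonaws.com/{stage}.
func notifyPeer(endpoint, connectionID string, payload []byte) error {
    sess := session.Must(session.NewSession())
    api := apigatewaymanagementapi.New(sess, aws.NewConfig().WithEndpoint(endpoint))
    _, err := api.PostToConnection(&apigatewaymanagementapi.PostToConnectionInput{
        ConnectionId: aws.String(connectionID),
        Data:         payload,
    })
    return err
}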

Serverless Benefits

The serverless mullet architecture offers all the usual serverless advantages. In contrast to a serverful approach, such as running a (fleet of) STUN server(s), there are no EC2 instances to deploy, scale, log, manage, or monitor, and fault tolerance and scalability come built in.

Also unlike a server-based approach that would need a front end fleet to handle websocket communication, the code required to implement this approach is tiny — only a few hundred lines, most of which is boilerplate exception handling and error checking. Even the JSON syntax checking of the messages is handled automatically.

One caveat to this “all in on managed services” approach is that the configuration has a complexity of its own — unsurprisingly, as we’re asking services like API Gateway, Lambda, and Dynamo to do a lot of the heavy lifting for us.

For this project, my AWS CloudFormation template is over 500 lines (including comments), while the code, including all its error checking, is only 383 lines. A single data point, but illustrative of the fact that configuring the managed services to handle things like data frame syntax checking via an embedded JSON Schema makes for some non-trivial CloudFormation.

However, a little extra complexity in the config is well worth it to gain the operational benefits of letting AWS maintain and scale all that functionality!

Mullets all Around

Serverless continues to expand its “addressable market” as new capabilities and services join the party. Fully managed websockets backed by Lambda is a great step forward, but it’s far from the only example of mullet architectures.

Amazon AppSync, a managed GraphQL service, is another example. It offers a blend of synchronous and asynchronous JSON-based communication channels — and when backed by a Lambda instead of a SQL or NoSQL database, it offers another fantastic mullet architecture that makes it easy to build powerful, queryable APIs, all without the need for servers.

AWS and other cloud vendors continue to look for ways to make development easier, and hooking up serverless capabilities to conventional developer experiences continues to be a rich area for new innovation.

Business in the front, party in the back …

bring on the mullets!



What’s the friction with Serverless?

Three pain points of serverless adoption from The Agile Monkeys

The team at The Agile Monkeys has worked on non-trivial applications with a wide range of technologies for more than a decade — mainly in the retail sector on solutions from e-commerce management to warehouse automation and everything in between. Our engineers are very aware of the enormous challenges of scalability, reliability, and codebase management that many companies face when developing business solutions.

Based on our experience, we’re convinced that serverless is the execution paradigm of the future and that it solves many challenges of modern application development. But we still see friction in the currently available abstractions — and the tooling still makes it hard to take advantage of the true potential of Serverless.

In the past decade, most successful retail companies opted to use pre-built monolithic e-commerce platforms that they customized for their needs. But with the growth of their user base, those platforms can no longer handle peak loads (like Black Friday).

As a result, we’ve repeatedly been involved in “brain-surgery projects” to split their big monolithic codebases into microservices over the past few years. But the architecture change came with new challenges: handling synchronization and communication between services efficiently, and a huge increase in operational complexity.

We started researching Serverless as a potential solution for those challenges two years ago and we saw its tremendous possibilities.

Serverless not only removes most of the operational complexity but, thanks to the many hosted service offerings, it allows us to deprecate part of the glue code required to deal with coordinating services. And for companies that are already selling, it can be smoothly introduced when implementing new features or when a redesign of a service is needed, without affecting the original codebase.

It’s very easy to deploy a lambda function and implement basic use cases; going beyond that requires a lot of knowledge about the cloud service offerings.

But while Serverless is evolving very quickly, it’s still a relatively new idea. Many VPs and Senior Engineers are still hesitant to introduce it into their production systems because of the mindset change and the training it would require for teams that are already working as well-oiled machines.

While it’s very easy to deploy a lambda function and implement basic use cases with the existing tools, going beyond the basics requires a lot of knowledge about the cloud, managed service offerings, and their guarantees and limitations.

Faster time to market?

It’s a common mantra in Serverless forums to say that Serverless means “faster time to market,” and we’ve found this can be true for well-trained teams. However, teams that are just starting the journey may not find it to be the case, and can easily become frustrated and end up dropping Serverless in favor of better-known tools.

Both from our experiences with clients adopting (or rejecting) Serverless, and our own experience releasing applications like Made for Serverless ourselves, we’ve found the following three pain points along the way:

Pain Point #1: Engineers starting with Serverless might need more time than you’d expect to be productive.

It requires a paradigm shift. You have to switch to “the Serverless way,” start thinking in a more event-driven way, and resist the temptation to build the same kind of backend code you always have but deploy it on lambda.

Serverless is still in a relatively early stage, so we don’t have well-known and established application-level design patterns in the same way that we have MVC for classic database-driven applications.

For instance, if you google “CQRS in AWS” you’ll find half a dozen articles with half a dozen different designs, all of them valid under certain circumstances and offering different guarantees.

As tools are under heavy development, new utilities that look amazing in demos and getting-started guides may have more bugs and hidden limitations than we’d like to admit, requiring some trial, error, and troubleshooting (oh! the price of being on the cutting edge of technology).

Pain Point #2: You definitely need cloud knowledge to succeed.

The existing frameworks provide handy abstractions to significantly reduce the configuration burden, but you still have to know what you’re doing and understand the basics of roles, managed cloud services, and lambdas in order to build anything non-trivial. You need to pick the right services and configure them properly, which requires a lot of knowledge beyond lambda functions.

We see a trend in current serverless frameworks toward providing higher-level abstractions and building blocks. But when it’s time to build an application, we miss an experience like Ruby on Rails or Spring Boot, which helps developers write business logic and provides some basic structure for their programs.

Existing tools are optimizing the process of configuring the cloud, but a team can’t safely ignore that to fully focus on modeling their domain.

In a sense, we’re still at a point where the tools are optimizing the process of configuring the cloud (and they’re doing great work there!), but we haven’t yet reached the point where a team can safely forget about that and focus on modeling their domain.

Pain Point #3: Functions are actually a pretty low-level abstraction.

I know this might be a hot take, but for us, functions are a very low-level abstraction that might make it challenging to properly architect your project as your services grow.

When you’re starting with Serverless, the idea of splitting your code into small, manageable functions is compelling. But since there are no clear guides on properly architecting the code in a lambda function, we rely on engineers to manage this every time.

And while more experienced engineers will figure out solutions, less experienced ones might find this difficult. In any case, moving from one project to another will require reinventing the wheel, because there are no well-established conventions.

Identifying the challenges is just the first step to improvement. We strongly believe in a Serverless future where everyone is using this technology, because it’s what makes sense from a business perspective (companies need to focus on what makes them special and externalize everything else).

So what do we think is needed to get to that point?

Our innovation team is working on some ideas that we will share at Serverlessconf NYC. Stay tuned for our next article in the series, which we will publish during the event!

This is a guest article written by The Agile Monkeys’ innovation team: Javier Toledo, Álvaro López Espinosa and Nick Tchayka, with reviews, ideas and contributions from many other people in our company. Thank you, folks!



The Rise of the Serverless Architect

The focus has expanded to the entire application lifecycle

Over the last 4 years of developing the Serverless Framework, and working closely with many of the thousands of organizations that use it to build their applications, I’ve had the good fortune of watching the movement evolve considerably.

In the early days I met countless pioneering developers who were utilizing this new architecture to build incredible things, despite the considerable challenges and relatively crude tooling that existed.

I also worked with many of these early pioneers to convince their organizations to go all-in on Serverless, despite the lack of successful case studies and tried and true best practices — often based simply on an internal POC that promised a shorter development cycle and lower total cost of ownership.

As the tooling has evolved, and the case studies have piled up, I’ve noticed that these early Serverless pioneers have forged a new title that is gaining prominence within organizations — that of Serverless Architect.

What is a Serverless Architect?

Early on in the Serverless journey, when we were initially developing the Serverless Framework (in those days known as JAWS), all of the focus was on development and deployment.

It was clear that this new piece of infrastructure called Lambda had some amazing qualities, but how could we as developers actually build something meaningful with it? And seeing as how Lambda is a cloud native service, the question that followed shortly after was: how can we actually deploy these apps in a sane way?

As various solutions to these problems were developed and improved upon, the focus of developers building Serverless applications expanded to the entire application lifecycle, including testing, monitoring and securing their Serverless apps.

The focus of Serverless has expanded to the entire application lifecycle

A Serverless Architect is a developer who takes this lifecycle focused view and often personally owns at least part of every stage of the Serverless Application Lifecycle. They don’t simply write functions — they implement business results while thinking through how the code that delivers those results will be developed, deployed, tested, monitored, and secured.

Why is the Serverless Architect essential?

Serverless architectures are essentially collections of managed services connected by functions. Because of this unique and novel model it’s important that the architect has a deep understanding of the event-driven, cloud native paradigm of the architecture.

The demand for the Serverless Architect is a direct result of the unique nature of this architecture and the Serverless Application Lifecycle that accompanies it. Unlike legacy architectures, these various lifecycle stages are no longer separate concerns handled by separate teams at separate times — but rather a single integrated lifecycle that needs to be addressed in a unified way.

There are a few specific reasons this is the case with Serverless:

  1. Due to the reduced complexity and self-serve nature of the Serverless architecture, developers are more likely to be responsible for the monitoring and security of their applications.
  2. Due to the cloud native nature of the services that make up a Serverless Architecture, develop, deploy, and test stages are naturally more integrated.
  3. Due to the focus on simplicity with Serverless architecture, there’s a stronger desire for fewer tools and more streamlined experiences.

As organizations mature in their Serverless adoption, the demand for these Serverless Architects grows quickly. While one person thinking this way in the early days is often all that is needed to get adoption off the ground, it often takes teams of Serverless Architects to scale to a ‘serverless first’ mindset.

What types of tooling does the Serverless Architect need?

As Serverless continues to grow in adoption and the number of Serverless Architects continues to increase, it’s becoming clear that unified tooling that addresses the entire Serverless Application Lifecycle is going to be increasingly valuable.

Cobbling together multiple complex solutions is antithetical to the whole Serverless mindset — and if that’s what’s required to be successful with Serverless, then something’s gone wrong.

At Serverless Inc. we’re evolving the Serverless Framework to address the complete application lifecycle while maintaining the streamlined developer workflow that our community has grown to love. We’re working hard to ensure that Serverless Architects have the tools they need to flourish and we’re always excited to hear feedback.

Sign up free and let us know what you think.



CloudFormation is an infrastructure graph management service — and needs to act more like it

CloudFormation should represent our desired infrastructure graphs in the way we want to build them

What’s AWS CloudFormation?

As Richard Boyd says, CloudFormation is not a cloud-side version of the AWS SDK. Rather, CloudFormation is an infrastructure-graph management service.

But it’s not clear to me that CloudFormation fully understands this, and I think it should more deeply align with the needs that result from that definition.

Chief among these needs is that CloudFormation resources should be formed around the lifecycle of the right concepts in each AWS service — rather than simply mapping to the API calls provided by those services.

What’s the Issue?

For an example, let’s talk about S3 bucket notifications. If there’s a standard “serverless 101”, it’s image thumbnailing. Basic stuff, right? You have an S3 bucket, and you use bucket notifications to trigger a Lambda that will create the thumbnails and write them back to the bucket.

Any intro-to-serverless demo should show best practices, so you’ll put this in CloudFormation. The best practice for CloudFormation is to never explicitly name your resources unless you absolutely have to — so you never have to worry about name conflicts.

But surprise! You simply can’t trigger a Lambda from an S3 bucket that has a CloudFormation-assigned name. The crux of it is this:

  • Bucket notification configuration is only settable through the AWS::S3::Bucket resource, and bucket notifications check for permissions at creation time. If the bucket doesn’t have permission to invoke the Lambda, creation of that notification config will fail.
  • The AWS::Lambda::Permission resource that grants that permission requires the name of the bucket.
  • If CloudFormation is assigning the bucket name, it’s not available in the stack until the bucket (and its notification configuration) are created.

Thus, you end up with a circular dependency. The AWS-blessed solution, described in several different places, is to hard-code an explicit bucket name on both the Bucket and Permission resources.

This isn’t necessary. If we look at the lifecycle of the pieces involved, we can see that the existence of the bucket should be decoupled from the settings of that bucket.

If we had an AWS::S3::BucketNotification resource that took the bucket name as a parameter, we could create the AWS::S3::Bucket first and provide its name to both the BucketNotification and the Lambda Permission.
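Sketched as template fragments, the decoupled arrangement might look roughly like this; AWS::S3::BucketNotification is hypothetical (it does not exist today), and ThumbnailFunction is just an illustrative logical name:

Bucket:
  Type: AWS::S3::Bucket              # no explicit BucketName needed

InvokePermission:
  Type: AWS::Lambda::Permission
  Properties:
    Action: lambda:InvokeFunction
    FunctionName: !Ref ThumbnailFunction
    Principal: s3.amazonaws.com
    SourceArn: !GetAtt Bucket.Arn

Notification:
  Type: AWS::S3::BucketNotification  # hypothetical, decoupled resource
  DependsOn: InvokePermission
  Properties:
    Bucket: !Ref Bucket
    LambdaConfigurations:
      - Event: "s3:ObjectCreated:*"
        Function: !GetAtt ThumbnailFunction.Arn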

Despite this option, we’re still years into AWS explicitly punting on this issue and telling customers, in official communications, to just work around it.

What about Lambda?

Going back to infrastructure graph representation, let’s talk about Lambda. CloudFormation has traditionally managed the infrastructure onto which applications were deployed. But in a serverless world, the infrastructure is the application.

When I want to do a phased rollout of a new version of a Lambda function, I’m supposed to have a CodeDeploy resource in the same template as my function. I update the AWS::Lambda::Function resource, and CodeDeploy takes care of the phased rollout using a weighted alias—all while my stack is in the UPDATING state.

The infrastructure graph during the rollout, when two versions of the code are deployed at the same time, has no representation within CloudFormation — and that’s a problem.

What if I want this rollout to happen over an extended period of time? What if I want to deploy two versions of a Lambda function to exist alongside each other indefinitely?

The latter is literally impossible to achieve with a single CloudFormation template today. The AWS::Lambda::Version resource publishes what’s in $LATEST, which is what’s set by AWS::Lambda::Function.

Instead, when we have phased rollouts, we should be speaking of deployments, decoupled from the existence of the function itself.

Imagine a resource like AWS::Lambda::Deployment that took the function name, code, and configuration as parameters and published them, with the resulting version number available as an attribute.

Multiple of these resources could be included in the same template without conflicting, and your two deployments could then be wired to a weighted alias for phased rollout. Note: To do this properly, we’d need an atomic UpdateFunctionCodeAndConfiguration API call from the Lambda service.
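In template form, such a hypothetical resource might be wired to a weighted alias roughly like this. AWS::Lambda::Deployment is invented for illustration, while AWS::Lambda::Alias and its routing configuration are real; MyFunction and the S3 locations are placeholder names:

DeploymentV1:
  Type: AWS::Lambda::Deployment      # hypothetical
  Properties:
    FunctionName: !Ref MyFunction
    Code:
      S3Bucket: my-artifacts
      S3Key: my-function-v1.zip
    MemorySize: 256

DeploymentV2:
  Type: AWS::Lambda::Deployment      # hypothetical
  Properties:
    FunctionName: !Ref MyFunction
    Code:
      S3Bucket: my-artifacts
      S3Key: my-function-v2.zip
    MemorySize: 256

LiveAlias:
  Type: AWS::Lambda::Alias           # real resource
  Properties:
    FunctionName: !Ref MyFunction
    Name: live
    FunctionVersion: !GetAtt DeploymentV1.Version
    RoutingConfig:
      AdditionalVersionWeights:
        - FunctionVersion: !GetAtt DeploymentV2.Version
          FunctionWeight: 0.1        # send 10% of traffic to the new deployment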

In this way, CloudFormation could represent the state of the graph during a rollout, not just on either side of it.

What’s the So What?

The important notion here is that a resource’s create/update/delete lifecycle doesn’t need to be mapped directly to create/update/delete API calls. Instead, the resources for a service need to match the concepts that allow coherent temporal evolution of an infrastructure that uses the service.

When this is achieved, CloudFormation can adequately represent our desired infrastructure graphs in the way we want to build them, which will only become more critical as serverless/service-full architecture grows in importance.

Epilogue: New tools like the CDK look to build client-side abstractions on CloudFormation. In general, I’m not a fan of those approaches, for reasons that I won’t detail here. In any case, they will never be fully successful if CloudFormation doesn’t support the infrastructure graph lifecycles that those abstractions need to build upon.



AWS App Mesh Walkthrough

How to Deploy the Color App on Amazon ECS

This is a walkthrough for deploying the Color App that was demonstrated at the AWS App Mesh launch. The new service helps you run HTTP and TCP services at scale, with a consistent way to route and monitor traffic.

The following diagram shows the programming model of this simple application. This is literally the programmer’s perspective of the application:

Figure 1. Programmer perspective of the Color App.

In this post, we’ll walk through creating specific abstract resources for AWS App Mesh that will be used to drive a physical mapping to compute resources to stitch our application together, providing us with fine-grained control over traffic routing and end-to-end visibility of application request traffic and performance.

The following diagram represents the abstract view in terms of App Mesh resources:

Figure 2. App Mesh abstract perspective of the Color App.

Finally, we deploy the services that will comprise our application to ECS along with proxy sidecars for each service task; these proxies will be governed by App Mesh to ensure our application traffic behaves according to our specifications.

Figure 3. Amazon ECS compute perspective of the Color App.

The key thing to note is that the actual routing configuration is completely transparent to the application code. The code deployed to the gateway containers will send requests to the DNS name colorteller.demo.local, which we configure as a virtual service in App Mesh.

App Mesh will push updates to all the envoy sidecar containers to ensure traffic is sent directly to colorteller tasks running on EC2 instances according to the routing rules we specify through App Mesh configuration.

There are no physical routers at runtime since App Mesh route rules are transformed to Envoy configuration and pushed directly to the envoy sidecars within the dependent tasks.

Here’s what we’ll cover in this walkthrough …

  1. Overview
  2. Prerequisites
  3. Deploy infrastructure for the application
    . . . Create the VPC and other core Infrastructure
    . . . Create an App Mesh
    . . . Create compute resources
    . . . Review
  4. Deploy the application
    . . . Configure App Mesh resources
    . . . Deploy services to ECS
    . . . . . . Deploy images to ECR for your account
    . . . . . . Deploy gateway and colorteller services
    . . . . . . Test the application
  5. Shape traffic
    . . . Apply traffic rules
    . . . Monitor with AWS X-Ray
  6. Review
  7. Summary
  8. Resources

1. Overview

This brief guide will walk you through deploying the Color App on ECS. The process has been automated using shell scripts and AWS CloudFormation templates to make deployment straightforward and repeatable.

Core networking and compute infrastructure doesn’t need to be recreated each time the Color App is redeployed. Since this can be time-consuming, resource provisioning is divided among a layered set of CloudFormation stack templates.

The App Mesh deployment is also partitioned into different stages, but this is not for performance reasons, since App Mesh operations are very fast. The separation simply lets you tear down the Color App without tearing down the demo mesh, in case you also have other sample apps running in it for experimentation.

Infrastructure templates:

  • examples/infrastructure/vpc.yaml – creates the VPC and other core networking resources needed for the application, independent of the specific compute environment (e.g., ECS) provisioned for the cluster.
  • examples/infrastructure/ecs-cluster.yaml – creates the compute instances and other resources needed for the cluster.
  • examples/infrastructure/appmesh-mesh.yaml – creates an App Mesh mesh.

Application resource templates:

  • examples/apps/colorapp/ecs/ecs-colorapp.yaml – deploys application services and related resources for the Color App.
  • examples/apps/colorapp/servicemesh/appmesh-colorapp.yaml – creates mesh resources for the Color App.

Each template has a corresponding shell script with a .sh extension that you run to create the CloudFormation stack. These scripts rely on the following environment variable values, which must be exported before running:

  • AWS_PROFILE – your AWS CLI profile (set to default or a named profile).
  • AWS_DEFAULT_REGION – set to one of the Currently available AWS regions for App Mesh.
  • ENVIRONMENT_NAME – will be applied as a prefix to deployed CloudFormation stack names.
  • MESH_NAME – name to use to identify the mesh you create.
  • SERVICES_DOMAIN – the base namespace to use for service discovery (e.g., cluster.local).
  • KEY_PAIR_NAME – your Amazon EC2 Key Pair.
  • ENVOY_IMAGE – see Envoy Image for latest recommended Docker image (currently: 111345817488.dkr.ecr.us-west-2.amazonaws.com/aws-appmesh-envoy:v1.9.1.0-prod)
  • COLOR_GATEWAY_IMAGE – Docker image for the Color App gateway microservice in ECR.
  • COLOR_TELLER_IMAGE – Docker image for the Color App colorteller microservice in ECR.

See below for more detail and to see where these environment variables are used.

2. Prerequisites

  1. You have version 1.16.124 or higher of the AWS CLI installed.
  2. Your AWS CLI configuration has a default or named profile and valid credentials.
  3. You have an Amazon EC2 Key Pair that you can use to log into your EC2 instances.
  4. You have cloned the github.com/aws/aws-app-mesh-examples repo and changed directory to the project root.
  5. You have jq installed.

3. Deploy infrastructure for the application

Create the VPC and other core Infrastructure

An Amazon Virtual Private Cloud (VPC) is a virtual network that provides isolation from other applications in other networks running on AWS. The following CloudFormation template will be used to create a VPC for our mesh sample applications:

examples/infrastructure/vpc.yaml

Set the following environment variables:

  • AWS_PROFILE – your AWS CLI profile (set to default or a named profile)
  • AWS_DEFAULT_REGION – set to one of the Currently available AWS regions for App Mesh
  • ENVIRONMENT_NAME – will be applied as a prefix to deployed CloudFormation stack names

Run the vpc.sh script to create a VPC for the application in the region you specify. It will be configured for two availability zones (AZs); each AZ will be configured with a public and a private subnet.

You can choose from one of the nineteen Currently available AWS regions for App Mesh. The deployment will include an Internet Gateway and a pair of NAT Gateways (one in each AZ) with default routes for them in the private subnets.

Create the VPC

examples/infrastructure/vpc.sh

$ export AWS_PROFILE=default
$ export AWS_DEFAULT_REGION=us-west-2
$ export ENVIRONMENT_NAME=DEMO
$ ./examples/infrastructure/vpc.sh
...
+ aws --profile default --region us-west-2 cloudformation deploy --stack-name DEMO-vpc --capabilities CAPABILITY_IAM --template-file examples/infrastructure/vpc.yaml --parameter-overrides EnvironmentName=DEMO
Waiting for changeset to be created..
Waiting for stack create/update to complete
...
Successfully created/updated stack - DEMO-vpc
$

Create an App Mesh

A service mesh is a logical boundary for network traffic between the services that reside in it. AWS App Mesh is a managed service mesh control plane. It provides application-level networking support, standardizing how you control and monitor your services across multiple types of compute infrastructure.

The following CloudFormation template will be used to create an App Mesh mesh for our application:

examples/infrastructure/appmesh-mesh.yaml

We will use the same environment variables from the previous step, plus one additional one (MESH_NAME), to deploy the stack.

  • MESH_NAME – name to use to identify the mesh you create (we’ll use appmesh-mesh)

Create the mesh

examples/infrastructure/appmesh-mesh.sh

$ export AWS_PROFILE=default
$ export AWS_DEFAULT_REGION=us-west-2
$ export ENVIRONMENT_NAME=DEMO
$ export MESH_NAME=appmesh-mesh
$ ./examples/infrastructure/appmesh-mesh.sh
...
+ aws --profile default --region us-west-2 cloudformation deploy --stack-name DEMO-appmesh-mesh --capabilities CAPABILITY_IAM --template-file /home/ec2-user/projects/aws/aws-app-mesh-examples/examples/infrastructure/appmesh-mesh.yaml --parameter-overrides EnvironmentName=DEMO AppMeshMeshName=appmesh-mesh
Waiting for changeset to be created..
Waiting for stack create/update to complete
...
Successfully created/updated stack - DEMO-appmesh-mesh
$

At this point we have now created our networking resources (VPC and App Mesh), but we have not yet deployed:

  • compute resources to run our services on
  • mesh configuration for our services
  • actual services

Create compute resources

Our infrastructure requires compute resources to run our services on. The following CloudFormation template will be used to create these resources for our application:

examples/infrastructure/ecs-cluster.yaml

In addition to the previously defined environment variables, you will also need to export the following:

  • SERVICES_DOMAIN – the base namespace to use for service discovery (e.g., cluster.local). For this demo, we will use demo.local. This means that the gateway virtual service will send requests to the colorteller virtual service at colorteller.demo.local.
  • KEY_PAIR_NAME – your Amazon EC2 Key Pair to log into your EC2 instances.

Create the ECS cluster

examples/infrastructure/ecs-cluster.sh

$ export AWS_PROFILE=default
$ export AWS_DEFAULT_REGION=us-west-2
$ export ENVIRONMENT_NAME=DEMO
$ export SERVICES_DOMAIN=demo.local
$ export KEY_PAIR_NAME=tony_devbox2
$ ./examples/infrastructure/ecs-cluster.sh
...
+ aws --profile default --region us-west-2 cloudformation deploy --stack-name DEMO-ecs-cluster --capabilities CAPABILITY_IAM --template-file /home/ec2-user/projects/aws/aws-app-mesh-examples/examples/infrastructure/ecs-cluster.yaml --parameter-overrides EnvironmentName=DEMO KeyName=tony_devbox2 ECSServicesDomain=demo.local ClusterSize=5
Waiting for changeset to be created..
Waiting for stack create/update to complete
...
Successfully created/updated stack - DEMO-ecs-cluster
$

Review

You have provisioned the infrastructure you need. You can confirm in the AWS Console that all of your CloudFormation stacks have been successfully deployed. You should see something like this:

Figure 4. AWS CloudFormation stack deployments.

You can also confirm status with the AWS CLI:

$ aws cloudformation describe-stacks --stack-name DEMO-vpc --query 'Stacks[0].StackStatus'
"CREATE_COMPLETE"
$ aws cloudformation describe-stacks --stack-name DEMO-appmesh-mesh --query 'Stacks[0].StackStatus'
"CREATE_COMPLETE"
$ aws cloudformation describe-stacks --stack-name DEMO-ecs-cluster --query 'Stacks[0].StackStatus'
"CREATE_COMPLETE"

4. Deploy the application

Now that we’ve deployed our infrastructure resources for testing, let’s configure our mesh and finally deploy the Color App.

Configure App Mesh resources

We will now add our mesh resource definitions so that when we finally deploy our services, the mesh will be able to push computed configuration down to each Envoy proxy running as a sidecar for each ECS task. The following CloudFormation template will be used to create these resources for our application:

examples/apps/colorapp/servicemesh/appmesh-colorapp.yaml

We will use the same exported environment variables created previously. No new environment variables are needed.

Create mesh resources

examples/apps/colorapp/servicemesh/appmesh-colorapp.sh

$ export AWS_PROFILE=default
$ export AWS_DEFAULT_REGION=us-west-2
$ export ENVIRONMENT_NAME=DEMO
$ export SERVICES_DOMAIN=demo.local
$ export MESH_NAME=appmesh-mesh
$ ./examples/apps/colorapp/servicemesh/appmesh-colorapp.sh
...
+ aws --profile default --region us-west-2 cloudformation deploy --stack-name DEMO-appmesh-colorapp --capabilities CAPABILITY_IAM --template-file /home/ec2-user/projects/aws/aws-app-mesh-examples/examples/apps/colorapp/servicemesh/appmesh-colorapp.yaml --parameter-overrides EnvironmentName=DEMO ServicesDomain=demo.local AppMeshMeshName=appmesh-mesh
Waiting for changeset to be created..
Waiting for stack create/update to complete
...
Successfully created/updated stack - DEMO-appmesh-colorapp
$

Note: the App Mesh resources for the Color App are created before the app itself is deployed in the final step; this is so Envoy, which is deployed as a task sidecar, is able to communicate with the Envoy Management Service. If the mesh itself isn’t configured first, the sidecar will remain unhealthy and eventually the task will fail.

Deploy services to ECS

Deploy images to ECR for your account

Before you can deploy the services, you will need to deploy the images that ECS will use for gateway and colorteller to ECR image repositories for your account. You can build these images from source under examples/apps/colorapp/src and push them using the provided deploy scripts after you create repositories for them on ECR, as shown below.

Deploy the gateway image:

# from the colorapp repo root...
$ cd examples/apps/colorapp/src/gateway
$ aws ecr create-repository --repository-name=gateway
$ export COLOR_GATEWAY_IMAGE=$(aws ecr describe-repositories --repository-names=gateway --query 'repositories[0].repositoryUri' --output text)
$ ./deploy.sh
+ '[' -z 226767807331.dkr.ecr.us-west-2.amazonaws.com/gateway ']'
+ docker build -t 226767807331.dkr.ecr.us-west-2.amazonaws.com/gateway .
Sending build context to Docker daemon 1MB
Step 1/11 : FROM golang:1.10 AS builder
...
+ docker push 226767807331.dkr.ecr.us-west-2.amazonaws.com/gateway
The push refers to repository [226767807331.dkr.ecr.us-west-2.amazonaws.com/gateway]
latest: digest: sha256:ce597511c0230af89b81763eb51c808303e9ef8e1fbe677af02109d1f73a868c size: 528
$

Deploy the colorteller image:

# from the colorapp repo root....
$ cd examples/apps/colorapp/src/colorteller
$ aws ecr create-repository --repository-name=colorteller
$ export COLOR_TELLER_IMAGE=$(aws ecr describe-repositories --repository-names=colorteller --query 'repositories[0].repositoryUri' --output text)
$ ./deploy.sh
+ '[' -z 226767807331.dkr.ecr.us-west-2.amazonaws.com/colorteller:latest ']'
+ docker build -t 226767807331.dkr.ecr.us-west-2.amazonaws.com/colorteller:latest .
Sending build context to Docker daemon 996.4kB
Step 1/11 : FROM golang:1.10 AS builder
...
+ docker push 226767807331.dkr.ecr.us-west-2.amazonaws.com/colorteller:latest
The push refers to repository [226767807331.dkr.ecr.us-west-2.amazonaws.com/colorteller]
69856c2b3fc6: Layer already exists
latest: digest: sha256:ca16f12268907c32140586e2568e2032f04b95d70b373c00fcee7e776e2d29da size: 528
$

Deploy gateway and colorteller services

We will now deploy our services on ECS. The following CloudFormation template will be used to create these resources for our application:

examples/apps/colorapp/ecs/ecs-colorapp.yaml

In addition to the previously defined environment variables, you will also need to export the following:

  • ENVOY_IMAGE – see Envoy Image for latest recommended Docker image (currently: 111345817488.dkr.ecr.us-west-2.amazonaws.com/aws-appmesh-envoy:v1.9.0.0-prod)
  • COLOR_GATEWAY_IMAGE – Docker image for the Color App gateway microservice (see example below).
  • COLOR_TELLER_IMAGE – Docker image for the Color App colorteller microservice (see example below).

Deploy services to ECS

examples/apps/colorapp/ecs/ecs-colorapp.sh

$ export AWS_PROFILE=default
$ export AWS_DEFAULT_REGION=us-west-2
$ export ENVIRONMENT_NAME=DEMO
$ export SERVICES_DOMAIN=demo.local
$ export KEY_PAIR_NAME=tony_devbox2
$ export ENVOY_IMAGE=111345817488.dkr.ecr.us-west-2.amazonaws.com/aws-appmesh-envoy:v1.9.0.0-prod
$ export COLOR_GATEWAY_IMAGE=$(aws ecr describe-repositories --repository-names=gateway --query 'repositories[0].repositoryUri' --output text)
$ export COLOR_TELLER_IMAGE=$(aws ecr describe-repositories --repository-names=colorteller --query 'repositories[0].repositoryUri' --output text)
$ ./examples/apps/colorapp/ecs/ecs-colorapp.sh
...
Waiting for changeset to be created..
Waiting for stack create/update to complete
...
Successfully created/updated stack - DEMO-ecs-colorapp
$

Test the application

Once we have deployed the app, we can curl the frontend service (gateway). To get the endpoint, run the following code:

$ colorapp=$(aws cloudformation describe-stacks --stack-name=$ENVIRONMENT_NAME-ecs-colorapp --query="Stacks[0].Outputs[?OutputKey=='ColorAppEndpoint'].OutputValue" --output=text); echo $colorapp
http://DEMO-Publi-M7WJ5RU13M0T-553915040.us-west-2.elb.amazonaws.com
$ curl $colorapp/color
{"color":"red", "stats": {"red":1}}

TIP: If you don’t see a newline after curl responses, you might want to use curl -w "\n" or add -w "\n" to $HOME/.curlrc.

5. Shape traffic

Currently, the app equally distributes traffic among the blue, red, and white colorteller virtual nodes through the default virtual router configuration, so if you run the curl command a few times, you might see something similar to this:

$ curl $colorapp/color
{"color":"red", "stats": {"blue":0.33,"red":0.36,"white":0.31}}

In the following section, we’ll walk through how to modify traffic according to rules we set.

Apply traffic rules

Open up examples/apps/colorapp/servicemesh/appmesh-colorapp.yaml in an editor. In the definition for ColorTellerRoute, you will see the spec for an HttpRoute (around line 123):

ColorTellerRoute:
  Type: AWS::AppMesh::Route
  DependsOn:
    - ColorTellerVirtualRouter
    - ColorTellerWhiteVirtualNode
    - ColorTellerRedVirtualNode
    - ColorTellerBlueVirtualNode
  Properties:
    MeshName: !Ref AppMeshMeshName
    VirtualRouterName: colorteller-vr
    RouteName: colorteller-route
    Spec:
      HttpRoute:
        Action:
          WeightedTargets:
            - VirtualNode: colorteller-white-vn
              Weight: 1
            - VirtualNode: colorteller-blue-vn
              Weight: 1
            - VirtualNode: colorteller-red-vn
              Weight: 1
        Match:
          Prefix: "/"

Modify the HttpRoute block of code to look like this:

HttpRoute:
  Action:
    WeightedTargets:
      - VirtualNode: colorteller-black-vn
        Weight: 1
  Match:
    Prefix: "/"

Apply the update:

./examples/apps/colorapp/servicemesh/appmesh-colorapp.sh
...
+ aws --profile default --region us-west-2 cloudformation deploy --stack-name DEMO-appmesh-colorapp --capabilities CAPABILITY_IAM --template-file /home/ec2-user/projects/aws/aws-app-mesh-examples/examples/apps/colorapp/servicemesh/appmesh-colorapp.yaml --parameter-overrides EnvironmentName=DEMO ServicesDomain=demo.local AppMeshMeshName=appmesh-mesh
...
Waiting for changeset to be created..
Waiting for stack create/update to complete
Successfully created/updated stack - DEMO-appmesh-colorapp

Now, when you curl the app, you will see responses like the following:

$ curl $colorapp/color
{"color":"black", "stats": {"black":0.19,"blue":0.28,"red":0.27,"white":0.26}}
...
# repeated calls will increase the stats for black since it's the only color response now
{"color":"black", "stats": {"black":0.21,"blue":0.28,"red":0.26,"white":0.25}}

The following query will clear the stats history:

$ curl $colorapp/color/clear
cleared
# now requery
$ curl $colorapp/color
{"color":"black", "stats": {"black":1}}

Since there are no other colors for the histogram, that’s all you will see no matter how many times you repeat the query.

Simulate A/B tests with a 50/50 split between red and blue:

Edit examples/apps/colorapp/servicemesh/appmesh-colorapp.yaml

WeightedTargets:
  - VirtualNode: colorteller-red-vn
    Weight: 1
  - VirtualNode: colorteller-blue-vn
    Weight: 1

Any integer proportion will work for the weights (as long as the sum doesn’t exceed 100), so you could have used 1 or 5 or 50 for each to reflect the 1:1 ratio that distributes traffic equally between the two colortellers. App Mesh will use the ratio to compute the actual percentage of traffic to distribute along each route.

You can see this in the App Mesh console when you inspect the route:

Figure 5. Route weighted targets.
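
If you prefer the CLI to the console, you can also inspect the route definition with describe-route; a quick sketch (the mesh, router, and route names below are the defaults used in this walkthrough):

# Sketch: inspect the current weighted targets for the colorteller route
$ aws appmesh describe-route \
    --mesh-name appmesh-mesh \
    --virtual-router-name colorteller-vr \
    --route-name colorteller-route \
    --query "route.spec.httpRoute.action.weightedTargets"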

In a similar manner, you can perform canary tests or automate rolling updates based on healthchecks or other criteria using weighted targets to have fine-grained control over how you shape traffic for your application.

To prepare for the next section, go ahead and update the HttpRoute to send all traffic only to the blue colorteller.

examples/apps/colorapp/servicemesh/appmesh-colorapp.yaml

WeightedTargets:
  - VirtualNode: colorteller-blue-vn
    Weight: 1

Then deploy the update and then clear the color history for fresh histograms:

$ ./examples/apps/colorapp/servicemesh/appmesh-colorapp.sh
...
$ curl $colorapp/color/clear
cleared

In the next section we’ll experiment with updating the route using the App Mesh console and analyze results visually with AWS X-Ray.

Monitor with AWS X-Ray

AWS X-Ray helps us to monitor and analyze distributed microservice applications through request tracing, providing an end-to-end view of requests traveling through the application so we can identify the root cause of errors and performance issues. We’ll use X-Ray to provide a visual map of how App Mesh is distributing traffic and inspect traffic latency through our routes.

When you open the AWS X-Ray console, the view might appear busier than you expected due to traffic from automated health checks. We’ll create a filter to focus on the traffic we’re sending to the application frontend (color gateway) when we request a color on the /color route.

The Color App has already been instrumented for X-Ray support and has created a Segment called “Default” to provide X-Ray with request context as it flows through the gateway service. Click on the “Default” button (shown in the figure below) to create a group to filter the visual map:

Figure 6. Creating a group for the X-Ray service map.

Choose “Create group”, name the group “color”, and enter an expression that filters on requests to the /color route going through the colorgateway-vn node:

(service("appmesh-mesh/colorgateway-vn")) AND http.url ENDSWITH "/color"
Figure 7. Adding a group filter expression.

After creating the group, make sure to select it from the dropdown to apply it as the active filter. You should see something similar to the following:

Figure 8. Analyzing the X-Ray service map.

Here’s what the map reveals:

  1. Our color request first flows through an Envoy proxy for ingress to the gateway service.
  2. Envoy passes the request to the gateway service, which makes a request to a colorteller.
  3. The gateway service makes a request to a colorteller service to fetch a color. Egress traffic also flows through the Envoy proxy, which has been configured by App Mesh to route 100% of traffic for the colorteller to colorteller-blue.
  4. Traffic flows through another Envoy proxy for ingress to the colorteller-blue service.
  5. Envoy passes the request to the colorteller-blue service.

Click on the colorgateway-vn node to display Service details:

Figure 9. Tracing the colorgateway virtual node.

We can see an overview of latency and that 100% of the requests are “OK”.

Click on the “Traces” button:

This provides us with a detailed view of how traffic flowed for the request.

Figure 9. Analyzing a request trace

If we log into the console for AWS App Mesh and drill down into “Virtual routers” for our mesh, we’ll see that currently the HTTP route is configured to send 100% of traffic to the colorteller-blue virtual node.

Figure 10. Routes in the App Mesh console.

Click the “Edit” button to modify the route configuration:

Figure 11. Editing a route.

Click the “Add target” button, choose “colorteller-red-vn”, and set the weight to 1.

Figure 12. Adding another virtual node to a route.

After saving the updated route configuration, you should see:

Figure 13. The updated route for splitting traffic across two virtual nodes.

Now when you fetch a color, you should start to see “red” responses. Over time, the histogram (stats) field will show the distribution approaching 50% for each:

$ curl $colorapp/color
{"color":"red", "stats": {"blue":0.75,"red":0.25}}

And if you refresh the X-Ray Service map, you should see something like this:

Figure 14. The updated service map with split traffic.

AWS X-Ray is a valuable tool for providing insight into your application request traffic. See the AWS X-Ray docs to learn more about instrumenting your own microservice applications to analyze their performance and the effects of traffic shaping with App Mesh.

6. Review

The following is the condensed version of all the steps we performed to run the Color App.

Step #1
Export the following environment variables needed by our deployment scripts. You can use most of the example values below for your own demo, but you will need to modify the last three using your own EC2 key pair and ECR URLs for the color images (see [Deploy gateway and colorteller services]).

.env

export AWS_PROFILE=default
export AWS_DEFAULT_REGION=us-west-2
export ENVIRONMENT_NAME=DEMO
export MESH_NAME=appmesh-mesh
export SERVICES_DOMAIN=demo.local
export ENVOY_IMAGE=111345817488.dkr.ecr.us-west-2.amazonaws.com/aws-appmesh-envoy:v1.9.0.0-prod
export KEY_PAIR_NAME=tony_devbox2
export COLOR_GATEWAY_IMAGE=226767807331.dkr.ecr.us-west-2.amazonaws.com/gateway
export COLOR_TELLER_IMAGE=226767807331.dkr.ecr.us-west-2.amazonaws.com/colorteller:latest
# source environment variables into the current bash shell
$ source .env

Step #2
Run the following scripts in order to provision the resources we need for the application.

$ ./examples/infrastructure/vpc.sh
$ ./examples/infrastructure/appmesh-mesh.sh
$ ./examples/infrastructure/ecs-cluster.sh
$ ./examples/apps/colorapp/servicemesh/appmesh-colorapp.sh
$ ./examples/apps/colorapp/ecs/ecs-colorapp.sh

Step #3
After the application is deployed, fetch the Color Gateway endpoint:

$ colorapp=$(aws cloudformation describe-stacks --stack-name=$ENVIRONMENT_NAME-ecs-colorapp --query="Stacks[0].Outputs[?OutputKey=='ColorAppEndpoint'].OutputValue" --output=text); echo $colorapp
http://DEMO-Publi-M7WJ5RU13M0T-553915040.us-west-2.elb.amazonaws.com

Step #4
Query the Color App to fetch a color:

$ curl $colorapp/color
{"color":"red", "stats": {"red":1}}

7. Summary

In this walkthrough, we stepped through the process of deploying the Color App example with App Mesh. We saw how easy it was to update routes to distribute traffic between different versions of a backend service and to access logs and distributed traces for the app in the AWS Console.

One of the key takeaways is that our control of traffic routing is transparent to the application. The application code for the gateway service that was deployed as an ECS task used the DNS name associated with the virtual service for the colorteller configured in App Mesh (colorteller.demo.local).

App Mesh propagated the configuration updates throughout the mesh that ensured traffic from dependent services to their backends was routed according to the policies we specified using App Mesh configuration, not application configuration.

In this demo, our services ran only on ECS. In the next post in this series, we’ll update the demo and deploy some of the services across different compute environments, including EC2, and see how App Mesh lets us control and monitor our running application managed within the same mesh.

8. Resources


AWS App Mesh Walkthrough was originally published in A Cloud Guru on Medium, where people are continuing the conversation by highlighting and responding to this story.

Getting started with the Amazon Aurora Serverless Data API

Learn how modern application developers can use the new Data API to create and connect to an Aurora Serverless Database

In this article with screencast videos, we’ll go over the three ways to create an Aurora Serverless Database with Data API. We’ll also cover the four ways to connect to your Data API enabled Aurora Serverless Database.

Let’s quickly go over what Amazon Aurora offers, why a serverless database, and answer, what is the Data API and why should I care?

About Amazon Aurora

Amazon Aurora is a MySQL and PostgreSQL-compatible relational database built for the cloud that combines the performance and availability of traditional enterprise databases with the simplicity and cost-effectiveness of open source databases.

Why Amazon Aurora Serverless?
Amazon Aurora Serverless is an on-demand, auto-scaling configuration for Amazon Aurora (MySQL-compatible edition). The database will automatically start up, shut down, and scale capacity up or down based on your application’s needs. It’s a simple, cost-effective option for infrequent, intermittent, or unpredictable workloads.

What is Amazon Aurora Serverless Data API?
The Data API is a new managed API layer on top of an Aurora Serverless database, allowing you to connect directly with your MySQL or PostgreSQL database. It also allows you to execute SQL statements from any application over HTTP, without using a MySQL driver or plugin and without managing a connection pool or VPC.

Think of the Data API as a fully-managed API for interacting with your relational data. Because the Data API is connected to your serverless cluster, you also inherit the auto-scaling, availability, and backups of your database, along with pausing of the database while it’s not in use.

Three Ways to Create an Aurora Serverless Database w/ Data API:

My personal favorite for deployment is Option #2, CloudFormation, since it deploys predictable resources (no missed steps compared to manual creation) and you can add additional AWS services and permissions in the CloudFormation template.

Option #1: Amazon RDS Console — Manual
The RDS Console is a quick way to deploy your resources and be done in just a few minutes following these instructions:

Create a new Aurora Serverless Cluster:

  1. Launch the Amazon RDS Console.
  2. Select Amazon Aurora engine.
  3. Select MySQL 5.6-compatible (the only option for serverless) and select Next.
  4. Select Serverless for the capacity type and provide a cluster name and master credentials and select Next.
  5. Leave all defaults and choose Create database.
Screencast for creating an Aurora Serverless Cluster via the RDS Console

Enable the Data API (for an existing Aurora Serverless Cluster):
We are going to enable the Data API for the newly created Aurora serverless cluster manually via the RDS Management Console. Hopefully, this is temporary until we can enable the Data API via a CloudFormation template using the EnableHTTPEndpointAPI parameter once this puppy is out of BETA. In the meantime …

  1. Launch the RDS Management Console
  2. Select your cluster (even though it says database)
  3. Select the Modify button on the upper right
  4. Select “Data API” under the Network & Security section
  5. Select Continue button
  6. Select “Apply immediately” under Scheduling of modifications
  7. Select “Modify cluster” radio button
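
If you have a recent version of the AWS CLI, the same change can also be scripted; a minimal sketch (the cluster identifier is a placeholder, and the --enable-http-endpoint flag assumes a CLI version that includes it):

# Sketch: enable the Data API (HTTP endpoint) on an existing Aurora Serverless cluster
$ aws rds modify-db-cluster \
    --db-cluster-identifier my-serverless-cluster \
    --enable-http-endpoint \
    --apply-immediately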

Option #2: CloudFormation — Automated
CloudFormation is the best way to deploy a consistent environment in an automated way. This solution consists of creating a CloudFormation stack based on the CloudFormation template provided in this GitHub repo here.

The template has all the parameters defined for deploying an Aurora Serverless Database in just a few clicks. In addition to creating a serverless database, the template also creates an AWS Lambda function (written in Node.js 8.10) to access your Data API enabled database using the new AWS RDSDataService API. More on the Lambda function later when we connect to a Data API enabled database in Solution 3 below.

The CloudFormation template will provision the following resources:

  • Aurora Serverless (MySQL) Cluster
  • Aurora Serverless (MySQL) Database
  • AWS Lambda function for connecting to your database via Data API using the AWS RDSDataService API.
  • IAM policies for Lambda execution of the RDSDataService API, CloudWatch Logs, and read-only access to AWS Secrets Manager to get the database master credentials.

Get Started with CloudFormation
Follow the two (2) steps as outlined in this GitHub repo for deploying the resources in your AWS account.

The first step will deploy the resources via a CloudFormation Stack, and the second step walks through enabling the Data API for the created Aurora Serverless Database.

  • Step 1: Deploy CloudFormation Stack via GitHub repo here.
  • Step 2: Enable Data API following GitHub Step 2 instructions here.

Option #3: AWS CLI or SDK — Scripted
The AWS CLI or AWS SDK can be used to create your resources with just a few lines. Here’s a full AWS CLI statement for creating a new Aurora Serverless Cluster:
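
A minimal sketch of such a statement, assuming the MySQL 5.6-compatible engine (the cluster identifier, credentials, and scaling settings are placeholders to replace with your own):

# Sketch: create an Aurora Serverless (MySQL 5.6-compatible) cluster
$ aws rds create-db-cluster \
    --db-cluster-identifier my-serverless-cluster \
    --engine aurora \
    --engine-mode serverless \
    --master-username masteruser \
    --master-user-password mySecretPassword123 \
    --scaling-configuration MinCapacity=2,MaxCapacity=8,AutoPause=true,SecondsUntilAutoPause=300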

Once the Aurora Serverless Cluster has been created, follow Step 2: Enable Data API from the previous section and you now have an Aurora Serverless database with Data API enabled.

Four Ways to Connect to Your Database Using the Data API

Solution #1: Data Query (via RDS Console)

  1. Launch Amazon RDS Management Console.
  2. Select Query Editor.
  3. In the Connect to Database window, select your cluster, provide your master user, master password, and database name (optional).
  4. Select Connect to database.

Note: When you provide the cluster credentials for the first time, the service will automatically create an AWS Secret for you; on each subsequent access to this cluster via the Query Editor, the service will use this AWS Secret to pull in the master user credentials for the cluster.

Once in the Query Editor, select the Clear button and type:

use MarketPlace;
select * from Customers;

Select Run.

Here’s the result if you have a Customers table with data:

SQL statement in RDS Console — Query Editor

TIP: The Query Editor is a nice tool to have when you need to verify the contents of the table or just perform some quick SQL statements.

Solution #2: AWS CLI
Modify the provided AWS CLI script with your own cluster ARN, secrets ARN, database name, and the SQL select statement of your choice.
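
If your AWS CLI includes the rds-data commands, the call looks roughly like this (a sketch; the ARNs, database name, and query are placeholders):

# Sketch: run a SQL statement through the Data API
$ aws rds-data execute-statement \
    --resource-arn "arn:aws:rds:us-east-1:123456789012:cluster:my-serverless-cluster" \
    --secret-arn "arn:aws:secretsmanager:us-east-1:123456789012:secret:my-db-secret" \
    --database "MarketPlace" \
    --sql "select * from Customers"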

Solution #3: AWS Lambda function
Once you deploy an Aurora Serverless Data API enabled database, you can easily connect and execute any SQL statement against the database with a simple AWS Lambda function using the RDSDataService API.

This function does not need to be inside an Amazon VPC, and it doesn’t need any MySQL drivers or have to worry about connection pooling. Just make SQL statements as HTTP requests and Aurora Serverless takes care of the rest!

Get Started Using a Lambda function to connect via Data API:

You can use deploy Option #2 (CloudFormation) above, which provisions a database and a Lambda function and fills out the environment variables to get you started, OR… you can copy this code and deploy it to Lambda directly.

If you do this manually, you’ll need to fill in the Lambda environment variables to match your Aurora Serverless environment, such as the cluster ARN, DB name, and AWS Secrets ARN.

Here’s the entire code to make a SQL statement against your Aurora Serverless Data API enabled database. Again, make sure to provide the environment variables and then pass in something like:

{ "sqlStatement": "SELECT * FROM <table name>" }

The Lambda function will then make a connection to your database using the master credentials pulled from AWS Secrets and return a JSON response. It’s that easy!
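
For a quick test from the command line, you could invoke the function with that payload; a sketch (the function name is a placeholder, and the --cli-binary-format flag is only needed for AWS CLI v2):

# Sketch: invoke the Lambda with a sqlStatement payload and print the JSON response
$ aws lambda invoke \
    --function-name my-data-api-function \
    --cli-binary-format raw-in-base64-out \
    --payload '{"sqlStatement": "SELECT * FROM Customers"}' \
    response.json
$ cat response.json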

Note: As of April 25, 2019 the Data API (beta) is only available to US-EAST-1 region serverless databases. Also, the function uses the RDSDataService API that is not currently available (beta) in the default Node 8 engine for Lambda, so you’ll need to build and deploy the function to Lambda as a zip file.

Solution #4: AWS AppSync
When building a GraphQL API via AWS AppSync, you now have direct access to your serverless Data API enabled database as a datasource, and… AppSync will generate a GraphQL schema for you, based on the existing database table design!

For this solution, we’ll use the new add-graphql-datasource plugin for the AWS Amplify CLI that automatically takes your serverless database table(s) and creates/updates a GraphQL schema, generates the appropriate mutations, queries, and subscriptions, and sets your database as a GraphQL DataSource to an existing GraphQL API. Yes, please.

Before we use the plugin, we want to make sure our Aurora serverless database has at least one table. If a database and table don’t exist yet, let’s create them now using the RDS Query Editor in the RDS Console, and then we’ll switch over to the Amplify CLI + add-graphql-datasource plugin.

Here, we are going to use the RDS Query Editor to create a database and sample table to get us started. If you already have a database and table from previous steps, move onto the [Amplify CLI — Installing the CLI] section below.

  • Launch Amazon RDS Management Console
  • Select Query Editor
  • In the Connect to Database window, select your cluster, provide your master user, master password, and database name (optional).
  • Select Connect to database.

In the query editor window, run the following commands:

Create database ‘MarketPlace’ if you haven’t already.

CREATE DATABASE MarketPlace;

Create a new Customers table.

USE MarketPlace;
CREATE TABLE Customers (
  id int(11) NOT NULL PRIMARY KEY,
  name varchar(50) NOT NULL,
  phone varchar(50) NOT NULL,
  email varchar(50) NOT NULL
);

Now that we have a database and a Customers table, we can now use the add-graphql-datasource plugin to generate a GraphQL schema, add the database as a datasource, and add mutations, queries, and subscriptions based on the [Customers] table.

Here’s how the add-graphql-datasource plugin works:

Amplify CLI — Installing the CLI
If you haven’t installed the AWS Amplify CLI before, here’s a quick shortcut. If you have the AWS CLI installed, the Amplify CLI will utilize those credentials and therefore amplify configure is not necessary.

$ npm install -g @aws-amplify/cli
$ amplify configure

Amplify CLI — Init
Launch Mac Terminal in the root of your iOS project folder. Now, we’ll initialize our AWS backend project using the following Amplify command.

$ amplify init

You will be guided through the process of setting up the project.

Amplify CLI — Add API

$ amplify add api

Amplify CLI — Add GraphQL DataSource

$ amplify api add-graphql-datasource

Now, let’s create a new customer in the Customers table using the Queries tool in the AppSync Console.

AppSync Queries — in action

We can then query the Customers table from the Query Editor in the RDS Console to double check.

Amazon RDS Query Editor — Aurora Serverless Data API SQL Statement

Now that we have the schema, mutations, queries, subscriptions, and our serverless Aurora database as a datasource for our GraphQL API, we can start using a mobile or web client with the generated code to interact with this structured data!

Conclusion

Although the Data API and the RDSDataService API are currently in beta, there’s so much buzz and potential here for modern application developers.

The Data API allows any developer to take advantage of structured collections of data with more control over the database, less management, and no drivers or connection pools to maintain. And executing SQL statements as HTTP requests is a game changer.

I’ll update this article as this service feature progresses.


Getting started with the Amazon Aurora Serverless Data API was originally published in A Cloud Guru on Medium, where people are continuing the conversation by highlighting and responding to this story.

The Good and the Bad of Google Cloud Run

A general critique of Cloud Run relative to FaaS and managed API services — and how this is different from AWS Lambda

This week, at Google Cloud Next, GCP announced an interesting new service: Cloud Run. My thoughts about the new Cloud Run service are a bit more complicated than this Twitter thread, so I’ve expanded on them in this blog and welcome your comments.

In this article, I’ll compare Google’s Cloud Run with AWS Lambda and API Gateway because I am most familiar with those services and that provider. But my thoughts below are a general critique of Cloud Run relative to FaaS (including Google Cloud Functions) and managed API services in general, regardless of the provider.

What is Cloud Run?

Google’s Cloud Run allows you to hand over a container image with a web server inside, and specify some combination of memory/CPU resources and allowed concurrency. The logic inside your container must be stateless.

Cloud Run then takes care of creating an HTTP endpoint, receiving requests and routing them to containers, and making sure enough containers are running to handle the volume of requests. While your containers are handling requests, you are billed in 100 ms increments.
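
For context, a deployment looks roughly like this (a sketch; the service name, image, and limits are placeholders, and exact flags may vary by gcloud version):

# Sketch: deploy a stateless container to Cloud Run with memory and concurrency settings
$ gcloud run deploy my-service \
    --image gcr.io/my-project/my-container \
    --memory 512Mi \
    --concurrency 80 \
    --allow-unauthenticated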

How is this different than AWS Lambda?

This sounds a lot like Lambda. How is it different? You are handling multiple requests within a single container. At a fundamental level, Cloud Run is just serving as a very fancy load balancer.

All of the web logic is inside your container; auth*, validation, error handling, the lot of it. But instead of just measuring resource utilization or other metrics that are a proxy for load, Cloud Run understands requests and uses that directly as a measure of load to know how to scale and route.

Note: if you’re using GCP IAM and you’re running on the managed version of Cloud Run, the service can do the auth for you. There are no custom authorizers, nor do I expect there to be in the future, because why not just put it in your web server?

In Lambda, while you can set up API Gateway to do no validation or auth and pass everything through to your Lambda, you’d be missing out on a wealth of managed features that perform these functions for you.

What’s the Good?

So what’s good about Cloud Run?

  • It’s going to make it very, very simple for people who are running containerized, stateless web servers today to get serverless benefits.
  • Better scaling and fine-grained billing.
  • It’s also dead simple to test locally, because inside your container is a fully-featured web server doing all the work.

What’s the Bad?

So what’s bad about Cloud Run? Inside your container is a fully-featured web server doing all the work!

  • The point of serverless is to focus on business value, and the best way to do that is to use managed services for everything you can — ideally only resorting to custom code for your business logic.
  • If we try to compare Cloud Run to what’s possible inside a Lambda function execution environment, we’re missing the point.
  • The point is that the code you put inside Lambda, the code that you are liable for, can be smaller and more focused because so much of the logic can be moved into the services themselves.

The FaaS Model: Handling each request in isolation

API Gateway allows you to use custom authentication. That means that your code, in a Lambda that does nothing else, can reject the request.

  • You don’t have to even think about that request touching your downstream web handling code.
  • You don’t have to pay for invocation of the request-handler Lambda.
  • You don’t even pay for the API Gateway request.
  • You aren’t paying for evaluating auth on every request since that custom authentication response is cached by API Gateway.

API Gateway allows you to perform schema validation on incoming requests. If the request fails validation, your Lambda doesn’t get invoked. Your code doesn’t have to worry about malformed requests.

Note: The API Gateway model validation is sadly a little more complicated than I’ve described above. Expect a post from myself and Richard Boyd on this topic in the near future.

The FaaS model is that each request is handled in isolation. Some people complain about this. I’ve even heard someone claim that AWS is pushing Lambda because users’ inability to optimize resource usage across requests is lucrative for them — which is about the most outlandish conspiracy theory I’ve heard this side of flat-earthers.

But the slightly less efficient usage model comes with benefits: you never have to worry about cross-talk effects. In Lambda, I don’t have to think about whether one request might have an impact on another. Everything’s isolated. This makes it easy to reason about, and removes one more thing I need to think about in the development process.

Security is hard, and the ability to scope your code’s involvement with it as small as possible is a huge win. Beyond the security implications, it’s also fewer moving parts that are your responsibility.

Cloud Run is also not the same as Lambda’s custom runtimes. Beyond the fact that custom runtimes should be a last resort, they don’t require running a server. Instead, you only need an HTTP client, which makes it clearer that your code is not acting as a tiny web server.

Cloud Run is not FaaS

All this is to say that Cloud Run should not be seen as equivalent, or even analogous, to pure FaaS — Cloud Run fundamentally involves significantly more code ownership. Cloud Run is still a valid rung on the serverless ladder, but there are many more above this service.

And that gets to my biggest concern. Cloud Run, and GCP in general, are providing people with a system that is going to make them complacent with traditional architecture, and not push them to gain the immense benefits of shifting (however slowly) to service-full architecture that offloads as many aspects of an application as possible to fully managed services.

Google’s strategy is to push Kubernetes as the solution to cloud architecture. And for good reason: Kubernetes is really good at solving people’s pain points while staying within the familiar architecture paradigm. And Google is doing a great job creating a Kubernetes layer on top of every possible base infrastructure.

But Kubernetes keeps us running servers. It removes the infrastructure notion of server, but encourages us to keep running application servers, like the ones inside Cloud Run containers.

Google’s ability to put Kubernetes on-prem is going to satisfy developers, and this will potentially come at the cost of delaying organizational moves to the public cloud. The difference from an application development perspective will be less apparent and will hide the higher total cost of ownership for being on-prem.

While Cloud Run is going to enable better usage of existing web server infrastructure, it’s also going to provide a safety blanket for developers intimidated by the paradigm shift of FaaS and service-full architecture. This will further delay the shift to the more value-oriented approach to development.


The Good and the Bad of Google Cloud Run was originally published in A Cloud Guru on Medium, where people are continuing the conversation by highlighting and responding to this story.

How to deploy a custom domain with the Amplify Console

Add a custom domain to an Amplify Console deployment in just a couple of minutes — let’s take a look at how this works!

What is the Amplify Console?

The Amplify Console offers hosting for full stack serverless web apps with continuous Git-based deployment. You connect your GitHub repo, click deploy, and the application is deployed to a live URL.

Built-in atomic deployments eliminate maintenance windows by ensuring that the web app is only updated when the entire deployment has finished.

If you are launching a project with an Amplify backend, the console will also give you the option of deploying and maintaining the Amplify project.

Adding a custom domain

After you’ve deployed an application, the next step is deploying your app to a custom domain purchased through domain registrars such as GoDaddy or Google Domains.

When you initially deploy your web app with the Amplify Console, it is hosted at a location similar to this:

https://branch-name.d1m7bkiki6tdw1.amplifyapp.com

When you use a custom domain, users will be able to easily access your app that is hosted from a vanity URL, such as the following:

https://www.myawesomedomain.com

Let’s learn how to do this!

Launching the app in the Amplify Console

If you already have an application launched in the Amplify Console, you can skip this step and go directly to the next step — adding the custom domain.

There is also a ready-made Gatsby Blog to get started quickly. Just click here and then jump ahead to the next step, adding the custom domain.

To deploy an app already in your GitHub account, let’s launch a new application in the Amplify Console. The first step is to direct your browser to https://console.aws.amazon.com/amplify and click GET STARTED under the Deploy section.

Next, connect the Git repository you’d like to launch and select the branch, then click Next. Accept the default build settings, then click Save and deploy.

Now your application is launched and we can move on to setting it up in a custom domain.

Adding the custom domain

In the AWS dashboard, go to Route53 & click on Hosted Zones. Choose Create Hosted Zone. From there, enter your domain name & click Create.

ProTip: Be sure to enter your domain name as is, without www. E.g. myawesomedomain.com

Now, in the Route53 dashboard, you should be given 4 nameservers.
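
If you prefer the CLI to the console, you can also create the hosted zone and list its nameservers from the command line (a sketch; the domain, caller reference, and zone ID are placeholders):

# Sketch: create a hosted zone, then read the 4 assigned nameservers from its delegation set
$ aws route53 create-hosted-zone \
    --name myawesomedomain.com \
    --caller-reference "my-zone-$(date +%s)"
$ aws route53 get-hosted-zone --id <zone-id-from-previous-output>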

In your hosting account (GoDaddy, Google Domains, etc.), set these custom nameservers in your DNS settings for the domain you’re using.

These nameservers should look something like ns-1355.awsdns-41.org, ns-1625.awsdns-11.co.uk, etc…

Next, in the Amplify Console, click Domain Management in the left menu. Next, click the Add Domain button.

Here, the dropdown menu should show you the domain you have in Route53. Choose this domain & click Configure domain.

This should deploy the app to your domain (this will take between 5–20 minutes). The last thing is to set up redirects. Click on Rewrites & redirects.

Make sure the redirect for the domain looks like this (i.e. redirect the https://websitename to https://www.websitename):

That’s it! Once the DNS propagates, you should see your domain live at the URL that you have set up in the above steps.

My Name is Nader Dabit. I am a Developer Advocate at Amazon Web Services working with projects like AWS AppSync and AWS Amplify. I specialize in cross-platform & cloud-enabled application development.


How to deploy a custom domain with the Amplify Console was originally published in A Cloud Guru on Medium, where people are continuing the conversation by highlighting and responding to this story.