Migrating from App Engine Memcache to Cloud Memorystore (Module 13)

Posted by Wesley Chun (@wescpy), Developer Advocate, Google Cloud

Introduction and background

The previous Module 12 episode of the Serverless Migration Station video series demonstrated how to add App Engine Memcache usage to an existing app that had transitioned from the webapp2 framework to Flask. Today’s Module 13 episode continues that modernization by demonstrating how to migrate the app from Memcache to Cloud Memorystore. Moving from legacy APIs to standalone Cloud services makes apps more portable, eases the transition from Python 2 to 3, and makes it possible to shift to other Cloud compute platforms should that be desired or advantageous. Developers benefit from upgrading to modern language releases and gain added flexibility in application-hosting options.

While App Engine Memcache provides a basic, low-overhead, serverless caching service, Cloud Memorystore “takes it to the next level” as a standalone product. Rather than a proprietary caching engine, Cloud Memorystore gives users the option to select from a pair of open source engines, Memcached or Redis, each of which provides features unavailable from App Engine Memcache. Cloud Memorystore is typically more cost-efficient at scale, offers high availability, and provides automatic backups. On top of this, one Memorystore instance can be shared across many applications, and the service incorporates improvements to memory handling, configuration tuning, and more, gained from Google’s experience managing a huge fleet of Redis and Memcached instances.

While Memcached is more similar to Memcache in usage and features, Redis has a much richer set of data structures that enable powerful application functionality when utilized. Redis has also been recognized as the most loved database by developers in Stack Overflow’s annual developer survey, and it’s a great skill to pick up. For these reasons, we chose Redis as the caching engine for our sample app. However, if your app’s usage of App Engine Memcache is deeper or more complex, a migration to Cloud Memorystore for Memcached may be a better option as a closer analog to Memcache.

Migrating to Cloud Memorystore for Redis featured video


Performing the migration

The sample application registers individual web page “visits,” storing visitor information such as IP address and user agent. In the original app, the most recent visits are cached into Memcache for an hour and used for display if the same user continuously refreshes their browser during this period; caching is one way to counter this abuse. New visitors or cache expiration result in new visits being registered as well as the cache being updated with the most recent visits. Such functionality must be preserved when migrating to Cloud Memorystore for Redis.

Below is pseudocode representing the core part of the app that saves new visits and queries for the most recent visits. In the “before” version, you can see how the most recent visits are cached into Memcache. After completing the migration, the underlying caching infrastructure has been swapped out in favor of Memorystore (via language-specific Redis client libraries). For this migration, we chose Redis version 5.0, and we recommend the latest versions, 5.0 and 6.x at the time of this writing, as the newest releases feature additional performance benefits, fixes to improve availability, and so on. In the code snippets below, notice how the calls between both caching systems are nearly identical. The bolded lines represent the migration-affected code managing the cached data.
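To make that comparison concrete, here’s a minimal sketch of the before and after in Python, assuming the sample app’s existing fetch_visits() Datastore helper and REDIS_HOST/REDIS_PORT environment variables; unlike Memcache, Redis stores raw bytes, so the visits are pickled on the way in and out:

```python
import os
import pickle
import redis

# BEFORE (App Engine Memcache, bundled service):
#   from google.appengine.api import memcache
#   visits = memcache.get('visits')
#   if not visits:
#       visits = list(fetch_visits(10))
#       memcache.set('visits', visits, HOUR)

# AFTER (Cloud Memorystore via the standard Redis client library):
HOUR = 3600  # cache the most recent visits for an hour
REDIS = redis.Redis(host=os.environ.get('REDIS_HOST', 'localhost'),
                    port=int(os.environ.get('REDIS_PORT', 6379)))

def get_recent_visits():
    'return the most recent visits, preferring the cache'
    raw = REDIS.get('visits')        # returns bytes, or None on a miss
    if raw:
        return pickle.loads(raw)     # Redis stores bytes, so unpickle
    visits = list(fetch_visits(10))  # the sample app's Datastore query helper
    REDIS.set('visits', pickle.dumps(visits), ex=HOUR)
    return visits
```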

Switching from App Engine Memcache to Cloud Memorystore for Redis

Wrap-up

The migration covered begins with the Module 12 sample app (“START”). Migrating the caching system to Cloud Memorystore, along with other requisite updates, results in the Module 13 sample app (“FINISH”), plus an optional port to Python 3. To help prepare for your own migrations, practice this one by following the codelab by hand while following along in the video.

While the code migration demonstrated seems straightforward, the most critical change is that Cloud Memorystore requires dedicated server instances. For this reason, a Serverless VPC Access connector is also needed to connect your App Engine app to those Memorystore instances, adding yet more dedicated resources. Furthermore, neither Cloud Memorystore nor Serverless VPC Access is a free service, and neither has an “Always Free” tier quota. Before moving forward with this migration, check the pricing documentation for Cloud Memorystore for Redis and Serverless VPC Access to determine cost considerations before making a commitment.
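For illustration, here’s a hedged sketch of the app.yaml wiring involved; the connector path and Redis address are placeholders to substitute with your own values:

```yaml
# app.yaml (sketch): route outbound requests through a Serverless VPC
# Access connector so the app can reach its Memorystore instance.
runtime: python39

vpc_access_connector:
  # Placeholder path: substitute your project, region, and connector name.
  name: projects/PROJECT_ID/locations/REGION/connectors/CONNECTOR_NAME

env_variables:
  REDIS_HOST: '10.0.0.3'   # your Memorystore instance's private IP
  REDIS_PORT: '6379'
```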

One key development that may affect your decision: In Fall 2021, the App Engine team extended support of many of the legacy bundled services like Memcache to next-generation runtimes, meaning you are no longer required to migrate to Cloud Memorystore when porting your app to Python 3. You can continue using Memcache even when upgrading to 3.x so long as you retrofit your code to access bundled services from next-generation runtimes.
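As a rough sketch of what that retrofit looks like, a Python 3 Flask app opts into the bundled services by setting app_engine_apis: true in app.yaml, adding the appengine-python-standard package to requirements.txt, and wrapping its WSGI app:

```python
# Sketch: keeping Memcache in a Python 3 App Engine app via the
# rereleased bundled services. Assumes app.yaml contains
# "app_engine_apis: true" and requirements.txt lists
# appengine-python-standard.
from flask import Flask
from google.appengine.api import memcache, wrap_wsgi_app

app = Flask(__name__)
app.wsgi_app = wrap_wsgi_app(app.wsgi_app)  # enable bundled service APIs

@app.route('/counter')
def counter():
    'increment and display a cached hit counter'
    hits = memcache.incr('hits', initial_value=0)
    return 'Hits: {}'.format(hits)
```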

A move to Cloud Memorystore and today’s migration techniques will be here if and when you decide this is the direction you want to take for your App Engine apps. All Serverless Migration Station content (codelabs, videos, source code [when available]) can be accessed at its open source repo. While our content initially focuses on Python users, we plan to cover other language runtimes, so stay tuned. For additional video content, check out our broader Serverless Expeditions series.

How to use App Engine Memcache in Flask apps (Module 12)

Posted by Wesley Chun

Background

In our ongoing Serverless Migration Station series aimed at helping developers modernize their serverless applications, one of the key objectives for Google App Engine developers is to upgrade to the latest language runtimes, such as from Python 2 to 3 or Java 8 to 17. Another objective is to help developers learn how to move away from App Engine legacy APIs (now called “bundled services”) to Cloud standalone equivalent services. Once this has been accomplished, apps are much more portable and flexible in where and how they can be hosted.

In today’s Module 12 video, we’re going to start our journey by implementing App Engine’s Memcache bundled service, setting us up for our next move to a more complete in-cloud caching service, Cloud Memorystore. Most apps rely on some database, and in many situations, they can benefit from a caching layer to reduce the number of queries and improve response latency. In the video, we add use of Memcache to a Python 2 app that has already migrated web frameworks from webapp2 to Flask, providing greater portability and execution options. More importantly, it paves the way for an eventual 3.x upgrade because the Python 3 App Engine runtime does not support webapp2. We’ll cover both the 3.x and Cloud Memorystore ports next in Module 13.

Got an older app needing an update? We can help with that.

Adding use of Memcache

The sample application registers individual web page “visits,” storing visitor information such as the IP address and user agent. In the original app, these values are stored immediately, and then the most recent visits are queried to display in the browser. If the same user continuously refreshes their browser, each refresh constitutes a new visit. To discourage this type of abuse, we cache the same user’s visit for an hour, returning the same cached list of most recent visits unless a new visitor arrives or an hour has elapsed since their initial visit.

Below is pseudocode representing the core part of the app that saves new visits and queries for the most recent visits. In the “before” version, you can see how each visit is registered. After the update, the app attempts to fetch these visits from the cache. If cached results are available and “fresh” (within the hour), they’re used immediately, but if the cache is empty or a new visitor arrives, the current visit is stored as before, and this latest collection of visits is cached for an hour. The bolded lines represent the new code that manages the cached data.
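Here’s a minimal sketch of that logic in the Flask handler, assuming the sample app’s existing store_visit() and fetch_visits() Datastore helpers:

```python
from flask import Flask, render_template, request
from google.appengine.api import memcache

app = Flask(__name__)
HOUR = 3600  # cache lifetime in seconds

@app.route('/')
def root():
    'main application (GET) handler'
    visitor = '{}: {}'.format(request.remote_addr, request.user_agent)
    visits = memcache.get('visits')  # use cached visits when fresh
    if not visits or visits[0].visitor != visitor:
        # cache miss or new visitor: register the visit, requery, recache
        store_visit(request.remote_addr, request.user_agent)
        visits = list(fetch_visits(10))
        memcache.set('visits', visits, HOUR)
    return render_template('index.html', visits=visits)
```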

Adding App Engine Memcache usage to sample app

Wrap-up

Today’s “migration” began with the Module 1 sample app. We added a Memcache-based caching layer and arrived at the finish line with the Module 12 sample app. To practice this on your own, follow the codelab, doing it by hand while following along with the video. The Module 12 app will then be ready to upgrade to Cloud Memorystore should you choose to do so.

In Fall 2021, the App Engine team extended support of many of the bundled services to next-generation runtimes, meaning you are no longer required to migrate to Cloud Memorystore when porting your app to Python 3. You can continue using Memcache in your Python 3 app so long as you retrofit the code to access bundled services from next-generation runtimes.

If you do want to move to Cloud Memorystore, stay tuned for the Module 13 video or try its codelab to get a sneak peek. All Serverless Migration Station content (codelabs, videos, source code [when available]) can be accessed at its open source repo. While our content initially focuses on Python users, we hope to one day cover other language runtimes, so stay tuned. For additional video content, check out our broader Serverless Expeditions series.

How can App Engine users take advantage of Cloud Functions?

Posted by Wesley Chun (@wescpy), Developer Advocate, Google Cloud

Introduction

Recently, we discussed containerizing App Engine apps for Cloud Run, with or without Docker. But what about Cloud Functions… can App Engine users take advantage of that platform somehow? Back in the day, App Engine was always the right decision, because it was the only option. With Cloud Functions and Cloud Run joining in the serverless product suite, that’s no longer the case.

Back when App Engine was the only choice, it was selected to host small, single-function apps. Yes, when it was the only option. Other developers have created huge monolithic apps for App Engine as well… because it was also the only option. Fast forward to today where code follows more service-oriented or event-driven architectures. Small apps can be moved to Cloud Functions to simplify the code and deployments while large apps could be split into smaller components, each running on Cloud Functions.

Refactoring App Engine apps for Cloud Functions

Small, single-function apps can be seen as a microservice, an API endpoint “that does something,” or a utility likely called as the result of some event in a larger multi-tiered application, say to update a database row or send a customer an email message. App Engine apps require some kind of web framework and routing mechanism, while Cloud Functions equivalents are freed from much of those requirements. Refactoring these types of App Engine apps for Cloud Functions will likely require less overhead, ease maintenance, and allow common components to be shared across applications.

Large, monolithic applications are often made up of multiple pieces of functionality bundled together in one big package, such as requisitioning a new piece of equipment, opening a customer order, authenticating users, processing payments, performing administrative tasks, and so on. By breaking this monolith up into multiple microservices implemented as individual functions, each component can then be reused in other apps, maintenance is eased because software bugs can be traced closer to their root origins, and developers won’t step on each other’s toes.

Migration to Cloud Functions

In this latest episode of Serverless Migration Station, a Serverless Expeditions mini-series focused on modernizing serverless apps, we take a closer look at this product crossover, covering how to migrate App Engine code to Cloud Functions. There are several steps you need to take to prepare your code for Cloud Functions:

  • Divest from legacy App Engine “bundled services,” e.g., Datastore, Taskqueue, Memcache, Blobstore, etc.
  • Cloud Functions supports modern runtimes; upgrade to Python 3, Java 11, or PHP 7
  • If your app is a monolith, break it up into multiple independent functions. (You can also keep a monolith together and containerize it for Cloud Run as an alternative.)
  • Make appropriate application updates to support Cloud Functions

The first three bullets are outside the scope of this video and its codelab, so we’ll focus on the last one. The changes needed for your app include the following:

1. Remove unneeded and/or unsupported configuration
2. Remove use of the web framework and supporting routing code
3. For each of your functions, assign an appropriate name and accept the request object it will receive when it is called.

Regarding the last point, note that you can have multiple “endpoints” coming into a single function which processes the request path, calling other functions to handle those routes. If you have many functions in your app, separate functions for every endpoint become unwieldy; if large enough, your app may be more suited for Cloud Run. The sample app in this video and corresponding code sample only has one function, so having a single endpoint for that function works perfectly fine here.
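For example, here’s a small sketch of that dispatch pattern, with illustrative handler names:

```python
# Sketch: one deployed Cloud Function serving several "endpoints" by
# dispatching on the request path (handler names are illustrative).
def home(request):
    return 'Welcome'

def recent_visits(request):
    return 'Most recent visits'

def router(request):
    'single HTTP entry point routing to the right handler'
    if request.path == '/':
        return home(request)
    if request.path.startswith('/visits'):
        return recent_visits(request)
    return ('Not found', 404)
```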

This migration series focuses on our earliest users, starting with Python 2. Regarding the first point, the app.yaml file is deleted. Next, almost all Flask resources are removed except for the template renderer (the app still needs to output the same HTML as the original App Engine app). All app routes are removed, and there’s no instantiation of the Flask app object. Finally, for the last step, the main function is renamed to the more descriptive visitme() and given a request object parameter.
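Put together, the change looks roughly like this sketch, where store_visit() and fetch_visits() are the sample app’s existing Datastore helpers:

```python
# BEFORE (App Engine + Flask):
#   app = Flask(__name__)
#
#   @app.route('/')
#   def root():
#       ...
#       return render_template('index.html', visits=visits)

# AFTER (Cloud Functions): no app object or routes; the platform calls
# the named function directly with the incoming request.
from flask import render_template

def visitme(request):
    'register this visit and render the most recent visits'
    store_visit(request.remote_addr, request.user_agent)
    visits = fetch_visits(10)
    return render_template('index.html', visits=visits)
```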

This “migration module” starts with the (Python 3 version of the) Module 2 sample app, applies the steps above, and arrives at the migrated Module 11 app. Implementing those required changes is illustrated by this code “diff”:

Migration of sample app to Cloud Functions

Next steps

If you’re interested in trying this migration on your own, feel free to try the corresponding codelab, which leads you step-by-step through this exercise, and use the video for additional guidance.

All migration modules, their videos (when published), codelab tutorials, START and FINISH code, etc., can be found in the migration repo. We hope to also one day cover other legacy runtimes like Java 8 as well as content for the next-generation Cloud Functions service, so stay tuned. If you’re curious whether it’s possible to write apps that can run on App Engine, Cloud Functions, or Cloud Run with no code changes at all, the answer is yes. Hope this content is useful for your consideration when modernizing your own serverless applications!

How to get started in cloud computing

Posted by Google Cloud training & certifications team

Validated cloud skills are in demand. With Google Cloud certifications, employers know that certified individuals have proven knowledge of various professional roles within the cloud industry. Google Cloud certifications have also been recognized as some of the highest-paying IT certifications for the past several years. This year, the Google Cloud Certified Professional Data Engineer topped the list with an average salary of $171,749, while the Google Cloud Certified Professional Cloud Architect came in second place, with an average salary of $169,029.

You may be wondering what sort of background you need to take advantage of these opportunities: What sort of classes should you take? How exactly do you get started in the cloud without experience? Here are some tips to start learning about Google Cloud and build your cloud computing skills.

Get hands-on experience with cloud computing

Google Cloud training offers a wide range of learning paths featuring comprehensive courses and hands-on labs, so you get to practice with the real Google Cloud console. For instance, if you wanted to take classes to prepare for the Professional Data Engineer certification mentioned above, there is a complete learning path featuring four courses and 31 hands-on labs to help familiarize you with relevant topics like BigQuery, machine learning, IoT, TensorFlow, and more.

There are nine learning paths providing you with a launch pad to all major pillars of cloud computing, from networking and cloud security to database management and hybrid cloud infrastructure. Each broader learning path contains specific learning paths to help you train for particular job roles like Machine Learning Engineer. Visit the Google Cloud training page to find the right path for you.

Learn live from cloud experts

Google Cloud regularly hosts a half-day live training event called Cloud OnBoard which features hands-on learning led by experts. All sessions are also available to watch on-demand after the event.

If you’re a developer new to cloud computing, we recommend you start with Google Cloud Fundamentals, an entry-level course to learn about the basics of Google Cloud. Experts guide you through hands-on labs where you can practice using the Google Console, Google Cloud Shell, and more.

You’ll be introduced to core components of Google Cloud and given an overview of how its tools impact the entire cloud computing landscape. The curriculum covers Compute Engine and how to create VM instances from scratch and from existing templates, how to connect them together, and ends with projects that can talk to each other safely and securely. You will also learn about the different storage and database options available on Google Cloud.

Other Cloud OnBoard event topics include cloud architecture, Kubernetes, data analytics, and cloud application development.

Explore Google Cloud infrastructure

Cloud infrastructure is the backbone of the internet. Understanding cloud infrastructure is a good starting point to start digging deeper into cloud concepts because it will give you a taste of the various aspects of cloud computing to figure out what you like best, whether it’s networking, security, or application development.

Build your foundational Google Cloud knowledge with our on-demand infrastructure training in the cloud infrastructure learning path. This learning path will provide you with practical experience through expert-guided labs which dive into Cloud Storage and other key application services like Google Cloud’s operations suite and Cloud Functions.

Show off your skills

Once you have a strong grasp on Google Cloud basics, you can start earning skill badges to demonstrate your experience.

Skill badges are digital credentials that recognize your ability to solve real-world problems with your cloud knowledge. You can share them on your resume or social profile so your professional network sees your technical skills. This can be useful for recruiters or employers as you transition to cloud computing work. Skill badges also enable you to get in-depth, hands-on experience with different Google Cloud offerings on the way to earning the credential.

You can also use them to start preparing for Google Cloud certifications, which are more intensive and show employers that you are a cloud expert. Most Google Cloud certifications recommend anywhere from six months to several years of industry experience, depending on the material.

Ready to get started in the cloud? Visit the Google Cloud training page to see all your options, from in-person classes and online courses to special events and more.

An easier way to move your App Engine apps to Cloud Run

Posted by Wesley Chun (@wescpy), Developer Advocate, Google Cloud

An easier yet still optional migration

In the previous episode of the Serverless Migration Station video series, developers learned how to containerize their App Engine code for Cloud Run using Docker. While Docker has gained popularity over the past decade, not everyone has containers integrated into their daily development workflow; some prefer “containerless” solutions while still recognizing that containers can be beneficial. Well, today’s video is just for you, showing how you can still get your apps onto Cloud Run even if you don’t have much experience with Docker, containers, or Dockerfiles.

App Engine isn’t going away, as Google has expressed long-term support for legacy runtimes on the platform, so those who prefer source-based deployments can stay where they are; this is an optional migration. Moving to Cloud Run is for those who want to explicitly move to containerization.

Migrating to Cloud Run with Cloud Buildpacks video

So how can apps be containerized without Docker? The answer is buildpacks, an open-source technology that makes it fast and easy for you to create secure, production-ready container images from source code, without a Dockerfile. Google Cloud Buildpacks adheres to the buildpacks open specification and allows users to create images that run on all GCP container platforms: Cloud Run (fully-managed), Anthos, and Google Kubernetes Engine (GKE). If you want to containerize your apps while staying focused on building your solutions and not how to create or maintain Dockerfiles, Cloud Buildpacks is for you.

In the last video, we showed developers how to containerize a Python 2 Cloud NDB app as well as a Python 3 Cloud Datastore app. We targeted those specific implementations because Python 2 users are more likely to be using App Engine’s ndb or Cloud NDB to connect with their app’s Datastore while Python 3 developers are most likely using Cloud Datastore. Cloud Buildpacks do not support Python 2, so today we’re targeting a slightly different audience: Python 2 developers who have migrated from App Engine ndb to Cloud NDB and who have ported their apps to modern Python 3 but now want to containerize them for Cloud Run.

Developers familiar with App Engine know that a default HTTP server is provided and started automatically; however, if special launch instructions are needed, users can add an entrypoint directive in their app.yaml files, as illustrated below. When those App Engine apps are containerized for Cloud Run, developers must bundle their own server and provide startup instructions, the purpose of the ENTRYPOINT directive in the Dockerfile, also shown below.

Starting your web server with App Engine (app.yaml) and Cloud Run with Docker (Dockerfile) or Buildpacks (Procfile)

In this migration, there is no Dockerfile. While Cloud Buildpacks does the heavy-lifting, determining how to package your app into a container, it still needs to be told how to start your service. This is exactly what a Procfile is for, represented by the last file in the image above. As specified, your web server will be launched in the same way as in app.yaml and the Dockerfile above; these config files are deliberately juxtaposed to expose their similarities.
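Side by side, the three startup configurations look something like this sketch, assuming a Flask app object in main.py served by gunicorn:

```
# app.yaml (App Engine): optional custom startup command
entrypoint: gunicorn -b :$PORT main:app

# Dockerfile (Cloud Run via Docker): required startup instruction
ENTRYPOINT gunicorn -b :$PORT main:app

# Procfile (Cloud Run via Buildpacks): tells the buildpack how to start
web: gunicorn -b :$PORT main:app
```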

Other than this swapping of configuration files and the expected lack of a .dockerignore file, the Python 3 Cloud NDB app containerized for Cloud Run is nearly identical to the Python 3 Cloud NDB App Engine app we started with. Cloud Run’s build-and-deploy command (gcloud run deploy) will use a Dockerfile if present but otherwise selects Cloud Buildpacks to build and deploy the container image. The user experience is the same, only without the time and challenges required to maintain and debug a Dockerfile.

Get started now

If you’re considering containerizing your App Engine apps without having to know much about containers or Docker, we recommend you try this migration on a sample app like ours before attempting it with your own. A corresponding codelab leading you step-by-step through this exercise is provided in addition to the video, which you can use for guidance.

All migration modules, their videos (when available), codelab tutorials, and source code, can be found in the migration repo. While our content initially focuses on Python users, we hope to one day also cover other legacy runtimes so stay tuned. Containerization may seem foreboding, but the goal is for Cloud Buildpacks and migration resources like this to aid you in your quest to modernize your serverless apps!

Containerizing Google App Engine apps for Cloud Run

Posted by Wesley Chun (@wescpy), Developer Advocate, Google Cloud

An optional migration

Serverless Migration Station is a video mini-series from Serverless Expeditions focused on helping developers modernize their applications running on a serverless compute platform from Google Cloud. Previous episodes demonstrated how to migrate away from the older, legacy App Engine (standard environment) services to newer Google Cloud standalone equivalents like Cloud Datastore. Today’s product crossover episode differs slightly from that by migrating away from App Engine altogether, containerizing those apps for Cloud Run.

There’s little question the industry has been moving towards containerization as an application deployment mechanism over the past decade. However, Docker and use of containers weren’t available to early App Engine developers until its flexible environment became available years later. Fast forward to today where developers have many more options to choose from, from an increasingly open Google Cloud. Google has expressed long-term support for App Engine, and users do not need to containerize their apps, so this is an optional migration. It is primarily for those who have decided to add containerization to their application deployment strategy and want to explicitly migrate to Cloud Run.

If you’re thinking about app containerization, the video covers some of the key reasons why you would consider it: you’re not subject to traditional serverless restrictions like development language or use of binaries (flexibility); if your code, dependencies, and container build & deploy steps haven’t changed, you can recreate the same image with confidence (reproducibility); your application can be deployed elsewhere or rolled back to a previous working image if necessary (reusability); and you have plenty more options on where to host your app (portability).

Migration and containerization

Legacy App Engine services are available through a set of proprietary, bundled APIs. As you can surmise, those services are not available on Cloud Run. So if you want to containerize your app for Cloud Run, it must be “ready to go,” meaning it has migrated to either Google Cloud standalone equivalents or other third-party alternatives. For example, in a recent episode, we demonstrated how to migrate from App Engine ndb to Cloud NDB for Datastore access.

While we’ve recently begun to produce videos for such migrations, developers can already access code samples and codelab tutorials leading them through a variety of migrations. In today’s video, we have both Python 2 and 3 sample apps that have divested from legacy services and are thus ready to containerize for Cloud Run. Python 2 App Engine apps accessing Datastore are most likely to be using Cloud NDB, whereas Python 3 users are most likely using Cloud Datastore, so these are the starting points for this migration.

Because we’re “only” switching execution platforms, there are no changes at all to the application code itself. This entire migration is completely based on changing the apps’ configurations from App Engine to Cloud Run. In particular, App Engine artifacts such as app.yaml, appengine_config.py, and the lib folder are not used in Cloud Run and will be removed. A Dockerfile will be implemented to build your container. Apps with more complex configurations in their app.yaml files will likely need an equivalent service.yaml file for Cloud Run — if so, you’ll find this app.yaml to service.yaml conversion tool handy. Following best practices means there’ll also be a .dockerignore file.

App Engine and Cloud Functions are source-based, where Google Cloud automatically provides a default HTTP server like gunicorn. Cloud Run is a bit more “DIY” because users have to provide a container image, meaning bundling your own server. In this case, we’ll pick gunicorn explicitly, adding it to the top of the existing requirements.txt required packages file(s), as you can see in the screenshot below. Also illustrated is the Dockerfile where gunicorn is started to serve your app as the final step. The only differences for the Python 2 equivalent Dockerfile are: a) it requires the Cloud NDB package (google-cloud-ndb) instead of Cloud Datastore, and b) it starts with a Python 2 base image.
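Here’s a minimal sketch of the Python 3 Dockerfile just described; the exact base image and package list may differ from the screenshot:

```dockerfile
# Dockerfile (sketch): containerize the Python 3 Cloud Datastore app.
FROM python:3-slim
WORKDIR /app
COPY . .
# requirements.txt lists gunicorn first, plus the app's own packages,
# e.g., flask and google-cloud-datastore.
RUN pip install -r requirements.txt
# Final step: start gunicorn to serve the app on Cloud Run's port.
ENTRYPOINT gunicorn -b :$PORT main:app
```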

The Python 3 requirements.txt and Dockerfile

Next steps

To walk developers through migrations, we always “START” with a working app, then make the necessary updates that culminate in a working “FINISH” app. For this migration, the Python 2 sample app STARTs with the Module 2a code and FINISHes with the Module 4a code. Similarly, the Python 3 app STARTs with the Module 3b code and FINISHes with the Module 4b code. This way, if something goes wrong during your migration, you can always roll back to START, or compare your solution with our FINISH. If you are considering this migration for your own applications, we recommend trying it on a sample app like ours first. A corresponding codelab leading you step-by-step through this exercise is provided in addition to the video, which you can use for guidance.

All migration modules, their videos (when published), codelab tutorials, START and FINISH code, etc., can be found in the migration repo. We hope to also one day cover other legacy runtimes like Java 8 so stay tuned. We’ll continue with our journey from App Engine to Cloud Run ahead in Module 5 but will do so without explicit knowledge of containers, Docker, or Dockerfiles. Modernizing your development workflow to using containers and best practices like crafting a CI/CD pipeline isn’t always straightforward; we hope content like this helps you progress in that direction!

13 Most Common Google Cloud Reference Architectures

Posted by Priyanka Vergadia, Developer Advocate

Google Cloud is a cloud computing platform that can be used to build and deploy applications. It allows you to take advantage of the flexibility of development while scaling the infrastructure as needed.

I’m often asked by developers to provide a list of Google Cloud architectures that help to get started on the cloud journey. Last month, I decided to start a mini-series on Twitter called “#13DaysOfGCP” where I shared the most common use cases on Google Cloud. I have compiled the list of all 13 architectures in this post. Some of the topics covered are hybrid cloud, mobile app backends, microservices, serverless, CI/CD, and more. If you weren’t able to catch the series, or missed a few days, here’s the summary!

Series kickoff: #13DaysOfGCP

#1: How to set up hybrid architecture in Google Cloud and on-premises

#2: How to mask sensitive data in chatbots using the Data Loss Prevention (DLP) API

#3: How to build mobile app backends on Google Cloud

#4: How to migrate an Oracle Database to Spanner

#5: How to set up hybrid architecture for cloud bursting

#6: How to build a data lake in Google Cloud

#7: How to host websites on Google Cloud

#8: How to set up a Continuous Integration and Continuous Delivery (CI/CD) pipeline on Google Cloud

#9: Machine Learning on Google Cloud

#10: How to build serverless microservices in Google Cloud

#11: Serverless image, video, or text processing in Google Cloud

#12: Internet of Things (IoT) on Google Cloud

#13: How to set up the BeyondCorp zero trust security model

Wrap up!

We hope you enjoy this list of the most common reference architectures. Please let us know your thoughts in the comments below!

AWS re:Invent 2019 Swag Review

The complete guide to swag from the biggest cloud conference in the world — it was the year of the reusable straw

Well, it’s December 2019 and you know what that means — it’s AWS re:Invent time again! While the announcements of new services are great, let’s get to the real fun — a swag review from the biggest cloud conference in the world.

This year I tried as hard as possible to give a COMPLETE review of re:Invent swag. I visited almost every single booth, save for a few that either had no swag or were only offering a sticker or some chocolate.

I also didn’t collect things I got a LOT of in previous years — so less socks, no pins and no t-shirts. I did however take photos of as many of them as possible, as there were still some amazing designs out there!

So, without further ado, here we go!

Amazon

We begin each year with the hoodie and the bottle. This year AWS have gone blue and it looks fantastic! It also comes with a reusable bottle, which is solid and was available in a bunch of colors. They also worked together with Cupanion, which will donate water every time your bar code is scanned.

AWS Certification Lounge
Next we head over to the certification lounge to get our certified swag! This year, a pullover, socks, pin and sticker. The pullover is very nice, with a thin sports-like fabric. Thanks cert team!

4k/8k Charity Run
For the 4th year running I will be taking part in the charity run; this year it promises to be a lovely 0°C/32°F.. brrrrrr ❄️ To make up for that, the t-shirt is really nice!

Throughout the week there was also a bunch of swag available depending on the different AWS booth departments you visited. Last year I scored a “Serverless for breakfast” teaspoon. This year the Serverless booth gave out a “Serverless for lunch” fork, and I look forward to my Serverless for butter knife in 2020 and Serverless for dessert spoon in 2021!

re:Play

The biggest tech party of the year always has some cool shirts and this year did NOT disappoint! They were really awesome neon t-shirts. One was a 3d text rotation thing and the other was space invaders.

A Cloud Guru
And then we move on to our own swag from A Cloud Guru — this years t-shirt. We didn’t choose the life, it chose us!

Ultra exclusive ACG swag: work for us and I’ll get you one of these light up shirts for re:Play. It’s extremely rare, but a good enough incentive to come work for us, don’t you think? 😉

Swag of the year

I have two winners this year for my favorite pieces of swag.

Anitian

The first goes to Anitian for their packets of Stumptown Coffee. I’d never seen bags of coffee being given away before, a truly unique and well received offering from the tech crowd!

Lucidchart

There’s really no explanation required here, Lucidchart were giving out hammocks amongst all their other cool swag. HAMMOCKS!

Me, ridiculously attempting to use a hammock between trees that obviously don’t support my weight

Honorable mention — Solace

I *loved* solace’s keychain Sriracha sauce! such a cool idea and also the first time I’ve seen it at a conference (the pop top was cool too).

Fresh Swag

Last year pins were all the rage, in 2017 the socks were the new thing (and are still quite popular in 2019), but this year the new and environmentally conscious swag was re-usable metal straws.

I think about 8 different companies gave them out this year, and they are a fantastic idea. All came with a pipe cleaner too, which is useful for keeping them clean.

Some were also collapsible as well, which is super convenient! Straws came courtesy of Rackspace, LG, Barracuda, GitLab, DXC, Acquia, Tech Mahindra and the AWS Partner Network (and probably a few more I missed).

The Booth Run

This is where I attempted to visit every booth to see what they were giving away. There’s no bad here, everyone put in a lot of effort and were really happy to show me what they had.

Thank you to ALL the booth staff and marketing staff and everyone involved in letting me take these photos and welcoming me with open arms, even our own competitors who were wondering why on earth I was at their booth. I just wanted to show the world the swag!

So, let’s get started in no particular order except the one I used to attempt to navigate the expo floor.

BMC gave out an Oculus Quest and a claw machine with lots of goodies

FireEye with the playing cards

Forcepoint had Rubiks cubes and lip balm

SuSe with the amazing plush geckos, socks, stickers and webcam covers!

Postman had shirts, fidget spinners and an awesome collection of stickers

CloudLink with the sweet sunglasses, tickets, pins and pens

Velocity had pens, straws, cups, travel bags and so many things:

Percona had candy, sunglasses, a really nice cup and.. I think they’re more stickers or tape? (please let me know if you know what that orange roll is :D)

Hitachi had some really nice clear bottles and koozies (I think)

Goldman Sachs Engineering had some sweet bluetooth speakers, pens, mints and travel mug.

Citrix had pens, mints, a nice tote bag and car charger

ThreatModeler had hats and phone card holders

Infinidat had a really nice shirt and pin

Split Software were giving out Nintendo Switch Lites, which I seriously wanted and didn’t win 😢 (the wall of them was very cool though).

At Sysdig, you didn’t pick your swag, Zoltar did it for you.

And they had awesome bottles, stickers, international power adapters and pop sockets.

Datica Health had some sleek notebooks, pens and webcam covers

Giant Swarm had some SWEET t-shirts and even a baby onesie!

RoundTower had a koozie, a shirt, pin and socks!

Timescale had sunglasses, lip balm, a tote bag and coasters

DXC had a shirt, straw, socks, stickers, pen and notebook, as well as a cable holder/taco thing.

Fastly had a really nice wooden wireless phone charger, stickers and a shirt.

neqto by JIG-SAW had clips, stickers, phone holders, pens and silly putty (I think?)

Signal Sciences with the live WAF love shirt, and the booth staff were excited to model it, so thank you for that!

VictorOps has been a favourite of mine since their 2016 custom t-shirt printing station, this year they were giving out the Millennium falcon, pins and their famous cat shirt!

Coalfire had a fire tv stick and amazon alexa you could win

VividCortex always deliver with their hats! unicorns, wolves, bears.. and.. I’m sure I had a seal or snow leopard in my 2016 review.

LaunchDarkly had an awesome looking bandanna and stickers

Quest had light up bouncy balls, cups, stickers, pens and stick-a-ribbons!

Rubrik never disappoint with their socks and stickers.

Cloudability not only had their shirts, they also gave away Nintendo Switches and Oculus Quests!

D2IQ had an AMAZING custom t-shirt stand. I always have full respect for the people running these things, it’s extremely hard work to pump out these shirts all day long and they did such a great job.

DataDog are a staple of reInvent, their shirts are a hot item, and even rarer are their socks, this was from their business booth.

Pluralsight had a game at their booth to see what you won, they had wall adapter power banks, 3-in-1 chargers, some funky socks and even an Oculus Go.

Rapid7 had a nice t-shirt and stickers

Lightstep had a drone, pens, lanyard and awesome shirt and stickers!

Delphix had a SWEET Star Wars theme, they had light sabers and the cutest luggage tags I’ve ever seen.

Cohesity with the greenest socks you’ll ever see! and a squishy truck! my son loves these things!

intermix.io had a pretty sweet shirt and sticker

and SenecaGlobal were giving out some mugs, pens, stickers and various echos

Fugue always have some lovely shirts, stickers and webcam covers and this year was no different!

opsani’s shirt and stickers were really colorful as well!

Sun Technologies had a bag for everyone walking past, which from what I saw was roughly 50,000 of them.

CenturyLink had a skill tester with lots of goodies inside

and the AWS JAM Lounge had some powerbanks, shirts, coins and stickers (as well as a memory game I was unable to get a photo of)

CapitalOne had one of the best designed shirts for the event in my opinion, and ran out fast. Also, some awesome decks of cards. Whoever was your designer this year did an outstanding job!

This guy I ran into in the food hall, only guy in the Venetian with more swag than me. Look at that suit. If anyone knows this gentleman’s name please let me know as I’d love to link him here 😉

Splunk always have their catch phrase shirts, pick what you want from the list! also, socks!

TrendMicro had some decks of cards and a chance to win Bluetooth Sunglasses!

Xerox had clips and a dancing robot

PrinterLogic had a fantastic shirt

8×8 had the CUTEST little cheetahs

LG had a push the button game with lots of prizes, including metal straws, echo shows and dots, switches and fitbits.

AVI Networks had a koozie, usb charger cables and a sweet ninja shirt.

Evolven had a great collection of coasters, I really should have taken one of each but my luggage space was basically non-existent at this point. Also pictured: me!

tugboat logic with the cutest stickers and tugboat bath toy

and extrahop with their light up bouncy balls and play-doh

ivanti had sunglasses, a yo-yo, dancing robot and koozie.

Blameless had some drones, Star Wars toys, The Unicorn Project, Nintendo Switch Lites, as well as stickers to give away.

The awesome guys I chatted to at Presidio couldn’t stop talking about their luggage tags and the chance to win a 3 piece luggage set (actually awesome, I own the smaller one).

ManageEngine and Device42 with the sweet socks!

komprise with ACTUAL DONUTS and a sticker and pen. But DONUTS. They looked so good… mmm…

Hammerspace had.. a hammer. and a hammer pen. and a t-shirt with a hammer on it. and a USB key with a hammer on it. They’re experts at hammering things, like picking awesome swag.

Igneous had the cable taco things too, and the Imperial Star Destroyer lego to be won

readme had stickers and usb-c converters and gumballs

Qumulo had a Millennium Falcon, webcam covers and an angry carrot dude

Flowmill with BINOCULARS! what an awesome piece of swag! and stickers, too.

Matillion, who a few years ago won my most useful swag prize for a single stick of chapstick, have stepped it up so far that not only could you build your own Lego person, they also donated to Girls Who Code for every badge scan. Simply awesome, guys and girls.

I made our founder Ryan Kroonenburg, can you see the resemblance?

Deloitte had a nice bottle

GitHub let you build your own OctoCat!

This PagerDuty mascot Pagey, made of squishy stuff so you can throw it at the wall when your phone keeps buzzing with alerts. We’ve all been there guys, still an awesome piece of swag. Stickers too!

Cloudtamer had pins, stickers, pens and a bottle opener keychain compass carabiner.

NS1 know that it’s always DNS (I completely agree). They also had some mugs and Switches.

Hypergrid had straws which for some reason didn’t make it into my other original post about straws (also pens).

SoftNAS were giving away light up cubes and had a chance to win some cool drones.

Harness had a slot machine with a few prizes, namely their duck mascot!

Threat Stack had the coolest light up AXE and pins, stickers and shirt.

sas had a fidget pen, usb extension cable, stickers, t-shirt and mouse! they also had some giveaways if your key opened the box.

redis had a very nice shirt and stickers and a daily scooter giveaway.

TEKsystems also had straws, stickers and a pin. They didn’t make my original straw post either because a friend wanted some straws, so they got these ones! #sharetheswag

Cloudinary with the CUTEST unicorns and stickers

xMatters with the fanny pack / bum bag (depending where in the world you’re from), which they advertised as the biggest bag you can bring in to re:Play. That was great, because I actually brought it to re:Play to carry my stuff. Thanks guys and girls! Oh also, a freakin Electric GoKart up for grabs.

Sentry had BATH BOMBS. This was a really beautiful piece of swag, in both uniqueness and presentation. Really nice work whoever came up with this one, I know quite a few of these went back to our offices to give out to the people who couldn’t attend re:Invent and they were very well received!

Symbee were the booth next to ours last year.. and this year they happened to be next to us again. I’m not sure what the odds of that happening were, but it’s pretty amazing. They’re a great bunch of guys and always have this really nice mug to give out!

GitLab.. how can I put this? They had a whole DINER going on. Coasters were records, they had pins and straws, cold drinks.. and at one point I even got an actual vinyl record from them. I’m going to have to go to my father’s house to listen to what’s actually on it (hard to see in the pic but it is grooved, not just blank).

Sungard had a nice bottle

Unisys had some flashing shoelaces!

and New Relic had A beanie, many colours of their “Deploy glambda” nail polish and stickers! They also had an awesome switch/zelda pack and rubiks cubes.

App Associates had.. so much stuff! pens, hats, bags, stickers, tattoos(!?)

Spotist with the sock chute

Qubole with shot glasses and hand sanitizer and.. I’m not sure what those square things are!

Scylla Cloud had these cute octopus dudes, shirts and egg drones!

ServiceNow had the socks and pins (and a pen i didn’t seem to get a photo of)

JFrog had such a cool shirt and frog squishy

Qualys had a huge tote, pins, coin and cards

Nutanix had a great portable anti slip mouse mat, charger cable, luggage tag, sticker and 5000mAh power bank. I really love the design of these!

Our pals at LinuxAcademy had their Pinehead squishies and stickers!

and DOMO had some cool stickers

I mentioned them earlier, but Rackspace also had some stickers in addition to their golden straw!

Liberty Mutual had a nice bluetooth headphone set and sticker!

and memSQL had some really pretty socks and lip balm

Software AG had a huge offering of shirts, socks, stickers and lip balm

Turbot had a skill testing machine where you could win.. actually I’m not sure. Please tag me if you know what these were!

and mongoDB had about a billion of these socks they were giving out all week, they look awesome!

Valtix with the sweet sport socks and t-shirt.

and Snowflake with the I ❤ data shirt and cute polar bear!

Acqueon had these pens with spinny heads and mini footballs.

VMWare had a huge slot machine with a few prizes, t-shirts, bottles, travel organizers, wireless chargers and lens covers.

logz.io had a huuuuge offering of mugs, bottle openers, notepads with a pen, tshirts, foldup backpacks and koozies.

and moogsoft had their squishy cow, nice stickers and pen

Cognizant had a lovely bottle and tote bag

druva had a great shirt, socks and giveaways

RedHat let you CUSTOMIZE YOUR RED HAT. They had a bunch of patches available and you got to pick two of them to be pressed onto your hat. Seriously awesome!

Densify had the BEST light saber at the show; not only does it light up in 3 different colours, it makes lightsaber noises as you swing it around. They also had stress balls, a blinking wrist band which could earn you more swag if you were found wearing one, a dancing robot and lip balm.

Jet Brains had an awesome collection of stickers

Logicworks had stickers and the torpedo throw toy

tibco had a hat, usb hub, charger cable, pen, pin, bose headphones and signed mini helmet prizes!! they looked so awesome!

zendesk had the BEST mugs I’ve ever seen at a conference, with the cork bottom to save your table from coffee rings or even heat damage, as well as the wooden wireless chargers.

telos had some pens and charger cables

Hewlett Packard Enterprise had a phone holder, webcam cover, wireless charger (i think?) and an instant photo printer!

arm were very security conscious, providing a webcam cover, mic blocker and USB data blocker.

EDB had bottles, socks, phone card holders, webcam covers and pens!

fortinet were in the sock game as well as a pen.

shi had two awesome sock designs, some stickers and m&m’s

Clubhouse had a FANTASTIC children’s sized t-shirt which my son is now proudly wearing, as well as some awesome stickers, pin and hand sanitizer.

Atlassian had this years version of their awesome socks. I think the first swag socks I ever got were from Atlassian, and I wear them to this day.

McAfee had an awesome tote bag, shirt, bouncy ball and pen

Capsul8 had an awesome trucker cap and Tux the penguin!

The king of socks, Sophos, had their collection on display for me. These aren’t all they give out, they usually have about 10–15 different pairs for any given re:Invent!

dremio had their cute narwhal shirt and plushy

SentinelOne would give you a Rubiks cube (and sticker) if you could solve it. My colleague Brock Tubre accomplished that in under a minute! (I need a lot more practice.)

wish.com had tote bags and a nice t-shirt

and chef had stickers, a pen and a Bagito, which is a reusable shopping bag.

Sisense spiced things up with their hot sauce and stickers

HANCOM group had some really cute keychain plushies, sticky note pads and some awesomely shiny stickers on offer

Kong had amazing socks, pins, pens and stickers

circleci had stickers and shirts

SailPoint had a pin and an extendable water bottle, which is very cool! I’d never seen that before.

taos had some pens, webcam covers and quicksnap koozies which were really cool.

zadara had a lot of things but by the time I got there they had just pens and organiser bags left over (sorry zadara! lots of booths to get through!)

Logic Monitor had their funky socks

Aporeto had the CUTEST octopus plush toy and shirt which this gentleman was only too happy to show off.

Cloudera had all important bags to carry all the other swag.

SearchBlox had a coaster/bottle opener combo with some stickers

Cloudzero had pins, stickers and a koozie

Informatica had some of the nicest socks at the show, Bombas. Each pair given out also donated a pair to someone in need. They also had some pins.

CloudAcademy had some hats, shirts, cable taco, webcam cover and stickers

CloudFlare had some sweet socks too (anyone counting the total amount of socks yet?)

Transposit had octopus stickers and COSMIK ICE CREAM. I’d never seen this before, seriously cool!

and Rockset had a cool t-shirt

Sumologic with their sumo dudes, always a favourite

StackRox certainly rocked the place with their purple light up bouncy balls, stickers and pens!

Nylas had some fantastic stickers, t-shirt and socks!

Cloudbees had a nice shirt and carry case

and Qualcomm had socks, cord organizers, straws and a phone holder.

Teradata had the only piece of swag I wanted but was unable to get, this awesome tiny bluetooth speaker (it was so cute!), as well as a cable organizer.

Now a word from Check Point, the only booth trying to do away with swag; instead, they let you choose between two charities they donate to on your behalf, Girls Who Code and the Boys and Girls Clubs of America. Despite this being a swag-promoting blog, I think it was a fantastic idea and fully support their mission!

Clumio had a whole range of things on their spin wheel, socks, phone battery packs and charging cables and webcam covers.

Instana had these cool squishy dudes

and Collibra had pens, mints and koozies

Fivetran had socks, shirt, pencil case and phone charging cables

boomi had really nice contigo bottles, stickers and pins

and talend had the socks, pen and webcam cover, and were also giving away a PS4 with PS VR. Sweet!

SignalFx had some bright pink socks, lip balm and some stickers and a pin

and dynatrace had some bright shirts too

tableau had loads of pins, stickers, a REALLY nice backpack and fortune cookies!

Stackery then had a tshirt, pins, stickers and glasses cleaning cloths

GitOps had a difficult rubiks cube (because the center square is actually directional, making it harder than normal), and some stickers

coinbase had some pins and stickers and free bitcoin (one of those may or may not be true).

Thundra had the sweet sunglasses, shirt and stickers

Wandisco had a shirt, a really nice beanie, stress ball, webcam cover and lip balm

O’Reilly had a huge collection of stickers, sunglasses and pens

Refinitiv had some stickers and a cool cube man! If you could fold him back up into a cube you got to keep him!

Prisma (by paloalto networks) had a shirt, webcam cover, pin and socks.

Databricks had heaps of stuff but by the time I got there it was a phone charger, pen and really nice notepad.

Attunity had a nice mug, pen and hand sanitizer

Gremlin had a gremlin!

CTO.ai had a bottle opener, square stress cube, tote bag and tshirt

sigma had a whole bunch of cool things, a hangover kit, t-shirts, stickers and bottle openers

Cloudwiry had hydration packs and a luggage scale, which is really useful for determining if you’ve picked up too much swag before heading home. They also had an amenity kit, pencils and drawing pads, Tide markers, whatever those black and white things are.. they had a lot of useful things that I didn’t have time to ask what they all were!

ChaosSearch had some pens, stickers and some relation to Corey Quinn!

Acquia had the straws, pens and stickers

and imperva had heat packs, sunglasses, cards, vegas survival kit, lint roller, pad and pen. I sadly missed the bacon station they had in the mornings!

synopsys had usb fans and pins

and Slalom had some women who build stickers and shirts

radware also had a wooden cloud dude!

and Wavefront by vmware had these cute little Volkswagen toys that my son absolutely ADORES.

nginx had some stickers and pins

veeam also had some international power adapters, really useful for those of us visiting other countries!

and the AWS Partner Network had hand sanitizer, notepads, straws and another cable taco!

and FINALLY, some AWS serverless heroes were wandering around with #wherespeter shirts. Did you get one? and did you find Peter? I did!

now.. believe it or not… I think that’s it. Every booth I could get to, I did. You’ve seen it all. Well, you think you’ve seen it all. One more thing was new this year, and I’m really impressed it was implemented AND used:

Super awesome on AWS’s part. Not everyone can take everything home, so being able to donate it instead of throwing things out is a great initiative from AWS.

OK, that’s about it for this year, please let me know what I missed (I realllly tried hard to get everything so if I did miss something I’ll be happy to add it!), I know there will be something awesome (like the DeepComposer) I didn’t have time to line up for. What did you get? what were your favorites? let me know in the comments below!

Thanks Cloud Gurus! and see you all next year.



A Cloud Guru at AWS re:Invent 2019

Where to find the gurus in Las Vegas!

Here’s where you can find A Cloud Guru at AWS re:Invent 2019!

We’re looking forward to meeting you, hearing your feedback, handing out some awesome swag, and sharing our latest content and features.

Monday, Dec 2

10:00 AM — 8:00 PM: Hackathon with Ryan Kroonenburg
To get re:Invent started, join the hackathon, with Ryan judging and winning teams getting a full-year membership to A Cloud Guru!

The Non-Profit Hackathon for Good provides a hands-on and team-oriented experience while supporting non-profit organizations. It is open to all skill levels. Be sure to attend the mixer on Sunday from 6–9pm at the Level Up inside the MGM Grand to build your team! More info here.

Non-Profit Hackathon for Good
10:00 AM — 8:00 PM
Venue: MGM Grand

Join the Non-Profit Hackathon for Good!

10:00 AM — Machine Learning with Kesha Williams
In this session, learn how to level-up your skills and career through the journey of Kesha Williams, an AWS Machine Learning Hero.

CMY201 — Java developer to machine-learning practitioner
10:00 AM — 11:00 AM
Venetian, Level 4, Delfino 4005

1:45 PM — Getting Started with Machine Learning
In this chalk talk with Kesha Williams, learn how to get started building, training, and deploying your first machine learning model.

AIM226 — How to successfully become a machine learning developer
1:45 PM — 2:45 PM
Venetian, Level 3, Murano 3201A

Tuesday, Dec 3

All Day — A Cloud Guru at Booth 727!
When the exhibition hall opens on Tuesday, head over to booth #727 to say hello to Ryan and the crew from A Cloud Guru — see you there!

Wednesday, Dec 4

All Day — A Cloud Guru at Booth 727!
After the keynote, A Cloud Guru will be heading back to Expo Hall in the Venetian. Stop by and say hello!

6:00 PM — AWS Certification Reception
Are you AWS Certified? Register for the AWS Certification Reception and celebrate alongside our A Cloud Guru instructors! Space is limited, so be sure to register early for this event. Hope to see you there!

AWS Certification Reception
6:00 PM — 8:00 PM
Brooklyn Bowl | The LINQ

“Hello Cloud Gurus!” — Ryan and Sam Kroonenburg, co-founders of A Cloud Guru

Thursday, Dec 5

10:30 AM — AWS DeepRacer with Scott Pletcher
Scott Pletcher will share how to host your own AWS DeepRacer event, covering everything from building a track to logistics, getting support from AWS, planning, leaderboards, and more.

How to Roll Your Own DeepRacer Event
10:30 AM — 11:00 AM
Venetian, Level 2, Hall C, Expo, Developer Lounge

Check out the Fast and Curious — our FREE DeepRacer Series!

1:00 PM — AWS Security with Faye Ellis
AWS has launched a security certification for specialists to demonstrate their skills, which are in high demand. Learn about the major areas of security and AWS services you’ll need to know to become a security specialist and obtain the certification.

DVC07 — Preparing for the AWS Certified Security Specialty exam
1:00 PM — 1:30 PM
Venetian, Level 2, Hall C, Expo, Developer Lounge

All Week — Info Sessions

A Cloud Guru will be available every day for info sessions to share our latest content and features for business memberships. Be sure to schedule an appointment today — sessions are limited!

A Cloud Guru on Social Media
Follow us on Twitter, Facebook, and LinkedIn for updates! Be sure to subscribe to A Cloud Guru’s AWS This Week — and stay tuned for Ryan’s video summary of all the major re:Invent announcements!

Keep being awesome cloud gurus!



The State of Serverless, Circa 10/2019


My observations from Serverlessconf NYC on the current state of serverless: the ecosystem, the friction, and the innovation

Back in the spring of 2016, A Cloud Guru hosted the first ever Serverless conference in a Brooklyn warehouse. In many ways, that inaugural conference was the birth of the Serverless ecosystem.

Serverlessconf was the first time that this growing community came together to talk about the architectures, benefits, and approaches powering the Serverless movement.

Last week A Cloud Guru once again brought top cloud vendors, startups, and thought leaders in the Serverless space to New York City to exchange ideas, trends, and practices in this rapidly growing space.

In addition to the “hallway track,” which was a great way to meet and (re)connect with talented and passionate technology experts, there were multiple tracks of content.

Collectively, these conferences are a great way to take the pulse of the community — what’s getting better, what’s still hard, and where the bleeding edge of innovation sits.

With apologies to the vendors and talks I didn’t manage to get to, here’s my take on the State of Serverless after spending nearly a week with many of its best and brightest.

Enterprise users have shown up — with their stories
Back in 2016, much of the content (and nearly every talk’s opening slides) at Serverlessconf was some flavor of “Here’s how we define Serverless.”

Content focused on how to get started, with lots of how-to talks. Notably absent back in 2016? Enterprise users talking about their experiences applying Serverless in real life, with the sole exception of Capital One.

While the latest Serverlessconf retains its technology and practice focus, it was fantastic to see companies like the Gemological Institute of America, Expedia, T-Mobile, Mutual of Enumclaw Insurance, and LEGO up on stage in 2019 talking about adopting and benefiting from Serverless architectures.

Growing ecosystem
The highly scientific metric of “square feet of floor space devoted to vendors” continues to grow year over year. But more importantly, those vendors have moved from early stage awareness and information gathering to offering products and services in the here and now.

System integrators and consulting firms specializing in Serverless practices are also showing up — more evidence of enterprise traction in the space.

Configuration & AWS CloudFormation are still creating friction
The buzz was around whether declarative or imperative “Infrastructure-as-Code” is the better approach, along with alternatives to CloudFormation and easier ways to construct and deploy Serverless architectures. Topics like these featured strongly in both actual content and hallway conversations in 2019 — just as they did in 2016.

Whatever your position on recent approaches like AWS’s CDK and the utility of declarative approaches like AWS SAM, it’s clear that CloudFormation and other vendor-provided options still aren’t nailing it.

Vendors like Stackery.io got a lot of foot traffic from attendees looking for easier ways to build and deploy Serverless apps, while talks from Brian LeRoux and Ben Kehoe explored both the difficulties of using CloudFormation today and potential solutions to them.

Google and Cloudflare are playing the role of category challengers
Google Cloud Run is taking an intriguing approach — offering customers a container-based specification with the scales-on-usage and pay-per-request model of AWS Lambda. It’s still too early to call GCR’s product-market fit, but it’s exciting to see Google step back and reimagine what a Serverless product can be.

Meanwhile, Cloudflare Workers exploit that company’s massive edge infrastructure investment to run chunks of computation that make Lambda functions look huge by comparison. It’s not necessarily a solution to general compute, but given expectations that the bulk of silicon will live on the edge, rather than in data centers, in the future, I’d keep my eye on this one.

Serverless innovation isn’t over
Johann Schleier-Smith talked about UC Berkeley’s position paper on Serverless and the growing attention that Serverless is getting from the research community.

Yours truly laid out a recipe for building the Serverless Supercomputer, starting with Serverless Networking that opens the door to building distributed algorithms serverlessly.

Chris Munns reviewed the pace of innovation for AWS Lambda since its launch in 2014 and hinted at more to come at next month’s AWS re:Invent in Las Vegas.

With their amusing name helping to grab attention, The Agile Monkeys presented a Serverless answer to Ruby on Rails with a high-level object model that compiles down to Lambda functions and other serverless componentry.

It’s still not easy enough
Serverless might sound like a technology stack, but it’s really a vision for software development. In contrast to the ever-growing complexity of servers and Kubernetes, attendees at a Serverless conference are looking for ways to do more with less — less infrastructure, less complexity, less overhead, and less waste.

But while a desire for simplicity and “getting the business of business” done unites the attendees at a conference like this, it’s still the case that too much non-essential complexity gets in the way.

Tools, IDEs, debuggers, security, config & deployment, CI/CD pipelines…a lot of energy from vendors to startups to consultants to enterprise adopters is flowing into getting Serverless projects across the finish line. It may be way easier than servers (and containers), but it’s clearly still not easy enough.

Conferences like this help, but more and better documentation, more sharing of best practices, and tools that can truly streamline the job of delivering business value on top of Serverless remain a work in progress…leaving a lot of untapped potential opportunity in the space still to explore!

Author disclosures: I presented at Serverless NYC ’19 for which I received a registration fee waiver. I’m a former employee of both AWS and Microsoft and currently an independent board member of Stackery.io. I received no compensation from any of the companies or organizations cited above for writing or distributing this article and the opinions provided are my own.



Crypto can’t scale because of consensus … yet Amazon DynamoDB does over 45 Million TPS


The metrics point to crypto still being a toy until it can achieve real world business scale demonstrated by Amazon DynamoDB

14 transactions per second. No matter how passionate you may be about the aspirations and future of crypto, it’s the metric that points out that when it comes to actual utility, crypto is still mostly a toy.

After all, pretty much any real world problem, including payments, e-commerce, remote telemetry, business process workflows, supply chain and transport logistics, and others require many, many times this bandwidth to handle their current business data needs — let alone future ones.

Unfortunately the crypto world’s current solutions to this problem tend to either blunt the advantages of decentralization (hello, sidechains!) or look like clumsy bolt-ons that don’t close the necessary gaps.

Real World Business Scale

Just how big is this gap, and what would success look like for crypto scalability? We can see an actual example of both real-world transaction scale and what it would take to enable migrating actual business processes to a new database technology by taking a look at Amazon’s 2019 Prime Day stats.

The AWS web site breaks down Amazon retail’s adoption and usage of NoSQL (in the form of DynamoDB) nicely:

Amazon DynamoDB supports multiple high-traffic sites and systems including Alexa, the Amazon.com sites, and all 442 Amazon fulfillment centers. Across the 48 hours of Prime Day, these sources made 7.11 trillion calls to the DynamoDB API, peaking at 45.4 million requests per second.

45 million requests per second. That’s six zeros more than Bitcoin or Eth. Yikes. And this is just one company’s traffic, and only a subset at that (after all, Amazon is a heavy user of SQL databases as well as DynamoDB), so the actual TPS DynamoDB is doing at peak is even higher than the number above.

Talk about having a gap to goal…and it doesn’t stop there. If you imagine using a blockchain (with or without crypto) for a real-world e-commerce application and expect it to support multiple companies in a multi-tenanted fashion, want it to replace legacy database systems, and need a little headroom to grow — a sane target might look like 140 million transactions per second.

That’s seven orders of magnitude from where we are today.

The Myth of Centralization

Why are these results so different? Let’s examine this dichotomy a little more closely. First, note that DynamoDB creates a fully ordered ledger, known as a stream, for each table. Each stream is totally ordered and immutable; once a record is emitted, it never changes.
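
To make the stream idea concrete, here’s a minimal boto3 sketch that walks a table’s change stream (the “Orders” table name is hypothetical, and the table would need streams enabled):

```python
import boto3

dynamodb = boto3.client("dynamodb")
streams = boto3.client("dynamodbstreams")

# Hypothetical table with streams enabled; grab its latest stream ARN.
stream_arn = dynamodb.describe_table(TableName="Orders")["Table"]["LatestStreamArn"]

# Each shard in the stream is an ordered, immutable sequence of change records.
description = streams.describe_stream(StreamArn=stream_arn)["StreamDescription"]
for shard in description["Shards"]:
    iterator = streams.get_shard_iterator(
        StreamArn=stream_arn,
        ShardId=shard["ShardId"],
        ShardIteratorType="TRIM_HORIZON",  # read from the oldest retained record
    )["ShardIterator"]
    for record in streams.get_records(ShardIterator=iterator)["Records"]:
        print(record["eventName"], record["dynamodb"].get("Keys"))
```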

DynamoDB is doing its job by using a whole lot of individual servers communicating over a network to form a distributed algorithm that has a consensus algorithm at its heart.

Cross-table updates are given ACID properties through a transactional API. DynamoDB’s servers don’t “just trust” the network (or other parts of itself), either — data in transit and at rest is encrypted with modern cryptographic protocols, and other machines (or the services running on them) are required to sign and authenticate themselves when they converse.
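
As a rough sketch of what that transactional API looks like from the client side, here’s a cross-table update via boto3’s transact_write_items (the table names, keys, and amounts are invented for illustration):

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Hypothetical example: atomically debit one table and credit another.
# Either both writes commit or neither does.
dynamodb.transact_write_items(
    TransactItems=[
        {
            "Update": {
                "TableName": "PaymentsLedger",
                "Key": {"AccountId": {"S": "acct-123"}},
                "UpdateExpression": "SET Balance = Balance - :amt",
                "ConditionExpression": "Balance >= :amt",  # invariant rides along
                "ExpressionAttributeValues": {":amt": {"N": "25"}},
            }
        },
        {
            "Update": {
                "TableName": "CatalogCredits",
                "Key": {"AccountId": {"S": "acct-456"}},
                "UpdateExpression": "SET Balance = Balance + :amt",
                "ExpressionAttributeValues": {":amt": {"N": "25"}},
            }
        },
    ]
)
```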

Any of this sound familiar?

The classic, albeit defensive, retort to this observation is, “Well, sure, but that’s a centralized database, and decentralized data is so much harder that it just has to be slower.” This defense sounds sort of plausible on the surface, but it doesn’t survive closer inspection.

First, let’s talk about centralization. A database running in single tenant mode with no security or isolation can be very fast indeed — think Redis or a hashtable in RAM, either of which can achieve bandwidth numbers like the DynamoDB rates quoted above. But that’s not even remotely a valid model for how a retail giant like Amazon uses DynamoDB.

Different teams within Amazon (credit card processing, catalog management, search, website, etc.) do not get to read and write each others’ data directly — these teams essentially assume they are mutually untrustworthy as a defensive measure. In other words, they make a similar assumption that a cryptocurrency blockchain node makes about other nodes in its network!

On the other side, DynamoDB supports millions of customer accounts. It has to assume that any one of them can be an evildoer and that it has to protect itself from customers and customers from each other. Amazon retail usage gets exactly the same treatment any other customer would…no more or less privileged than any other DynamoDB user.

Again, this sounds pretty familiar if you’re trying to handle money movement on a blockchain: You can’t trust other clients or other nodes.

These business-level assumptions are too similar to explain a 7-order-of-magnitude difference in performance. We’ll need to look elsewhere for an explanation.

Is it under the hood?

Now let’s look at the technology…maybe the answer is there. “Consensus” often gets thrown up as the reason blockchain bandwidth is so low. While DynamoDB tables are independent outside of transaction boundaries, it’s pretty clear that there’s a lot of consensus, in the form of totally ordered updates, many of which represent financial transactions of some flavor in those Prime Day stats.

Both blockchains and highly distributed databases like DynamoDB need to worry about fault tolerance and data durability, so they both need a voting mechanism.

Here’s one place where blockchains do have it a little harder: Overcoming Byzantine attacks requires a larger majority (2/3 +1) than simply establishing a quorum (1/2 +1) on a data read or write operation. But the math doesn’t hold up: At best, that accounts for 1/6th of the difference in bandwidth between the two systems, not 7 orders of magnitude.
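
To put rough numbers on that (a back-of-the-envelope sketch, not a protocol specification):

```python
# Votes needed for a crash-fault quorum vs. a Byzantine supermajority.
def simple_quorum(n: int) -> int:
    return n // 2 + 1        # strictly more than half

def byzantine_quorum(n: int) -> int:
    return (2 * n) // 3 + 1  # strictly more than two thirds

for n in (5, 9, 100):
    print(n, simple_quorum(n), byzantine_quorum(n))
# At 100 nodes that is 51 votes vs. 67 votes: a constant-factor gap,
# nowhere near 7 orders of magnitude.
```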

What about Proof of Work? Ethereum, Bitcoin and other PoW-based blockchains intentionally slow down transactions in order to be Sybil resistant. But if that were the only issue, PoS blockchains would be demonstrating results similar to DynamoDB’s performance…and so far, they’re still not in the ballpark. Chalk PoW-versus-PoS up to a couple orders of magnitude, though — it’s at least germane as a difference.

How about the network? One difference between two nodes that run on the open Internet and a constellation of servers in (e.g.) AWS EC2 is that the latter run on a proprietary network. Intra-region, and especially intra-Availability Zone (“AZ”) traffic can easily be an order of magnitude higher bandwidth and an order of magnitude lower latency than open Internet-routed traffic, even within a city-sized locale.

But given that most production blockchain nodes at companies like Coinbase are running in AWS data centers, this also can’t explain the differences in performance. At best, it’s an indication that routing in blockchains needs more work…and still leaves 3 more orders of magnitude unaccounted for.

What about the application itself? Since the Amazon retail results are for multiple teams using different tables, there’s essentially a bunch of implicit sharding going on at the application level: Two teams with unrelated applications can use two separate tables, and neither DynamoDB nor these two users will need to order their respective data writes. Is this a possible semantic difference?

For a company like Amazon retail, the teams using DynamoDB “know” when to couple their tables (through use of the transaction API) and when to keep them separate. If a cryptocurrency API requires the blockchain to determine on the fly whether (and how) to shard by looking at every single incoming transaction, then there’s obviously more central coordination required. (Oh, the irony.)

But given that we have a published proof point here that a large company obviously will perform application level sharding through its schema design and API usage, it seems clear that this is a spurious difference — at best, it indicates an impoverished API or data model on the part of crypto, not an a priori requirement that a blockchain has to be slow in practice.

In fact, we have an indication that this dichotomy is something crypto clients are happy to code to: smart contracts. They 1) are distinguished in the API from “normal” (simple transfer) transactions and 2) tend to denote their participants in some fashion.

It’s easy to see the similarity between smart contract calls in a decentralized blockchain and use of the DynamoDB transaction API between teams in a large “centralized” company like Amazon retail. Let’s assume this accounts for an order of magnitude; 2 more to go.

Managed Services and Cloud Optimization

One significant difference in the coding practices of a service like DynamoDB versus pretty much any cryptocurrency is that the former is highly optimized for running in the cloud.

In fact, you’d be hard pressed to locate a line of code in DynamoDB’s implementation that hasn’t been repeatedly scrutinized to see if there’s a way to wring more performance out of it by thinking hard about how and where it runs. Contrast this to crypto implementations, which practically make it a precept to assume the cloud doesn’t exist.

Instance selection, zonal placement, traffic routing, scaling and workload distribution…most of the practical knowledge, operational hygiene, and design methodology learned and practiced over the last decade goes unused in crypto. It’s not hard to imagine that this accounts for the remaining gap.

Getting Schooled on Scalability

Are there design patterns we can glean from a successfully scaled distributed system like DynamoDB as we contemplate next-generation cryptocurrency blockchain architectures?

We can certainly “reverse engineer” some requirements by looking at how a commercially viable solution like Amazon’s Prime Day works today:

  • Application-layer (client-provided) sharding is a hard requirement. This might take a more contract-centric form in a blockchain than in a NoSQL database’s API, but it’s still critical to involve the application in deciding which transactions require total ordering versus partial ordering versus no ordering. Partial ordering via client-provided grouping of transactions in particular is virtually certain to be part of any feasible solution (see the sketch after this list).
  • Quorum voting may indeed be a bottleneck on performance, but Byzantine resistance per se is a red herring. Establishing a majority vote on data durability across mutually authenticated storage servers with full encoding on the wire isn’t much different from a Proof-of-Stake supermajority vote in a blockchain. So while it matters to “sweat the details” on getting this inner loop efficient, it can’t be the case that consensus per se fundamentally forces blockchains to be slow.
  • Routing matters. Routing alone won’t speed up a blockchain by 7 orders of magnitude, but smarter routing might shave off a factor of 10.
  • Infrastructure ignorance comes at a cost. Cryptocurrency developers largely ignore the fact that the cloud exists (certainly that managed services, the most modern incarnation of the cloud, exist). This is surprising, given that the vast majority of cryptocurrency nodes run in the cloud anyway, and it almost certainly accounts for at least some of the large differential in performance. In a system like DynamoDB you can count on the fact that every line of code has been optimized to run well in the cloud. Amazon retail is also a large user of serverless approaches in general, including DynamoDB, AWS Lambda, and other modern cloud services that wring performance and cost savings out of every transaction.
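
As a sketch of that first point (all table, key, and attribute names here are invented), application-level sharding is simply what DynamoDB users already do: unrelated writes land in separate tables with no shared ordering, and only genuinely coupled updates opt in to the transactional API.

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Two unrelated teams, two separate tables: neither write needs to be
# ordered relative to the other, so no cross-table consensus is paid for.
dynamodb.put_item(
    TableName="SearchClickstream",
    Item={"SessionId": {"S": "s-1"}, "Query": {"S": "red shoes"}},
)
dynamodb.put_item(
    TableName="WarehouseInventory",
    Item={"Sku": {"S": "sku-42"}, "OnHand": {"N": "17"}},
)

# Coupled updates opt in to total ordering explicitly, via the
# transactional API sketched earlier, rather than forcing the database
# to infer sharding from every incoming transaction.
```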

We’re not going to solve blockchain scaling in a single article 😀, but there’s a lot we can learn by taking a non-defensive look at the problem and comparing it to the best known distributed algorithms in use by commercial companies today.

Only by being willing to learn and adapt ideas from related areas and applications can blockchains and cryptocurrencies grow into the lofty expectations that have been set for them…and claim a meaningful place in scaling up to handle real-world business transactions.

