Prediction Framework, a time saver for Data Science prediction projects

Posted by Álvaro Lamas, Héctor Parra, Jaime Martínez, Julia Hernández, Miguel Fernandes, Pablo Gil

Acquiring high-value customers using predicted lifetime value, taking specific actions on users with a high propensity to churn, generating and activating audiences based on machine-learning-processed signals… All of these marketing scenarios require analyzing first-party data, running predictions on that data and activating the results in the different marketing platforms, such as Google Ads, as frequently as possible to keep the data fresh.

Feeding marketing platforms like Google Ads on a regular and frequent basis requires a robust, report-oriented and cost-effective ETL & prediction pipeline. These pipelines are very similar regardless of the use case, and it is easy to fall into reinventing the wheel every time, or manually copying and pasting structural code, which increases the risk of introducing errors.

Wouldn’t it be great to have a common reusable structure and just add the specific code for each of the stages?

Here is where Prediction Framework plays a key role in helping you implement and accelerate your first-party data prediction projects by providing the backbone elements of the predictive process.

Prediction Framework is a fully customizable pipeline that simplifies the implementation of prediction projects. You only need to provide the input data source, the logic to extract and process the data, and a Vertex AutoML model ready to use along with the right feature list; the framework takes care of creating and deploying the required artifacts. With a simple configuration, all the common artifacts for the different stages of this type of project are created and deployed for you: data extraction, data preparation (a.k.a. feature engineering), filtering, prediction and post-processing, in addition to other operational functionality such as backfilling, throttling (for API limits), synchronization, storage and reporting.

The Prediction Framework was built to be hosted on Google Cloud Platform. It uses Cloud Functions for all the data processing (extraction, preparation, filtering and post-prediction processing); Firestore, Pub/Sub and Cloud Scheduler for the throttling system and for coordinating the different phases of the predictive process; Vertex AutoML to host your machine learning model; and BigQuery as the final storage of your predictions.
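
To give a flavor of how these pieces fit together, here is a minimal sketch of how one stage could be wired as a Pub/Sub-triggered Cloud Function that runs a BigQuery job and then hands off to the next stage. The project, dataset, table and topic names are invented for the example; the framework generates its own resource names at deployment time.

    import base64
    import json

    from google.cloud import bigquery, pubsub_v1

    # Illustrative names only; the framework creates its own datasets,
    # tables and Pub/Sub topics when it is deployed.
    PROJECT = "my-gcp-project"
    NEXT_TOPIC = f"projects/{PROJECT}/topics/prepare-stage"

    def extract_stage(event, context):
        """Pub/Sub-triggered Cloud Function (1st gen signature) sketching an extract step."""
        params = json.loads(base64.b64decode(event["data"]).decode("utf-8"))
        run_date = params["run_date"]  # set by Cloud Scheduler or by a backfill message

        # Copy the day's transactions from the source into a per-run staging table.
        destination = f"{PROJECT}.pipeline.extract_{run_date.replace('-', '')}"
        job_config = bigquery.QueryJobConfig(
            destination=destination, write_disposition="WRITE_TRUNCATE"
        )
        sql = f"""
            SELECT *
            FROM `{PROJECT}.source.transactions`
            WHERE DATE(transaction_ts) = DATE('{run_date}')
        """
        bigquery.Client().query(sql, job_config=job_config).result()

        # Hand off to the next stage (prepare) through Pub/Sub.
        publisher = pubsub_v1.PublisherClient()
        publisher.publish(NEXT_TOPIC, json.dumps({"run_date": run_date}).encode("utf-8")).result()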

Prediction Framework Architecture

To start using the Prediction Framework, a configuration file needs to be prepared with some environment variables describing the Google Cloud project to be used, the data sources, the ML model that will make the predictions and the scheduler for the throttling system. In addition, custom queries for the data extraction, preparation, filtering and post-processing need to be added when customizing the deploy files. The deployment is then done automatically using a deployment script provided by the tool.
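
As a rough illustration of the kind of settings involved (the actual file format and variable names are defined in the repository, so treat the keys below as hypothetical placeholders):

    # Hypothetical settings sketch; the real framework defines its own
    # configuration file format and variable names in the repository.
    PIPELINE_CONFIG = {
        "gcp_project": "my-gcp-project",           # Google Cloud project hosting the pipeline
        "bq_source_table": "source.transactions",  # first-party data source to extract from
        "vertex_model_id": "1234567890",           # Vertex AutoML model used for the predictions
        "scheduler_cron": "0 4 * * *",             # daily trigger for the throttling system
    }

    # Stage-specific SQL (extraction, preparation, filtering, post-processing)
    # is supplied separately when customizing the deploy files, for example:
    EXTRACT_QUERY = """
        SELECT *
        FROM `my-gcp-project.source.transactions`
        WHERE DATE(transaction_ts) = @run_date
    """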

Once deployed, all the stages are executed one after the other, storing the intermediate and final data in BigQuery tables:

  • Extract: on a scheduled basis, this step queries the transactions corresponding to the run date (scheduler or backfill run date) from the data source and stores them in a new table in the local project's BigQuery.
  • Prepare: as soon as the extracted transactions for a specific date are available, the data is picked up from the local BigQuery and processed according to the specs of the model. Once processed, it is stored in a new table in the local project's BigQuery.
  • Filter: this step queries the data stored by the prepare process, filters the required data and stores it in the local project's BigQuery (e.g. only taking new customers' transactions into consideration; what counts as a new customer is up to the instantiation of the framework for the specific use case, and is covered later).
  • Predict: once the filtered customers are stored, this step reads them from BigQuery and requests the predictions through the Vertex AI API (see the batch prediction sketch after this list). Once the results are ready, they are stored in BigQuery within the target project.
  • Post_process: a formula can be applied to the AutoML batch results to tune the values or to apply thresholds. Once the data is ready, it is stored in BigQuery within the target project.
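
To make the predict step more concrete, the sketch below sends the filtered customers to a Vertex AutoML model as a batch prediction job reading from and writing to BigQuery. The project, dataset, table and model IDs are illustrative placeholders, not the framework's actual resource names.

    from google.cloud import aiplatform

    # Hypothetical resource names, for illustration only.
    PROJECT = "my-gcp-project"
    REGION = "europe-west4"
    MODEL_ID = "1234567890"  # Vertex AutoML model trained on the prepared feature set
    SOURCE_TABLE = f"bq://{PROJECT}.pipeline.filtered_customers"
    DEST_PREFIX = f"bq://{PROJECT}.pipeline"  # predictions land in a table under this dataset

    aiplatform.init(project=PROJECT, location=REGION)
    model = aiplatform.Model(model_name=MODEL_ID)

    # Batch-predict straight from BigQuery and write the raw results back to
    # BigQuery, roughly what happens between the predict and post-process stages.
    job = model.batch_predict(
        job_display_name="prediction-framework-daily-run",
        bigquery_source=SOURCE_TABLE,
        bigquery_destination_prefix=DEST_PREFIX,
        instances_format="bigquery",
        predictions_format="bigquery",
    )
    job.wait()
    print(job.resource_name)  # the completed job references the output table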

One of the powerful features of the Prediction Framework is that it allows backfilling directly from the BigQuery user interface, so if you need to reprocess a whole period of time, it can be done in literally four clicks.

In summary: Prediction Framework simplifies the implementation of first-party data prediction projects, saving time and minimizing the errors that come with manually deploying recurring architectures.

For additional information and to start experimenting, you can visit the Prediction Framework repository on GitHub.

AWS announces phone number enrichments for Amazon Fraud Detector Models

We are excited to announce the launch of phone number enrichments for Amazon Fraud Detector machine learning (ML) models. Amazon Fraud Detector (AFD) is a fully managed service that makes it easy to identify potentially fraudulent online activities, such as the creation of fake accounts or online payment fraud. Using ML under the hood and drawing on over 20 years of fraud detection expertise from Amazon, AFD automatically identifies potentially fraudulent activity in milliseconds, with no ML expertise required.

[Amazon Appstore Small Business Accelerator Program] Phase one launches

(This post is a translation of the original English article.)

 

We are pleased to announce that phase one of the Amazon Appstore Small Business Accelerator Program has launched. Developers with annual revenue of less than 1 million USD in the previous calendar year, or who are new to the Amazon Appstore, now have their fee rate reduced to 20%, meaning they keep 80% of their revenue. The new rate is applied automatically, so no action is required.

As part of the program, we have also created a new dedicated section on Fire tablet devices for apps from eligible small business developers. This section will showcase developers participating in the program and will broadly promote their apps.

The program will move to its next phase in 2022, when AWS promotional credits equivalent to 10% of revenue will be granted. Eligible developers will receive guidance via email on how to claim their AWS promotional credits and how to take advantage of building on the cloud.

 


Frequently Asked Questions (FAQ)

1) Who is eligible for the Small Business Accelerator Program?

  • Developers in all marketplaces whose annual revenue in the previous calendar year was less than 1 million USD, and developers new to the Amazon Appstore, are eligible.
  • If an eligible developer's revenue reaches 1 million USD or more within the current year, the fee rate and benefits revert to the standard terms.
  • If a developer's annual revenue falls below 1 million USD in a future year, the developer becomes eligible for the program again in the following calendar year.

 

2) When will the AWS promotional credits that are part of the Small Business Accelerator Program be granted to developers?

The program's AWS promotional credits are scheduled to become available during 2022. Eligible developers will be notified via email.

 

3) When does the revenue share change take effect for eligible developers?

The revenue share was increased on December 22, 2021, and royalty payments to eligible developers reflect the new rate from that date.

Small business developers now earn more

We’re excited to announce the launch of the first phase of the Amazon Appstore Small Business Accelerator Program. Developers with revenue of less than 1 million USD in the previous calendar year will now receive an increased 80/20 revenue share, with no action required to earn the new rate.

As part of the program, we’ve also created a new dedicated application row on Fire tablet devices which features apps from eligible small business developers. This row will showcase developers from the program and, we hope, will drive more visibility for a broad selection of apps.

The program will continue to expand in 2022, with the next phase featuring AWS promotional credits equivalent to 10 percent of revenue. Eligible developers will receive guidance via email on how to claim their credits and unlock the benefits of building on the cloud.

Want to stay updated? Receive continued program updates by subscribing to the Amazon Appstore Developer Newsletter.

 


FAQs

1) How does eligibility for the Amazon Appstore Small Business Accelerator program work?

  • Developers in all marketplaces who earned less than 1 million USD in the prior calendar year and developers new to Amazon Appstore are eligible.
  • If an eligible developer earns 1 million USD or more in the current year, they will revert to the standard royalty rates and benefits.
  • If a developer’s revenue falls below 1 million USD in a future year, the developer will be eligible in the next calendar year.

 

2) When will AWS promotional credits become available to developers as part of the Amazon Appstore Small Business Accelerator Program?

AWS promotional credits will become available as part of the program in 2022. All eligible developers will be notified via email.

 

3) Once I am eligible, when will the revenue-share adjustment start?

The revenue share adjustment began on December 22, 2021, and higher royalty payments started for all eligible developers on that day.

AWS Secrets Manager now automatically enables SSL connections when rotating database secrets

AWS Secrets Manager now transparently supports SSL connections when rotating database secrets for Amazon RDS MySQL, MariaDB, SQL Server, PostgreSQL and MongoDB. You can now keep SSL enabled at all times for these databases without first having to modify the AWS Lambda resources provided by AWS Secrets Manager.

Amazon S3 on Outposts launches in the AWS GovCloud (US) Regions

Amazon S3 on Outposts is now available in both AWS GovCloud (US) Regions. With this expansion to AWS GovCloud (US), US government agencies and their contractors can move more sensitive workloads to their AWS Outposts while meeting their specific regulatory and compliance requirements for object storage.

The Amazon Connect chat user interface now supports browser notifications for customers

The Amazon Connect chat user interface now supports notifications through the customer's web browser, improving customer satisfaction by letting customers know when an agent has replied to their message. When a customer receives a new chat message while they are in another application or browser window, they get a notification through their web browser that they can simply click to view the message in the chat user interface. This feature works out of the box, with no manual configuration required.

Fujitsu QoS protocol support is now available in AWS Elemental MediaConnect

Starting today, AWS Elemental MediaConnect supports the Fujitsu Quality of Service (QoS) protocol. Fujitsu QoS is one of several transport protocols supported by MediaConnect, a list that includes Zixi, Reliable Internet Stream Transport (RIST), Secure Reliable Transport (SRT) and Real-Time Transport Protocol (RTP). Because MediaConnect takes care of translating between the different protocols, you can design a variety of reliable live video transport applications that run both inside and outside of AWS.

AWS Lambda now supports Internet Protocol version 6 (IPv6) endpoints for inbound connections

AWS Lambda now supports IPv6 endpoints for inbound connections, allowing customers to invoke Lambda functions over IPv6. This helps customers meet IPv6 compliance requirements and removes the need for expensive networking equipment to handle address translation between IPv4 and IPv6.
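
As a quick sketch of what invoking a function over IPv6 can look like from a client, the example below points an SDK client at a dual-stack endpoint. The endpoint URL and function name are assumptions made for the example, not values taken from the announcement.

    import boto3

    # Assumed dual-stack endpoint naming and a hypothetical function name,
    # shown only to illustrate invoking Lambda over an IPv6-capable endpoint.
    client = boto3.client(
        "lambda",
        region_name="us-east-1",
        endpoint_url="https://lambda.us-east-1.api.aws",
    )

    response = client.invoke(
        FunctionName="my-function",
        InvocationType="RequestResponse",
    )
    print(response["StatusCode"])  # 200 indicates a successful synchronous invocation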

EC2 Image Builder adds console support for creating customized images from on-premises images

Customers can now use EC2 Image Builder to create customized images from their on-premises images through a simple console-based experience. This capability makes it easy for customers to bring on-premises images stored in S3 (in OVA, VHD, VHDX, VMDK and raw formats) into EC2 Image Builder pipelines. Customers can then take advantage of existing EC2 Image Builder capabilities, such as process automation and secure image building and distribution, to build images through an intuitive console.