mirror of https://github.com/twitter/the-algorithm.git
synced 2025-01-03 08:01:53 +01:00

Merge branch 'main' into main
This commit is contained in: commit f9b7db14c6

README.md (51 lines changed)
@@ -1,22 +1,42 @@

# Twitter's Recommendation Algorithm

Twitter's Recommendation Algorithm is a set of services and jobs that are responsible for constructing and serving the
Home Timeline. For an introduction to how the algorithm works, please refer to our [engineering blog](https://blog.twitter.com/engineering/en_us/topics/open-source/2023/twitter-recommendation-algorithm). The
diagram below illustrates how major services and jobs interconnect.
Twitter's Recommendation Algorithm is a set of services and jobs that are responsible for serving feeds of Tweets and other content across all Twitter product surfaces (e.g. For You Timeline, Search, Explore). For an introduction to how the algorithm works, please refer to our [engineering blog](https://blog.twitter.com/engineering/en_us/topics/open-source/2023/twitter-recommendation-algorithm).

![](docs/system-diagram.png)

## Architecture

These are the main components of the Recommendation Algorithm included in this repository:
Product surfaces at Twitter are built on a shared set of data, models, and software frameworks. The shared components included in this repository are listed below:

| Type | Component | Description |
|------------|------------|------------|
| Data | [unified-user-actions](unified_user_actions/README.md) | Real-time stream of user actions on Twitter. |
| | [user-signal-service](user-signal-service/README.md) | Centralized platform to retrieve explicit (e.g. likes, replies) and implicit (e.g. profile visits, tweet clicks) user signals. |
| Model | [SimClusters](src/scala/com/twitter/simclusters_v2/README.md) | Community detection and sparse embeddings into those communities. |
| | [TwHIN](https://github.com/twitter/the-algorithm-ml/blob/main/projects/twhin/README.md) | Dense knowledge graph embeddings for Users and Tweets. |
| | [trust-and-safety-models](trust_and_safety_models/README.md) | Models for detecting NSFW or abusive content. |
| | [real-graph](src/scala/com/twitter/interaction_graph/README.md) | Model to predict the likelihood of a Twitter User interacting with another User. |
| | [tweepcred](src/scala/com/twitter/graph/batch/job/tweepcred/README) | PageRank algorithm for calculating Twitter User reputation. |
| | [recos-injector](recos-injector/README.md) | Streaming event processor for building input streams for [GraphJet](https://github.com/twitter/GraphJet) based services. |
| | [graph-feature-service](graph-feature-service/README.md) | Serves graph features for a directed pair of Users (e.g. how many of User A's following liked Tweets from User B). |
| | [topic-social-proof](topic-social-proof/README.md) | Identifies topics related to individual Tweets. |
| | [representation-scorer](representation-scorer/README.md) | Computes scores between pairs of entities (Users, Tweets, etc.) using embedding similarity. |
| Software framework | [navi](navi/README.md) | High-performance machine learning model serving written in Rust. |
| | [product-mixer](product-mixer/README.md) | Software framework for building feeds of content. |
| | [timelines-aggregation-framework](timelines/data_processing/ml_util/aggregation_framework/README.md) | Framework for generating aggregate features in batch or real time. |
| | [representation-manager](representation-manager/README.md) | Service to retrieve embeddings (i.e. SimClusters and TwHIN). |
| | [twml](twml/README.md) | Legacy machine learning framework built on TensorFlow v1. |

The product surface currently included in this repository is the For You Timeline.

### For You Timeline

The diagram below illustrates how major services and jobs interconnect to construct a For You Timeline.

![](docs/system-diagram.png)

The core components of the For You Timeline included in this repository are listed below:

| Type | Component | Description |
|------------|------------|------------|
| Feature | [SimClusters](src/scala/com/twitter/simclusters_v2/README.md) | Community detection and sparse embeddings into those communities. |
| | [TwHIN](https://github.com/twitter/the-algorithm-ml/blob/main/projects/twhin/README.md) | Dense knowledge graph embeddings for Users and Tweets. |
| | [trust-and-safety-models](trust_and_safety_models/README.md) | Models for detecting NSFW or abusive content. |
| | [real-graph](src/scala/com/twitter/interaction_graph/README.md) | Model to predict the likelihood of a Twitter User interacting with another User. |
| | [tweepcred](src/scala/com/twitter/graph/batch/job/tweepcred/README) | PageRank algorithm for calculating Twitter User reputation. |
| | [recos-injector](recos-injector/README.md) | Streaming event processor for building input streams for [GraphJet](https://github.com/twitter/GraphJet) based services. |
| | [graph-feature-service](graph-feature-service/README.md) | Serves graph features for a directed pair of Users (e.g. how many of User A's following liked Tweets from User B). |
| Candidate Source | [search-index](src/java/com/twitter/search/README.md) | Finds and ranks In-Network Tweets. ~50% of Tweets come from this candidate source. |
| | [cr-mixer](cr-mixer/README.md) | Coordination layer for fetching Out-of-Network tweet candidates from underlying compute services. |
| | [user-tweet-entity-graph](src/scala/com/twitter/recos/user_tweet_entity_graph/README.md) (UTEG) | Maintains an in-memory User to Tweet interaction graph, and finds candidates based on traversals of this graph. This is built on the [GraphJet](https://github.com/twitter/GraphJet) framework. Several other GraphJet-based features and candidate sources are located [here](src/scala/com/twitter/recos). |

@@ -26,11 +46,10 @@ These are the main components of the Recommendation Algorithm included in this r

| Tweet mixing & filtering | [home-mixer](home-mixer/README.md) | Main service used to construct and serve the Home Timeline. Built on [product-mixer](product-mixer/README.md). |
| | [visibility-filters](visibilitylib/README.md) | Responsible for filtering Twitter content to support legal compliance, improve product quality, increase user trust, and protect revenue through the use of hard-filtering, visible product treatments, and coarse-grained downranking. |
| | [timelineranker](timelineranker/README.md) | Legacy service which provides relevance-scored tweets from the Earlybird Search Index and the UTEG service. |
| Software framework | [navi](navi/README.md) | High-performance machine learning model serving written in Rust. |
| | [product-mixer](product-mixer/README.md) | Software framework for building feeds of content. |
| | [twml](twml/README.md) | Legacy machine learning framework built on TensorFlow v1. |

We include Bazel BUILD files for most components, but not a top-level BUILD or WORKSPACE file.
## Build and test code

We include Bazel BUILD files for most components, but not a top-level BUILD or WORKSPACE file. We plan to add a more complete build and test system in the future.

## Contributing
RETREIVAL_SIGNALS.md (new file, 51 lines)
@@ -0,0 +1,51 @@

# Signals for Candidate Sources

## Overview

The candidate sourcing stage of the Twitter Recommendation algorithm narrows the pool from approximately 1 billion items down to just a few thousand. It uses Twitter user behavior as its primary input. This document enumerates all the signals used during the candidate sourcing phase.

| Signals | Description |
| :-------------------- | :-------------------------------------------------------------------- |
| Author Follow | The accounts the user explicitly follows. |
| Author Unfollow | The accounts the user recently unfollowed. |
| Author Mute | The accounts the user has muted. |
| Author Block | The accounts the user has blocked. |
| Tweet Favorite | The tweets for which the user clicked the like button. |
| Tweet Unfavorite | The tweets for which the user clicked the unlike button. |
| Retweet | The tweets the user retweeted. |
| Quote Tweet | The tweets the user retweeted with comments. |
| Tweet Reply | The tweets the user replied to. |
| Tweet Share | The tweets for which the user clicked the share button. |
| Tweet Bookmark | The tweets for which the user clicked the bookmark button. |
| Tweet Click | The tweets the user clicked through to view the tweet detail page. |
| Tweet Video Watch | The video tweets the user watched for a certain number of seconds or percentage. |
| Tweet Don't like | The tweets for which the user clicked the "Not interested in this tweet" button. |
| Tweet Report | The tweets for which the user clicked the "Report Tweet" button. |
| Notification Open | The push notification tweets the user opened. |
| Ntab click | The tweets the user clicked on the Notifications page. |
| User AddressBook | The author account identifiers from the user's address book. |
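To make the funnel concrete, here is a minimal, hypothetical Rust sketch of signal-driven candidate sourcing. Everything in it (`Candidate`, `source_candidates`, the example sources) is illustrative and does not come from this repository:

```rust
// Hypothetical sketch of the candidate-sourcing funnel described above.
#[derive(Clone)]
struct Candidate {
    tweet_id: u64,
    score: f32,
}

// Each candidate source consumes recent user signals (follows, favorites,
// clicks, ...) and proposes a small slate; the union across sources is a
// few thousand candidates drawn from roughly a billion tweets.
fn source_candidates(
    sources: &[fn() -> Vec<Candidate>],
    per_source_cap: usize,
) -> Vec<Candidate> {
    let mut pool = Vec::new();
    for source in sources {
        let mut slate = source();
        // Keep only the strongest candidates from each source.
        slate.sort_by(|a, b| b.score.total_cmp(&a.score));
        slate.truncate(per_source_cap);
        pool.extend(slate);
    }
    pool
}

fn main() {
    let sources: [fn() -> Vec<Candidate>; 2] = [
        || vec![Candidate { tweet_id: 1, score: 0.9 }], // e.g. an In-Network source
        || vec![Candidate { tweet_id: 2, score: 0.7 }], // e.g. an Out-of-Network source
    ];
    let pool = source_candidates(&sources, 400);
    println!("pool size: {}, top id: {}", pool.len(), pool[0].tweet_id);
}
```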
## Usage Details

Twitter uses these user signals as training labels and/or ML features in each candidate sourcing algorithm. The following table shows how they are used in each component.

| Signals | USS | SimClusters | TwHin | UTEG | FRS | Light Ranking |
| :-------------------- | :----------------- | :----------------- | :----------------- | :----------------- | :----------------- | :----------------- |
| Author Follow | Features | Features / Labels | Features / Labels | Features | Features / Labels | N/A |
| Author Unfollow | Features | N/A | N/A | N/A | N/A | N/A |
| Author Mute | Features | N/A | N/A | N/A | Features | N/A |
| Author Block | Features | N/A | N/A | N/A | Features | N/A |
| Tweet Favorite | Features | Features | Features / Labels | Features | Features / Labels | Features / Labels |
| Tweet Unfavorite | Features | Features | N/A | N/A | N/A | N/A |
| Retweet | Features | N/A | Features / Labels | Features | Features / Labels | Features / Labels |
| Quote Tweet | Features | N/A | Features / Labels | Features | Features / Labels | Features / Labels |
| Tweet Reply | Features | N/A | Features | Features | Features / Labels | Features |
| Tweet Share | Features | N/A | N/A | N/A | Features | N/A |
| Tweet Bookmark | Features | N/A | N/A | N/A | N/A | N/A |
| Tweet Click | Features | N/A | N/A | N/A | Features | Labels |
| Tweet Video Watch | Features | Features | N/A | N/A | N/A | Labels |
| Tweet Don't like | Features | N/A | N/A | N/A | N/A | N/A |
| Tweet Report | Features | N/A | N/A | N/A | N/A | N/A |
| Notification Open | Features | Features | Features | N/A | Features | N/A |
| Ntab click | Features | Features | Features | N/A | Features | N/A |
| User AddressBook | N/A | N/A | N/A | N/A | Features | N/A |
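As a hedged illustration only, one column of this table can be encoded as a lookup in Rust; the enum and function below are hypothetical, not from the repository, and transcribe the TwHin column above:

```rust
#[derive(Debug, PartialEq)]
enum Usage {
    Features,
    FeaturesAndLabels,
    NotUsed,
}

// Usage of each signal inside TwHin, transcribed from the table above.
fn twhin_usage(signal: &str) -> Usage {
    match signal {
        "Author Follow" | "Tweet Favorite" | "Retweet" | "Quote Tweet" => {
            Usage::FeaturesAndLabels
        }
        "Tweet Reply" | "Notification Open" | "Ntab click" => Usage::Features,
        _ => Usage::NotUsed,
    }
}

fn main() {
    assert_eq!(twhin_usage("Retweet"), Usage::FeaturesAndLabels);
    assert_eq!(twhin_usage("Tweet Click"), Usage::NotUsed);
    println!("{:?}", twhin_usage("Tweet Reply")); // Features
}
```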
@@ -31,6 +31,11 @@ In navi/navi, you can run the following commands:
- `scripts/run_onnx.sh` for [Onnx](https://onnx.ai/)

Do note that you need to create a models directory and create some versions, preferably named using epoch time (e.g., `1679693908377`), so that the models structure looks like:

    models/
      -web_click
        - 1809000
        - 1809010

## Build
You can adapt the above scripts to build using Cargo.
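For illustration, the layout above can be created with a few lines of Rust; this is a hedged sketch, not part of navi, and `web_click` is just the example model name from the tree above:

```rust
use std::fs;
use std::time::{SystemTime, UNIX_EPOCH};

fn main() -> std::io::Result<()> {
    // Use epoch millis as the version directory name, e.g. 1679693908377.
    let version = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .expect("clock before epoch")
        .as_millis();
    fs::create_dir_all(format!("models/web_click/{version}"))?;
    Ok(())
}
```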
@@ -3,7 +3,6 @@ name = "dr_transform"
version = "0.1.0"
edition = "2021"

# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
@@ -12,7 +11,6 @@ bpr_thrift = { path = "../thrift_bpr_adapter/thrift/"}
segdense = { path = "../segdense/"}
thrift = "0.17.0"
ndarray = "0.15"
ort = {git ="https://github.com/pykeio/ort.git", tag="v1.14.2"}
base64 = "0.20.0"
npyz = "0.7.2"
log = "0.4.17"
@@ -21,6 +19,11 @@ prometheus = "0.13.1"
once_cell = "1.17.0"
rand = "0.8.5"
itertools = "0.10.5"
anyhow = "1.0.70"
[target.'cfg(not(target_os="linux"))'.dependencies]
ort = {git ="https://github.com/pykeio/ort.git", features=["profiling"], tag="v1.14.6"}
[target.'cfg(target_os="linux")'.dependencies]
ort = {git ="https://github.com/pykeio/ort.git", features=["profiling", "tensorrt", "cuda", "copy-dylibs"], tag="v1.14.6"}
[dev-dependencies]
criterion = "0.3.0"
@@ -44,5 +44,6 @@ pub struct RenamedFeatures {
}

pub fn parse(json_str: &str) -> Result<AllConfig, Error> {
    serde_json::from_str(json_str)
    let all_config: AllConfig = serde_json::from_str(json_str)?;
    Ok(all_config)
}
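For illustration, here is a self-contained sketch of the error-propagation pattern the new `parse` adopts (assuming `serde` and `serde_json` as dependencies); `MiniConfig` is a made-up stand-in for `AllConfig`:

```rust
use serde::Deserialize;

#[derive(Debug, Deserialize)]
struct MiniConfig {
    model_name: String,
}

// Same shape as the `parse` above: surface the serde error with `?`
// instead of panicking inside the parser.
fn parse(json_str: &str) -> Result<MiniConfig, serde_json::Error> {
    let config: MiniConfig = serde_json::from_str(json_str)?;
    Ok(config)
}

fn main() -> Result<(), serde_json::Error> {
    let config = parse(r#"{"model_name": "home_recap"}"#)?;
    println!("{config:?}");
    Ok(())
}
```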
@@ -2,6 +2,9 @@ use std::collections::BTreeSet;
use std::fmt::{self, Debug, Display};
use std::fs;

use crate::all_config;
use crate::all_config::AllConfig;
use anyhow::{bail, Context};
use bpr_thrift::data::DataRecord;
use bpr_thrift::prediction_service::BatchPredictionRequest;
use bpr_thrift::tensor::GeneralTensor;
@@ -16,8 +19,6 @@ use segdense::util;
use thrift::protocol::{TBinaryInputProtocol, TSerializable};
use thrift::transport::TBufferChannel;

use crate::{all_config, all_config::AllConfig};

pub fn log_feature_match(
    dr: &DataRecord,
    seg_dense_config: &DensificationTransformSpec,
@@ -28,20 +29,24 @@ pub fn log_feature_match(

    for (feature_id, feature_value) in dr.continuous_features.as_ref().unwrap() {
        debug!(
            "{dr_type} - Continuous Datarecord => Feature ID: {feature_id}, Feature value: {feature_value}"
            "{} - Continous Datarecord => Feature ID: {}, Feature value: {}",
            dr_type, feature_id, feature_value
        );
        for input_feature in &seg_dense_config.cont.input_features {
            if input_feature.feature_id == *feature_id {
                debug!("Matching input feature: {input_feature:?}")
                debug!("Matching input feature: {:?}", input_feature)
            }
        }
    }

    for feature_id in dr.binary_features.as_ref().unwrap() {
        debug!("{dr_type} - Binary Datarecord => Feature ID: {feature_id}");
        debug!(
            "{} - Binary Datarecord => Feature ID: {}",
            dr_type, feature_id
        );
        for input_feature in &seg_dense_config.binary.input_features {
            if input_feature.feature_id == *feature_id {
                debug!("Found input feature: {input_feature:?}")
                debug!("Found input feature: {:?}", input_feature)
            }
        }
    }
@@ -90,18 +95,19 @@ impl BatchPredictionRequestToTorchTensorConverter {
        model_version: &str,
        reporting_feature_ids: Vec<(i64, &str)>,
        register_metric_fn: Option<impl Fn(&HistogramVec)>,
    ) -> BatchPredictionRequestToTorchTensorConverter {
        let all_config_path = format!("{model_dir}/{model_version}/all_config.json");
        let seg_dense_config_path =
            format!("{model_dir}/{model_version}/segdense_transform_spec_home_recap_2022.json");
        let seg_dense_config = util::load_config(&seg_dense_config_path);
    ) -> anyhow::Result<BatchPredictionRequestToTorchTensorConverter> {
        let all_config_path = format!("{}/{}/all_config.json", model_dir, model_version);
        let seg_dense_config_path = format!(
            "{}/{}/segdense_transform_spec_home_recap_2022.json",
            model_dir, model_version
        );
        let seg_dense_config = util::load_config(&seg_dense_config_path)?;
        let all_config = all_config::parse(
            &fs::read_to_string(&all_config_path)
                .unwrap_or_else(|error| panic!("error loading all_config.json - {error}")),
        )
        .unwrap();
                .with_context(|| "error loading all_config.json - ")?,
        )?;

        let feature_mapper = util::load_from_parsed_config_ref(&seg_dense_config);
        let feature_mapper = util::load_from_parsed_config(seg_dense_config.clone())?;

        let user_embedding_feature_id = Self::get_feature_id(
            &all_config
@@ -131,11 +137,11 @@ impl BatchPredictionRequestToTorchTensorConverter {
        let (discrete_feature_metrics, continuous_feature_metrics) = METRICS.get_or_init(|| {
            let discrete = HistogramVec::new(
                HistogramOpts::new(":navi:feature_id:discrete", "Discrete Feature ID values")
                    .buckets(Vec::from([
                        0.0f64, 10.0, 20.0, 30.0, 40.0, 50.0, 60.0, 70.0, 80.0, 90.0, 100.0, 110.0,
                    .buckets(Vec::from(&[
                        0.0, 10.0, 20.0, 30.0, 40.0, 50.0, 60.0, 70.0, 80.0, 90.0, 100.0, 110.0,
                        120.0, 130.0, 140.0, 150.0, 160.0, 170.0, 180.0, 190.0, 200.0, 250.0,
                        300.0, 500.0, 1000.0, 10000.0, 100000.0,
                    ])),
                    ] as &'static [f64])),
                &["feature_id"],
            )
            .expect("metric cannot be created");
@@ -144,18 +150,18 @@ impl BatchPredictionRequestToTorchTensorConverter {
                    ":navi:feature_id:continuous",
                    "continuous Feature ID values",
                )
                .buckets(Vec::from([
                    0.0f64, 10.0, 20.0, 30.0, 40.0, 50.0, 60.0, 70.0, 80.0, 90.0, 100.0, 110.0,
                    120.0, 130.0, 140.0, 150.0, 160.0, 170.0, 180.0, 190.0, 200.0, 250.0, 300.0,
                    500.0, 1000.0, 10000.0, 100000.0,
                ])),
                .buckets(Vec::from(&[
                    0.0, 10.0, 20.0, 30.0, 40.0, 50.0, 60.0, 70.0, 80.0, 90.0, 100.0, 110.0, 120.0,
                    130.0, 140.0, 150.0, 160.0, 170.0, 180.0, 190.0, 200.0, 250.0, 300.0, 500.0,
                    1000.0, 10000.0, 100000.0,
                ] as &'static [f64])),
                &["feature_id"],
            )
            .expect("metric cannot be created");
            if let Some(r) = register_metric_fn {
            register_metric_fn.map(|r| {
                r(&discrete);
                r(&continuous);
            }
            });
            (discrete, continuous)
        });

@@ -164,13 +170,16 @@ impl BatchPredictionRequestToTorchTensorConverter {

        for (feature_id, feature_type) in reporting_feature_ids.iter() {
            match *feature_type {
                "discrete" => discrete_features_to_report.insert(*feature_id),
                "continuous" => continuous_features_to_report.insert(*feature_id),
                _ => panic!("Invalid feature type {feature_type} for reporting metrics!"),
                "discrete" => discrete_features_to_report.insert(feature_id.clone()),
                "continuous" => continuous_features_to_report.insert(feature_id.clone()),
                _ => bail!(
                    "Invalid feature type {} for reporting metrics!",
                    feature_type
                ),
            };
        }

        BatchPredictionRequestToTorchTensorConverter {
        Ok(BatchPredictionRequestToTorchTensorConverter {
            all_config,
            seg_dense_config,
            all_config_path,
@@ -183,7 +192,7 @@ impl BatchPredictionRequestToTorchTensorConverter {
            continuous_features_to_report,
            discrete_feature_metrics,
            continuous_feature_metrics,
        }
        })
    }

    fn get_feature_id(feature_name: &str, seg_dense_config: &Root) -> i64 {
@@ -218,43 +227,45 @@ impl BatchPredictionRequestToTorchTensorConverter {
        let mut working_set = vec![0 as f32; total_size];
        let mut bpr_start = 0;
        for (bpr, &bpr_end) in bprs.iter().zip(batch_size) {
            if bpr.common_features.is_some()
                && bpr.common_features.as_ref().unwrap().tensors.is_some()
                && bpr
                    .common_features
                    .as_ref()
                    .unwrap()
                    .tensors
                    .as_ref()
                    .unwrap()
                    .contains_key(&feature_id)
            {
                let source_tensor = bpr
                    .common_features
                    .as_ref()
                    .unwrap()
                    .tensors
                    .as_ref()
                    .unwrap()
                    .get(&feature_id)
                    .unwrap();
                let tensor = match source_tensor {
                    GeneralTensor::FloatTensor(float_tensor) =>
                    //Tensor::of_slice(
            if bpr.common_features.is_some() {
                if bpr.common_features.as_ref().unwrap().tensors.is_some() {
                    if bpr
                        .common_features
                        .as_ref()
                        .unwrap()
                        .tensors
                        .as_ref()
                        .unwrap()
                        .contains_key(&feature_id)
                    {
                        float_tensor
                            .floats
                            .iter()
                            .map(|x| x.into_inner() as f32)
                            .collect::<Vec<_>>()
                    }
                    _ => vec![0 as f32; cols],
                };
                        let source_tensor = bpr
                            .common_features
                            .as_ref()
                            .unwrap()
                            .tensors
                            .as_ref()
                            .unwrap()
                            .get(&feature_id)
                            .unwrap();
                        let tensor = match source_tensor {
                            GeneralTensor::FloatTensor(float_tensor) =>
                            //Tensor::of_slice(
                            {
                                float_tensor
                                    .floats
                                    .iter()
                                    .map(|x| x.into_inner() as f32)
                                    .collect::<Vec<_>>()
                            }
                            _ => vec![0 as f32; cols],
                        };

                        // since the tensor is found in common feature, add it in all batches
                        for row in bpr_start..bpr_end {
                            for col in 0..cols {
                                working_set[row * cols + col] = tensor[col];
                        // since the tensor is found in common feature, add it in all batches
                        for row in bpr_start..bpr_end {
                            for col in 0..cols {
                                working_set[row * cols + col] = tensor[col];
                            }
                        }
                    }
                }
            }
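All of the tensor builders in this file rely on the same row-major flattening, `flat_index = row * cols + idx`, over one flat `Vec`. A tiny self-contained sketch with illustrative dimensions:

```rust
fn main() {
    let (rows, cols) = (3usize, 4usize);
    // A logical rows x cols matrix stored as a single flat buffer.
    let mut flat = vec![f32::NAN; rows * cols];
    let (r, c) = (1usize, 2usize);
    flat[r * cols + c] = 7.5; // write at logical position (1, 2)
    assert_eq!(flat[1 * cols + 2], 7.5);
    println!("ok");
}
```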
@@ -298,9 +309,9 @@ impl BatchPredictionRequestToTorchTensorConverter {
    // (INT64 --> INT64, DataRecord.discrete_feature)
    fn get_continuous(&self, bprs: &[BatchPredictionRequest], batch_ends: &[usize]) -> InputTensor {
        // These need to be part of model schema
        let rows = batch_ends[batch_ends.len() - 1];
        let cols = 5293;
        let full_size = rows * cols;
        let rows: usize = batch_ends[batch_ends.len() - 1];
        let cols: usize = 5293;
        let full_size: usize = rows * cols;
        let default_val = f32::NAN;

        let mut tensor = vec![default_val; full_size];
@@ -325,15 +336,18 @@ impl BatchPredictionRequestToTorchTensorConverter {
                .unwrap();

            for feature in common_features {
                if let Some(f_info) = self.feature_mapper.get(feature.0) {
                    let idx = f_info.index_within_tensor as usize;
                    if idx < cols {
                        // Set value in each row
                        for r in bpr_start..bpr_end {
                            let flat_index = r * cols + idx;
                            tensor[flat_index] = feature.1.into_inner() as f32;
                match self.feature_mapper.get(feature.0) {
                    Some(f_info) => {
                        let idx = f_info.index_within_tensor as usize;
                        if idx < cols {
                            // Set value in each row
                            for r in bpr_start..bpr_end {
                                let flat_index: usize = r * cols + idx;
                                tensor[flat_index] = feature.1.into_inner() as f32;
                            }
                        }
                    }
                    None => (),
                }
                if self.continuous_features_to_report.contains(feature.0) {
                    self.continuous_feature_metrics
@@ -349,24 +363,28 @@ impl BatchPredictionRequestToTorchTensorConverter {

            // Process the batch of datarecords
            for r in bpr_start..bpr_end {
                let dr: &DataRecord = &bpr.individual_features_list[r - bpr_start];
                let dr: &DataRecord =
                    &bpr.individual_features_list[usize::try_from(r - bpr_start).unwrap()];
                if dr.continuous_features.is_some() {
                    for feature in dr.continuous_features.as_ref().unwrap() {
                        if let Some(f_info) = self.feature_mapper.get(feature.0) {
                            let idx = f_info.index_within_tensor as usize;
                            let flat_index = r * cols + idx;
                            if flat_index < tensor.len() && idx < cols {
                                tensor[flat_index] = feature.1.into_inner() as f32;
                        match self.feature_mapper.get(&feature.0) {
                            Some(f_info) => {
                                let idx = f_info.index_within_tensor as usize;
                                let flat_index: usize = r * cols + idx;
                                if flat_index < tensor.len() && idx < cols {
                                    tensor[flat_index] = feature.1.into_inner() as f32;
                                }
                            }
                            None => (),
                        }
                        if self.continuous_features_to_report.contains(feature.0) {
                            self.continuous_feature_metrics
                                .with_label_values(&[feature.0.to_string().as_str()])
                                .observe(feature.1.into_inner())
                                .observe(feature.1.into_inner() as f64)
                        } else if self.discrete_features_to_report.contains(feature.0) {
                            self.discrete_feature_metrics
                                .with_label_values(&[feature.0.to_string().as_str()])
                                .observe(feature.1.into_inner())
                                .observe(feature.1.into_inner() as f64)
                        }
                    }
                }
@@ -383,10 +401,10 @@ impl BatchPredictionRequestToTorchTensorConverter {

    fn get_binary(&self, bprs: &[BatchPredictionRequest], batch_ends: &[usize]) -> InputTensor {
        // These need to be part of model schema
        let rows = batch_ends[batch_ends.len() - 1];
        let cols = 149;
        let full_size = rows * cols;
        let default_val = 0;
        let rows: usize = batch_ends[batch_ends.len() - 1];
        let cols: usize = 149;
        let full_size: usize = rows * cols;
        let default_val: i64 = 0;

        let mut v = vec![default_val; full_size];

@@ -410,15 +428,18 @@ impl BatchPredictionRequestToTorchTensorConverter {
                .unwrap();

            for feature in common_features {
                if let Some(f_info) = self.feature_mapper.get(feature) {
                    let idx = f_info.index_within_tensor as usize;
                    if idx < cols {
                        // Set value in each row
                        for r in bpr_start..bpr_end {
                            let flat_index = r * cols + idx;
                            v[flat_index] = 1;
                match self.feature_mapper.get(feature) {
                    Some(f_info) => {
                        let idx = f_info.index_within_tensor as usize;
                        if idx < cols {
                            // Set value in each row
                            for r in bpr_start..bpr_end {
                                let flat_index: usize = r * cols + idx;
                                v[flat_index] = 1;
                            }
                        }
                    }
                    None => (),
                }
            }
        }
@@ -428,10 +449,13 @@ impl BatchPredictionRequestToTorchTensorConverter {
                let dr: &DataRecord = &bpr.individual_features_list[r - bpr_start];
                if dr.binary_features.is_some() {
                    for feature in dr.binary_features.as_ref().unwrap() {
                        if let Some(f_info) = self.feature_mapper.get(feature) {
                            let idx = f_info.index_within_tensor as usize;
                            let flat_index = r * cols + idx;
                            v[flat_index] = 1;
                        match self.feature_mapper.get(&feature) {
                            Some(f_info) => {
                                let idx = f_info.index_within_tensor as usize;
                                let flat_index: usize = r * cols + idx;
                                v[flat_index] = 1;
                            }
                            None => (),
                        }
                    }
                }
@@ -448,10 +472,10 @@ impl BatchPredictionRequestToTorchTensorConverter {
    #[allow(dead_code)]
    fn get_discrete(&self, bprs: &[BatchPredictionRequest], batch_ends: &[usize]) -> InputTensor {
        // These need to be part of model schema
        let rows = batch_ends[batch_ends.len() - 1];
        let cols = 320;
        let full_size = rows * cols;
        let default_val = 0;
        let rows: usize = batch_ends[batch_ends.len() - 1];
        let cols: usize = 320;
        let full_size: usize = rows * cols;
        let default_val: i64 = 0;

        let mut v = vec![default_val; full_size];

@@ -475,15 +499,18 @@ impl BatchPredictionRequestToTorchTensorConverter {
                .unwrap();

            for feature in common_features {
                if let Some(f_info) = self.feature_mapper.get(feature.0) {
                    let idx = f_info.index_within_tensor as usize;
                    if idx < cols {
                        // Set value in each row
                        for r in bpr_start..bpr_end {
                            let flat_index = r * cols + idx;
                            v[flat_index] = *feature.1;
                match self.feature_mapper.get(feature.0) {
                    Some(f_info) => {
                        let idx = f_info.index_within_tensor as usize;
                        if idx < cols {
                            // Set value in each row
                            for r in bpr_start..bpr_end {
                                let flat_index: usize = r * cols + idx;
                                v[flat_index] = *feature.1;
                            }
                        }
                    }
                    None => (),
                }
                if self.discrete_features_to_report.contains(feature.0) {
                    self.discrete_feature_metrics
@@ -495,15 +522,18 @@ impl BatchPredictionRequestToTorchTensorConverter {

            // Process the batch of datarecords
            for r in bpr_start..bpr_end {
                let dr: &DataRecord = &bpr.individual_features_list[r];
                let dr: &DataRecord = &bpr.individual_features_list[usize::try_from(r).unwrap()];
                if dr.discrete_features.is_some() {
                    for feature in dr.discrete_features.as_ref().unwrap() {
                        if let Some(f_info) = self.feature_mapper.get(feature.0) {
                            let idx = f_info.index_within_tensor as usize;
                            let flat_index = r * cols + idx;
                            if flat_index < v.len() && idx < cols {
                                v[flat_index] = *feature.1;
                        match self.feature_mapper.get(&feature.0) {
                            Some(f_info) => {
                                let idx = f_info.index_within_tensor as usize;
                                let flat_index: usize = r * cols + idx;
                                if flat_index < v.len() && idx < cols {
                                    v[flat_index] = *feature.1;
                                }
                            }
                            None => (),
                        }
                        if self.discrete_features_to_report.contains(feature.0) {
                            self.discrete_feature_metrics
@@ -569,7 +599,7 @@ impl Converter for BatchPredictionRequestToTorchTensorConverter {
            .map(|bpr| bpr.individual_features_list.len())
            .scan(0usize, |acc, e| {
                //running total
                *acc += e;
                *acc = *acc + e;
                Some(*acc)
            })
            .collect::<Vec<_>>();
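The `scan` above folds per-request batch sizes into cumulative end offsets. A self-contained sketch of the same running-total pattern, with made-up sizes:

```rust
fn main() {
    let batch_sizes = [3usize, 2, 4];
    let batch_ends: Vec<usize> = batch_sizes
        .iter()
        .scan(0usize, |acc, e| {
            *acc += e; // running total
            Some(*acc)
        })
        .collect();
    // Batch i occupies rows batch_ends[i-1]..batch_ends[i] (0 for i == 0).
    assert_eq!(batch_ends, vec![3, 5, 9]);
    println!("{batch_ends:?}");
}
```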
@@ -3,3 +3,4 @@ pub mod converter;
#[cfg(test)]
mod test;
pub mod util;
pub extern crate ort;
@@ -1,8 +1,7 @@
[package]
name = "navi"
version = "2.0.42"
version = "2.0.45"
edition = "2021"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html

[[bin]]
name = "navi"
@@ -16,12 +15,19 @@ required-features=["torch"]
name = "navi_onnx"
path = "src/bin/navi_onnx.rs"
required-features=["onnx"]
[[bin]]
name = "navi_onnx_test"
path = "src/bin/bin_tests/navi_onnx_test.rs"
[[bin]]
name = "navi_torch_test"
path = "src/bin/bin_tests/navi_torch_test.rs"
required-features=["torch"]

[features]
default=[]
navi_console=[]
torch=["tch"]
onnx=["ort"]
onnx=[]
tf=["tensorflow"]
[dependencies]
itertools = "0.10.5"
@@ -47,6 +53,7 @@ parking_lot = "0.12.1"
rand = "0.8.5"
rand_pcg = "0.3.1"
random = "0.12.2"
x509-parser = "0.15.0"
sha256 = "1.0.3"
tonic = { version = "0.6.2", features=['compression', 'tls'] }
tokio = { version = "1.17.0", features = ["macros", "rt-multi-thread", "fs", "process"] }
@@ -55,16 +62,12 @@ npyz = "0.7.3"
base64 = "0.21.0"
histogram = "0.6.9"
tch = {version = "0.10.3", optional = true}
tensorflow = { version = "0.20.0", optional = true }
tensorflow = { version = "0.18.0", optional = true }
once_cell = {version = "1.17.1"}
ndarray = "0.15"
serde = "1.0.154"
serde_json = "1.0.94"
dr_transform = { path = "../dr_transform"}
[target.'cfg(not(target_os="linux"))'.dependencies]
ort = {git ="https://github.com/pykeio/ort.git", features=["profiling"], optional = true, tag="v1.14.2"}
[target.'cfg(target_os="linux")'.dependencies]
ort = {git ="https://github.com/pykeio/ort.git", features=["profiling", "tensorrt", "cuda", "copy-dylibs"], optional = true, tag="v1.14.2"}
[build-dependencies]
tonic-build = {version = "0.6.2", features=['prost', "compression"] }
[profile.release]
@@ -74,3 +77,5 @@ ndarray-rand = "0.14.0"
tokio-test = "*"
assert_cmd = "2.0"
criterion = "0.4.0"
@@ -122,7 +122,7 @@ enum FullTypeId {
// TFT_TENSOR[TFT_INT32, TFT_UNKNOWN]
// is a Tensor of int32 element type and unknown shape.
//
// TODO: Define TFT_SHAPE and add more examples.
// TODO(mdan): Define TFT_SHAPE and add more examples.
TFT_TENSOR = 1000;

// Array (or tensorflow::TensorList in the variant type registry).
@@ -178,7 +178,7 @@ enum FullTypeId {
// object (for now).

// The bool element type.
// TODO
// TODO(mdan): Quantized types, legacy representations (e.g. ref)
TFT_BOOL = 200;
// Integer element types.
TFT_UINT8 = 201;
@@ -195,7 +195,7 @@ enum FullTypeId {
TFT_DOUBLE = 211;
TFT_BFLOAT16 = 215;
// Complex element types.
// TODO: Represent as TFT_COMPLEX[TFT_DOUBLE] instead?
// TODO(mdan): Represent as TFT_COMPLEX[TFT_DOUBLE] instead?
TFT_COMPLEX64 = 212;
TFT_COMPLEX128 = 213;
// The string element type.
@@ -240,7 +240,7 @@ enum FullTypeId {
// ownership is in the true sense: "the op argument representing the lock is
// available".
// Mutex locks are the dynamic counterpart of control dependencies.
// TODO: Properly document this thing.
// TODO(mdan): Properly document this thing.
//
// Parametrization: TFT_MUTEX_LOCK[].
TFT_MUTEX_LOCK = 10202;
@@ -271,6 +271,6 @@ message FullTypeDef {
oneof attr {
string s = 3;
int64 i = 4;
// TODO: list/tensor, map? Need to reconcile with TFT_RECORD, etc.
// TODO(mdan): list/tensor, map? Need to reconcile with TFT_RECORD, etc.
}
}

@@ -23,7 +23,7 @@ message FunctionDefLibrary {
// with a value. When a GraphDef has a call to a function, it must
// have binding for every attr defined in the signature.
//
// TODO:
// TODO(zhifengc):
// * device spec, etc.
message FunctionDef {
// The definition of the function's name, arguments, return values,

@@ -61,7 +61,7 @@ message NodeDef {
// one of the names from the corresponding OpDef's attr field).
// The values must have a type matching the corresponding OpDef
// attr's type field.
// TODO: Add some examples here showing best practices.
// TODO(josh11b): Add some examples here showing best practices.
map<string, AttrValue> attr = 5;

message ExperimentalDebugInfo {

@@ -96,7 +96,7 @@ message OpDef {
// Human-readable description.
string description = 4;

// TODO: bool is_optional?
// TODO(josh11b): bool is_optional?

// --- Constraints ---
// These constraints are only in effect if specified. Default is no
@@ -139,7 +139,7 @@ message OpDef {
// taking input from multiple devices with a tree of aggregate ops
// that aggregate locally within each device (and possibly within
// groups of nearby devices) before communicating.
// TODO: Implement that optimization.
// TODO(josh11b): Implement that optimization.
bool is_aggregate = 16; // for things like add

// Other optimizations go here, like

@@ -53,7 +53,7 @@ message MemoryStats {

// Time/size stats recorded for a single execution of a graph node.
message NodeExecStats {
// TODO: Use some more compact form of node identity than
// TODO(tucker): Use some more compact form of node identity than
// the full string name. Either all processes should agree on a
// global id (cost_id?) for each node, or we should use a hash of
// the name.

@@ -16,7 +16,7 @@ option go_package = "github.com/tensorflow/tensorflow/tensorflow/go/core/framewo
message TensorProto {
DataType dtype = 1;

// Shape of the tensor. TODO: sort out the 0-rank issues.
// Shape of the tensor. TODO(touts): sort out the 0-rank issues.
TensorShapeProto tensor_shape = 2;

// Only one of the representations below is set, one of "tensor_contents" and

@@ -532,7 +532,7 @@ message ConfigProto {

// We removed the flag client_handles_error_formatting. Marking the tag
// number as reserved.
// TODO: Should we just remove this tag so that it can be
// TODO(shikharagarwal): Should we just remove this tag so that it can be
// used in future for other purpose?
reserved 2;

@@ -576,7 +576,7 @@ message ConfigProto {
// - If isolate_session_state is true, session states are isolated.
// - If isolate_session_state is false, session states are shared.
//
// TODO: Add a single API that consistently treats
// TODO(b/129330037): Add a single API that consistently treats
// isolate_session_state and ClusterSpec propagation.
bool share_session_state_in_clusterspec_propagation = 8;

@@ -704,7 +704,7 @@ message ConfigProto {

// Options for a single Run() call.
message RunOptions {
// TODO Turn this into a TraceOptions proto which allows
// TODO(pbar) Turn this into a TraceOptions proto which allows
// tracing to be controlled in a more orthogonal manner?
enum TraceLevel {
NO_TRACE = 0;
@@ -781,7 +781,7 @@ message RunMetadata {
repeated GraphDef partition_graphs = 3;

message FunctionGraphs {
// TODO: Include some sort of function/cache-key identifier?
// TODO(nareshmodi): Include some sort of function/cache-key identifier?
repeated GraphDef partition_graphs = 1;

GraphDef pre_optimization_graph = 2;

@@ -194,7 +194,7 @@ service CoordinationService {

// Report error to the task. RPC sets the receiving instance of coordination
// service agent to error state permanently.
// TODO: Consider splitting this into a different RPC service.
// TODO(b/195990880): Consider splitting this into a different RPC service.
rpc ReportErrorToAgent(ReportErrorToAgentRequest)
returns (ReportErrorToAgentResponse);

@@ -46,7 +46,7 @@ message DebugTensorWatch {
// are to be debugged, the callers of Session::Run() must use distinct
// debug_urls to make sure that the streamed or dumped events do not overlap
// among the invocations.
// TODO: More visible documentation of this in g3docs.
// TODO(cais): More visible documentation of this in g3docs.
repeated string debug_urls = 4;

// Do not error out if debug op creation fails (e.g., due to dtype

@@ -12,7 +12,7 @@ option java_package = "org.tensorflow.util";
option go_package = "github.com/tensorflow/tensorflow/tensorflow/go/core/protobuf/for_core_protos_go_proto";

// Available modes for extracting debugging information from a Tensor.
// TODO: Document the detailed column names and semantics in a separate
// TODO(cais): Document the detailed column names and semantics in a separate
// markdown file once the implementation settles.
enum TensorDebugMode {
UNSPECIFIED = 0;
@@ -223,7 +223,7 @@ message DebuggedDevice {
// A debugger-generated ID for the device. Guaranteed to be unique within
// the scope of the debugged TensorFlow program, including single-host and
// multi-host settings.
// TODO: Test the uniqueness guarantee in multi-host settings.
// TODO(cais): Test the uniqueness guarantee in multi-host settings.
int32 device_id = 2;
}

@@ -264,7 +264,7 @@ message Execution {
// field with the DebuggedDevice messages.
repeated int32 output_tensor_device_ids = 9;

// TODO support, add more fields
// TODO(cais): When backporting to V1 Session.run() support, add more fields
// such as fetches and feeds.
}

@@ -7,7 +7,7 @@ option go_package = "github.com/tensorflow/tensorflow/tensorflow/go/core/protobu

// Used to serialize and transmit tensorflow::Status payloads through
// grpc::Status `error_details` since grpc::Status lacks payload API.
// TODO: Use GRPC API once supported.
// TODO(b/204231601): Use GRPC API once supported.
message GrpcPayloadContainer {
map<string, bytes> payloads = 1;
}

@@ -172,7 +172,7 @@ message WaitQueueDoneRequest {
}

message WaitQueueDoneResponse {
// TODO: Consider adding NodeExecStats here to be able to
// TODO(nareshmodi): Consider adding NodeExecStats here to be able to
// propagate some stats.
}

@@ -94,7 +94,7 @@ message ExtendSessionRequest {
}

message ExtendSessionResponse {
// TODO: Return something about the operation?
// TODO(mrry): Return something about the operation?

// The new version number for the extended graph, to be used in the next call
// to ExtendSession.

@@ -176,7 +176,7 @@ message SavedBareConcreteFunction {
// allows the ConcreteFunction to be called with nest structure inputs. This
// field may not be populated. If this field is absent, the concrete function
// can only be called with flat inputs.
// TODO: support calling saved ConcreteFunction with structured
// TODO(b/169361281): support calling saved ConcreteFunction with structured
// inputs in C++ SavedModel API.
FunctionSpec function_spec = 4;
}

@@ -17,7 +17,7 @@ option go_package = "github.com/tensorflow/tensorflow/tensorflow/go/core/protobu

// Special header that is associated with a bundle.
//
// TODO: maybe in the future, we can add information about
// TODO(zongheng,zhifengc): maybe in the future, we can add information about
// which binary produced this checkpoint, timestamp, etc. Sometime, these can be
// valuable debugging information. And if needed, these can be used as defensive
// information ensuring reader (binary version) of the checkpoint and the writer

@@ -188,7 +188,7 @@ message DeregisterGraphRequest {
}

message DeregisterGraphResponse {
// TODO: Optionally add summary stats for the graph.
// TODO(mrry): Optionally add summary stats for the graph.
}

////////////////////////////////////////////////////////////////////////////////
@@ -294,7 +294,7 @@ message RunGraphResponse {

// If the request asked for execution stats, the cost graph, or the partition
// graphs, these are returned here.
// TODO: Package these in a RunMetadata instead.
// TODO(suharshs): Package these in a RunMetadata instead.
StepStats step_stats = 2;
CostGraphDef cost_graph = 3;
repeated GraphDef partition_graph = 4;

@@ -13,5 +13,5 @@ message LogMetadata {
SamplingConfig sampling_config = 2;
// List of tags used to load the relevant MetaGraphDef from SavedModel.
repeated string saved_model_tags = 3;
// TODO: Add more metadata as mentioned in the bug.
// TODO(b/33279154): Add more metadata as mentioned in the bug.
}

@@ -58,7 +58,7 @@ message FileSystemStoragePathSourceConfig {

// A single servable name/base_path pair to monitor.
// DEPRECATED: Use 'servables' instead.
// TODO: Stop using these fields, and ultimately remove them here.
// TODO(b/30898016): Stop using these fields, and ultimately remove them here.
string servable_name = 1 [deprecated = true];
string base_path = 2 [deprecated = true];

@@ -76,7 +76,7 @@ message FileSystemStoragePathSourceConfig {
// check for a version to appear later.)
// DEPRECATED: Use 'servable_versions_always_present' instead, which includes
// this behavior.
// TODO: Remove 2019-10-31 or later.
// TODO(b/30898016): Remove 2019-10-31 or later.
bool fail_if_zero_versions_at_startup = 4 [deprecated = true];

// If true, the servable is always expected to exist on the underlying

@@ -9,7 +9,7 @@ import "tensorflow_serving/config/logging_config.proto";
option cc_enable_arenas = true;

// The type of model.
// TODO: DEPRECATED.
// TODO(b/31336131): DEPRECATED.
enum ModelType {
MODEL_TYPE_UNSPECIFIED = 0 [deprecated = true];
TENSORFLOW = 1 [deprecated = true];
@@ -31,7 +31,7 @@ message ModelConfig {
string base_path = 2;

// Type of model.
// TODO: DEPRECATED. Please use 'model_platform' instead.
// TODO(b/31336131): DEPRECATED. Please use 'model_platform' instead.
ModelType model_type = 3 [deprecated = true];

// Type of model (e.g. "tensorflow").
@@ -1,10 +1,9 @@
#!/bin/sh
#RUST_LOG=debug LD_LIBRARY_PATH=so/onnx/lib target/release/navi_onnx --port 30 --num-worker-threads 8 --intra-op-parallelism 8 --inter-op-parallelism 8 \
RUST_LOG=info LD_LIBRARY_PATH=so/onnx/lib cargo run --bin navi_onnx --features onnx -- \
  --port 30 --num-worker-threads 8 --intra-op-parallelism 8 --inter-op-parallelism 8 \
  --port 8030 --num-worker-threads 8 \
  --model-check-interval-secs 30 \
  --model-dir models/int8 \
  --output caligrated_probabilities \
  --input "" \
  --modelsync-cli "echo" \
  --onnx-ep-options use_arena=true
  --onnx-ep-options use_arena=true \
  --model-dir models/prod_home --output caligrated_probabilities --input "" --intra-op-parallelism 8 --inter-op-parallelism 8 --max-batch-size 1 --batch-time-out-millis 1 \
  --model-dir models/prod_home1 --output caligrated_probabilities --input "" --intra-op-parallelism 8 --inter-op-parallelism 8 --max-batch-size 1 --batch-time-out-millis 1 \
@@ -1,11 +1,24 @@
use anyhow::Result;
use log::info;
use navi::cli_args::{ARGS, MODEL_SPECS};
use navi::onnx_model::onnx::OnnxModel;
use navi::{bootstrap, metrics};

fn main() -> Result<()> {
    env_logger::init();
    assert_eq!(MODEL_SPECS.len(), ARGS.inter_op_parallelism.len());
    info!("global: {:?}", ARGS.onnx_global_thread_pool_options);
    let assert_session_params = if ARGS.onnx_global_thread_pool_options.is_empty() {
        // std::env::set_var("OMP_NUM_THREADS", "1");
        info!("now we use per session thread pool");
        MODEL_SPECS.len()
    }
    else {
        info!("now we use global thread pool");
        0
    };
    assert_eq!(assert_session_params, ARGS.inter_op_parallelism.len());

    metrics::register_custom_metrics();
    bootstrap::bootstrap(OnnxModel::new)
}
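A hedged restatement of the assertion logic above, with illustrative values: when a global ONNX thread pool is configured, per-session parallelism settings must be empty; otherwise one setting per model is expected.

```rust
fn expected_session_params(global_pool_opts_len: usize, num_models: usize) -> usize {
    if global_pool_opts_len == 0 {
        num_models // per-session thread pools: one setting per model
    } else {
        0 // global thread pool: no per-session settings allowed
    }
}

fn main() {
    assert_eq!(expected_session_params(0, 3), 3);
    assert_eq!(expected_session_params(2, 3), 0);
}
```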
@@ -1,5 +1,6 @@
use anyhow::Result;
use log::{info, warn};
use x509_parser::{prelude::{parse_x509_pem}, parse_x509_certificate};
use std::collections::HashMap;
use tokio::time::Instant;
use tonic::{
@@ -27,6 +28,7 @@ use crate::cli_args::{ARGS, INPUTS, OUTPUTS};
use crate::metrics::{
    NAVI_VERSION, NUM_PREDICTIONS, NUM_REQUESTS_FAILED, NUM_REQUESTS_FAILED_BY_MODEL,
    NUM_REQUESTS_RECEIVED, NUM_REQUESTS_RECEIVED_BY_MODEL, RESPONSE_TIME_COLLECTOR,
    CERT_EXPIRY_EPOCH
};
use crate::predict_service::{Model, PredictService};
use crate::tf_proto::tensorflow_serving::model_spec::VersionChoice::Version;
@@ -207,6 +209,9 @@ impl<T: Model> PredictionService for PredictService<T> {
            PredictResult::DropDueToOverload => Err(Status::resource_exhausted("")),
            PredictResult::ModelNotFound(idx) => {
                Err(Status::not_found(format!("model index {}", idx)))
            },
            PredictResult::ModelNotReady(idx) => {
                Err(Status::unavailable(format!("model index {}", idx)))
            }
            PredictResult::ModelVersionNotFound(idx, version) => Err(
                Status::not_found(format!("model index:{}, version {}", idx, version)),
@@ -230,6 +235,12 @@ impl<T: Model> PredictionService for PredictService<T> {
        }
    }

// Logs the certificate expiry time and exports it as a gauge metric.
fn report_expiry(expiry_time: i64) {
    info!("Certificate expires at epoch: {:?}", expiry_time);
    CERT_EXPIRY_EPOCH.set(expiry_time as i64);
}

pub fn bootstrap<T: Model>(model_factory: ModelFactory<T>) -> Result<()> {
    info!("package: {}, version: {}, args: {:?}", NAME, VERSION, *ARGS);
    //we follow SemVer. So here we assume MAJOR.MINOR.PATCH
@@ -246,6 +257,7 @@ pub fn bootstrap<T: Model>(model_factory: ModelFactory<T>) -> Result<()> {
        );
    }

    tokio::runtime::Builder::new_multi_thread()
        .thread_name("async worker")
        .worker_threads(ARGS.num_worker_threads)
@@ -263,6 +275,21 @@ pub fn bootstrap<T: Model>(model_factory: ModelFactory<T>) -> Result<()> {
            let mut builder = if ARGS.ssl_dir.is_empty() {
                Server::builder()
            } else {
                // Read the pem file as a string
                let pem_str = std::fs::read_to_string(format!("{}/server.crt", ARGS.ssl_dir)).unwrap();
                let res = parse_x509_pem(&pem_str.as_bytes());
                match res {
                    Ok((rem, pem_2)) => {
                        assert!(rem.is_empty());
                        assert_eq!(pem_2.label, String::from("CERTIFICATE"));
                        let res_x509 = parse_x509_certificate(&pem_2.contents);
                        info!("Certificate label: {}", pem_2.label);
                        assert!(res_x509.is_ok());
                        report_expiry(res_x509.unwrap().1.validity().not_after.timestamp());
                    },
                    _ => panic!("PEM parsing failed: {:?}", res),
                }

                let key = tokio::fs::read(format!("{}/server.key", ARGS.ssl_dir))
                    .await
                    .expect("can't find key file");
@@ -278,7 +305,7 @@ pub fn bootstrap<T: Model>(model_factory: ModelFactory<T>) -> Result<()> {
                let identity = Identity::from_pem(pem.clone(), key);
                let client_ca_cert = Certificate::from_pem(pem.clone());
                let tls = ServerTlsConfig::new()
                    .identity(identity)
                    .identity(identity)
                    .client_ca_root(client_ca_cert);
                Server::builder()
                    .tls_config(tls)
@@ -87,13 +87,11 @@ pub struct Args {
    pub intra_op_parallelism: Vec<String>,
    #[clap(
        long,
        default_value = "14",
        help = "number of threads to parallelize computations of the graph"
    )]
    pub inter_op_parallelism: Vec<String>,
    #[clap(
        long,
        default_value = "serving_default",
        help = "signature of a serving. only TF"
    )]
    pub serving_sig: Vec<String>,
@@ -107,10 +105,12 @@ pub struct Args {
        help = "max warmup records to use. warmup only implemented for TF"
    )]
    pub max_warmup_records: usize,
    #[clap(long, value_parser = Args::parse_key_val::<String, String>, value_delimiter=',')]
    pub onnx_global_thread_pool_options: Vec<(String, String)>,
    #[clap(
        long,
        default_value = "true",
        help = "when to use graph parallelization. only for ONNX"
        long,
        default_value = "true",
        help = "when to use graph parallelization. only for ONNX"
    )]
    pub onnx_use_parallel_mode: String,
    // #[clap(long, default_value = "false")]
@@ -144,6 +144,7 @@ pub enum PredictResult {
    Ok(Vec<TensorScores>, i64),
    DropDueToOverload,
    ModelNotFound(usize),
    ModelNotReady(usize),
    ModelVersionNotFound(usize, i64),
}

@@ -171,6 +171,9 @@ lazy_static! {
        &["model_name"]
    )
    .expect("metric can be created");
    pub static ref CERT_EXPIRY_EPOCH: IntGauge =
        IntGauge::new(":navi:cert_expiry_epoch", "Timestamp when the current cert expires")
            .expect("metric can be created");
}

pub fn register_custom_metrics() {
@@ -249,6 +252,10 @@ pub fn register_custom_metrics() {
    REGISTRY
        .register(Box::new(CONVERTER_TIME_COLLECTOR.clone()))
        .expect("collector can be registered");
    REGISTRY
        .register(Box::new(CERT_EXPIRY_EPOCH.clone()))
        .expect("collector can be registered");
}

pub fn register_dynamic_metrics(c: &HistogramVec) {
@@ -13,21 +13,22 @@ pub mod onnx {
    use dr_transform::converter::{BatchPredictionRequestToTorchTensorConverter, Converter};
    use itertools::Itertools;
    use log::{debug, info};
    use ort::environment::Environment;
    use ort::session::Session;
    use ort::tensor::InputTensor;
    use ort::{ExecutionProvider, GraphOptimizationLevel, SessionBuilder};
    use dr_transform::ort::environment::Environment;
    use dr_transform::ort::session::Session;
    use dr_transform::ort::tensor::InputTensor;
    use dr_transform::ort::{ExecutionProvider, GraphOptimizationLevel, SessionBuilder};
    use dr_transform::ort::LoggingLevel;
    use serde_json::Value;
    use std::fmt::{Debug, Display};
    use std::sync::Arc;
    use std::{fmt, fs};
    use tokio::time::Instant;

    lazy_static! {
        pub static ref ENVIRONMENT: Arc<Environment> = Arc::new(
            Environment::builder()
                .with_name("onnx home")
                .with_log_level(ort::LoggingLevel::Error)
                .with_log_level(LoggingLevel::Error)
                .with_global_thread_pool(ARGS.onnx_global_thread_pool_options.clone())
                .build()
                .unwrap()
        );
@@ -101,23 +102,30 @@ pub mod onnx {
            let meta_info = format!("{}/{}/{}", ARGS.model_dir[idx], version, META_INFO);
            let mut builder = SessionBuilder::new(&ENVIRONMENT)?
                .with_optimization_level(GraphOptimizationLevel::Level3)?
                .with_parallel_execution(ARGS.onnx_use_parallel_mode == "true")?
                .with_inter_threads(
                    utils::get_config_or(
                        model_config,
                        "inter_op_parallelism",
                        &ARGS.inter_op_parallelism[idx],
                    )
                    .parse()?,
                )?
                .with_intra_threads(
                    utils::get_config_or(
                        model_config,
                        "intra_op_parallelism",
                        &ARGS.intra_op_parallelism[idx],
                    )
                    .parse()?,
                )?
                .with_parallel_execution(ARGS.onnx_use_parallel_mode == "true")?;
            if ARGS.onnx_global_thread_pool_options.is_empty() {
                builder = builder
                    .with_inter_threads(
                        utils::get_config_or(
                            model_config,
                            "inter_op_parallelism",
                            &ARGS.inter_op_parallelism[idx],
                        )
                        .parse()?,
                    )?
                    .with_intra_threads(
                        utils::get_config_or(
                            model_config,
                            "intra_op_parallelism",
                            &ARGS.intra_op_parallelism[idx],
                        )
                        .parse()?,
                    )?;
            }
            else {
                builder = builder.with_disable_per_session_threads()?;
            }
            builder = builder
                .with_memory_pattern(ARGS.onnx_use_memory_pattern == "true")?
                .with_execution_providers(&OnnxModel::ep_choices())?;
            match &ARGS.profiling {
@@ -181,7 +189,7 @@ pub mod onnx {
                    &version,
                    reporting_feature_ids,
                    Some(metrics::register_dynamic_metrics),
                )),
                )?),
            };
            onnx_model.warmup()?;
            Ok(onnx_model)
@@ -1,7 +1,7 @@
use anyhow::{anyhow, Result};
use arrayvec::ArrayVec;
use itertools::Itertools;
use log::{error, info, warn};
use log::{error, info};
use std::fmt::{Debug, Display};
use std::string::String;
use std::sync::Arc;
@@ -24,7 +24,7 @@ use serde_json::{self, Value};

pub trait Model: Send + Sync + Display + Debug + 'static {
    fn warmup(&self) -> Result<()>;
    //TODO: refactor this to return Vec<Vec<TensorScores>>, i.e.
    //TODO: refactor this to return vec<vec<TensorScores>>, i.e.
    //we have the underlying runtime impl to split the response to each client.
    //It will eliminate some inefficient memory copy in onnx_model.rs as well as simplify code
    fn do_predict(
@@ -179,17 +179,17 @@ impl<T: Model> PredictService<T> {
        //initialize the latest version array
        let mut cur_versions = vec!["".to_owned(); MODEL_SPECS.len()];
        loop {
            let config = utils::read_config(&meta_file).unwrap_or_else(|e| {
                warn!("config file {} not found due to: {}", meta_file, e);
                Value::Null
            });
            info!("***polling for models***"); //nice delimiter
            info!("config:{}", config);
            if let Some(ref cli) = ARGS.modelsync_cli {
                if let Err(e) = call_external_modelsync(cli, &cur_versions).await {
                    error!("model sync cli running error:{}", e)
                }
            }
            let config = utils::read_config(&meta_file).unwrap_or_else(|e| {
                info!("config file {} not found due to: {}", meta_file, e);
                Value::Null
            });
            info!("config:{}", config);
            for (idx, cur_version) in cur_versions.iter_mut().enumerate() {
                let model_dir = &ARGS.model_dir[idx];
                PredictService::scan_load_latest_model_from_model_dir(
@ -222,33 +222,39 @@ impl<T: Model> PredictService<T> {
            .map(|b| b.parse().unwrap())
            .collect::<Vec<u64>>();
        let no_msg_wait_millis = *batch_time_out_millis.iter().min().unwrap();
        let mut all_model_predictors =
            ArrayVec::<ArrayVec<BatchPredictor<T>, MAX_VERSIONS_PER_MODEL>, MAX_NUM_MODELS>::new();
        let mut all_model_predictors: ArrayVec<ArrayVec<BatchPredictor<T>, MAX_VERSIONS_PER_MODEL>, MAX_NUM_MODELS> =
            (0..MAX_NUM_MODELS)
                .map(|_| ArrayVec::<BatchPredictor<T>, MAX_VERSIONS_PER_MODEL>::new())
                .collect();
        loop {
            let msg = rx.try_recv();
            let no_more_msg = match msg {
                Ok(PredictMessage::Predict(model_spec_at, version, val, resp, ts)) => {
                    if let Some(model_predictors) = all_model_predictors.get_mut(model_spec_at) {
                        match version {
                            None => model_predictors[0].push(val, resp, ts),
                            Some(the_version) => match model_predictors
                                .iter_mut()
                                .find(|x| x.model.version() == the_version)
                            {
                                None => resp
                                    .send(PredictResult::ModelVersionNotFound(
                                        model_spec_at,
                                        the_version,
                                    ))
                                    .unwrap_or_else(|e| {
                                        error!("cannot send back version error: {:?}", e)
                                    }),
                                Some(predictor) => predictor.push(val, resp, ts),
                            },
                        if model_predictors.is_empty() {
                            resp.send(PredictResult::ModelNotReady(model_spec_at))
                                .unwrap_or_else(|e| error!("cannot send back model not ready error: {:?}", e));
                        } else {
                            match version {
                                None => model_predictors[0].push(val, resp, ts),
                                Some(the_version) => match model_predictors
                                    .iter_mut()
                                    .find(|x| x.model.version() == the_version)
                                {
                                    None => resp
                                        .send(PredictResult::ModelVersionNotFound(
                                            model_spec_at,
                                            the_version,
                                        ))
                                        .unwrap_or_else(|e| {
                                            error!("cannot send back version error: {:?}", e)
                                        }),
                                    Some(predictor) => predictor.push(val, resp, ts),
                                },
                            }
                        }
                    } else {
                        resp.send(PredictResult::ModelNotFound(model_spec_at))
                            .unwrap_or_else(|e| error!("cannot send back model error: {:?}", e))
                            .unwrap_or_else(|e| error!("cannot send back model not found error: {:?}", e))
                    }
                    MPSC_CHANNEL_SIZE.dec();
                    false
@ -266,27 +272,23 @@ impl<T: Model> PredictService<T> {
                        queue_reset_ts: Instant::now(),
                        queue_earliest_rq_ts: Instant::now(),
                    };
                    if idx < all_model_predictors.len() {
                        metrics::NEW_MODEL_SNAPSHOT
                            .with_label_values(&[&MODEL_SPECS[idx]])
                            .inc();
                    assert!(idx < all_model_predictors.len());
                    metrics::NEW_MODEL_SNAPSHOT
                        .with_label_values(&[&MODEL_SPECS[idx]])
                        .inc();

                        info!("now we serve updated model: {}", predictor.model);
                        //we can do this since the vector is small
                        let predictors = &mut all_model_predictors[idx];
                        if predictors.len() == ARGS.versions_per_model {
                            predictors.remove(predictors.len() - 1);
                        }
                        predictors.insert(0, predictor);
                    } else {
                        info!("now we serve new model: {:}", predictor.model);
                        let mut predictors =
                            ArrayVec::<BatchPredictor<T>, MAX_VERSIONS_PER_MODEL>::new();
                        predictors.push(predictor);
                        all_model_predictors.push(predictors);
                        //check the invariant that we always push the last model to the end
                        assert_eq!(all_model_predictors.len(), idx + 1)
                    //we can do this since the vector is small
                    let predictors = &mut all_model_predictors[idx];
                    if predictors.len() == 0 {
                        info!("now we serve new model: {}", predictor.model);
                    } else {
                        info!("now we serve updated model: {}", predictor.model);
                    }
                    if predictors.len() == ARGS.versions_per_model {
                        predictors.remove(predictors.len() - 1);
                    }
                    predictors.insert(0, predictor);
                    false
                }
                Err(TryRecvError::Empty) => true,
@ -3,9 +3,9 @@ name = "segdense"
version = "0.1.0"
edition = "2021"

# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html

[dependencies]
env_logger = "0.10.0"
serde = { version = "1.0.104", features = ["derive"] }
serde_json = "1.0.48"
log = "0.4.17"
@ -5,13 +5,13 @@ use std::fmt::Display;
*/
#[derive(Debug)]
pub enum SegDenseError {
    IoError(std::io::Error),
    Json(serde_json::Error),
    JsonMissingRoot,
    JsonMissingObject,
    JsonMissingArray,
    JsonArraySize,
    JsonMissingInputFeature,
}

impl Display for SegDenseError {
@ -25,19 +25,18 @@ impl Display for SegDenseError {
            SegDenseError::JsonArraySize => write!(f, "SegDense JSON: Array size not as expected!"),
            SegDenseError::JsonMissingInputFeature => write!(f, "SegDense JSON: Missing input feature!"),
        }
    }
}

impl std::error::Error for SegDenseError {}

impl From<std::io::Error> for SegDenseError {
    fn from(err: std::io::Error) -> Self {
        SegDenseError::IoError(err)
    }
}

impl From<serde_json::Error> for SegDenseError {
    fn from(err: serde_json::Error) -> Self {
        SegDenseError::Json(err)
    }
}
@ -1,4 +1,4 @@
pub mod error;
pub mod segdense_transform_spec_home_recap_2022;
pub mod mapper;
pub mod util;
pub mod segdense_transform_spec_home_recap_2022;
pub mod util;
@ -5,19 +5,18 @@ use segdense::error::SegDenseError;
use segdense::util;

fn main() -> Result<(), SegDenseError> {
    env_logger::init();
    let args: Vec<String> = env::args().collect();

    let schema_file_name: &str = if args.len() == 1 {
        "json/compact.json"
    } else {
        &args[1]
    };

    let json_str = fs::read_to_string(schema_file_name)?;

    util::safe_load_config(&json_str)?;

    Ok(())
}
@ -19,13 +19,13 @@ pub struct FeatureMapper {
impl FeatureMapper {
    pub fn new() -> FeatureMapper {
        FeatureMapper {
            map: HashMap::new()
            map: HashMap::new(),
        }
    }
}

pub trait MapWriter {
    fn set(&mut self, feature_id: i64, info: FeatureInfo);
}

pub trait MapReader {
@ -164,7 +164,6 @@ pub struct ComplexFeatureTypeTransformSpec {
    pub tensor_shape: Vec<i64>,
}

#[derive(Default, Debug, Clone, PartialEq, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub struct InputFeatureMapRecord {
@ -1,10 +1,10 @@
use log::debug;
use std::fs;
use log::{debug};

use serde_json::{Value, Map};
use serde_json::{Map, Value};

use crate::error::SegDenseError;
use crate::mapper::{FeatureMapper, FeatureInfo, MapWriter};
use crate::mapper::{FeatureInfo, FeatureMapper, MapWriter};
use crate::segdense_transform_spec_home_recap_2022::{self as seg_dense, InputFeature};

pub fn load_config(file_name: &str) -> seg_dense::Root {
@ -42,15 +42,8 @@ pub fn safe_load_config(json_str: &str) -> Result<FeatureMapper, SegDenseError>
    load_from_parsed_config(root)
}

pub fn load_from_parsed_config_ref(root: &seg_dense::Root) -> FeatureMapper {
    load_from_parsed_config(root.clone()).unwrap_or_else(
        |error| panic!("Error loading all_config.json - {}", error))
}

// Perf note : make 'root' un-owned
pub fn load_from_parsed_config(root: seg_dense::Root) ->
    Result<FeatureMapper, SegDenseError> {

pub fn load_from_parsed_config(root: seg_dense::Root) -> Result<FeatureMapper, SegDenseError> {
    let v = root.input_features_map;

    // Do error check
@ -84,7 +77,7 @@ pub fn load_from_parsed_config(root: seg_dense::Root) ->
            Some(info) => {
                debug!("{:?}", info);
                fm.set(feature_id, info)
            },
            }
            None => (),
        }
    }
@ -92,19 +85,22 @@ pub fn load_from_parsed_config(root: seg_dense::Root) ->
    Ok(fm)
}
#[allow(dead_code)]
fn add_feature_info_to_mapper(feature_mapper: &mut FeatureMapper, input_features: &Vec<InputFeature>) {
fn add_feature_info_to_mapper(
    feature_mapper: &mut FeatureMapper,
    input_features: &Vec<InputFeature>,
) {
    for input_feature in input_features.iter() {
        let feature_id = input_feature.feature_id;
        let feature_info = to_feature_info(input_feature);

        match feature_info {
            Some(info) => {
                debug!("{:?}", info);
                feature_mapper.set(feature_id, info)
            },
            }
            None => (),
        }
    }
}

pub fn to_feature_info(input_feature: &seg_dense::InputFeature) -> Option<FeatureInfo> {
@ -137,7 +133,7 @@ pub fn to_feature_info(input_feature: &seg_dense::InputFeature) -> Option<Featur
            2 => 0,
            3 => 2,
            _ => -1,
        }
        },
    };

    if input_feature.index < 0 {
@ -154,4 +150,3 @@ pub fn to_feature_info(input_feature: &seg_dense::InputFeature) -> Option<Featur
        index_within_tensor: input_feature.index,
    })
}
1
representation-manager/BUILD.bazel
Normal file
@ -0,0 +1 @@
# This prevents SQ query from grabbing //:all since it traverses up once to find a BUILD
4
representation-manager/README.md
Normal file
@ -0,0 +1,4 @@
# Representation Manager #

**Representation Manager** (RMS) serves as a centralized embedding management system, providing SimClusters and other embeddings as a facade over the underlying storage and services. See the usage sketch below.
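As a rough sketch of the facade idea: a caller asks RMS for an embedding by entity id plus a view (embedding type and model version), and never touches the underlying storage. The client below is hypothetical illustration code, assuming Strato's `Fetcher` API; only the column path and thrift types come from this change.

```scala
import com.twitter.representation_manager.thriftscala.SimClustersEmbeddingView
import com.twitter.simclusters_v2.thriftscala.{EmbeddingType, ModelVersion, SimClustersEmbedding}
import com.twitter.stitch.Stitch
import com.twitter.strato.client.{Client => StratoClient}

// Hypothetical caller-side sketch, not part of this change.
class RmsTweetEmbeddingClient(strato: StratoClient) {
  private val fetcher =
    strato.fetcher[Long, SimClustersEmbeddingView, SimClustersEmbedding](
      "recommendations/representation_manager/simClustersEmbedding.Tweet")

  // Resolves to None when RMS has no embedding for the tweet.
  def tweetEmbedding(tweetId: Long): Stitch[Option[SimClustersEmbedding]] =
    fetcher
      .fetch(
        tweetId,
        SimClustersEmbeddingView(
          EmbeddingType.LogFavBasedTweet,
          ModelVersion.Model20m145k2020))
      .map(_.v)
}
```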
4
representation-manager/bin/deploy.sh
Executable file
@ -0,0 +1,4 @@
#!/usr/bin/env bash

JOB=representation-manager bazel run --ui_event_filters=-info,-stdout,-stderr --noshow_progress \
  //relevance-platform/src/main/python/deploy -- "$@"
@ -0,0 +1,17 @@
scala_library(
    compiler_option_sets = ["fatal_warnings"],
    platform = "java8",
    tags = ["bazel-compatible"],
    dependencies = [
        "finatra/inject/inject-thrift-client",
        "frigate/frigate-common/src/main/scala/com/twitter/frigate/common/store/strato",
        "hermit/hermit-core/src/main/scala/com/twitter/hermit/store/common",
        "relevance-platform/src/main/scala/com/twitter/relevance_platform/common/readablestore",
        "representation-manager/client/src/main/scala/com/twitter/representation_manager/config",
        "representation-manager/server/src/main/thrift:thrift-scala",
        "src/scala/com/twitter/simclusters_v2/common",
        "src/thrift/com/twitter/simclusters_v2:simclusters_v2-thrift-scala",
        "stitch/stitch-storehaus",
        "strato/src/main/scala/com/twitter/strato/client",
    ],
)
@ -0,0 +1,208 @@
package com.twitter.representation_manager

import com.twitter.finagle.memcached.{Client => MemcachedClient}
import com.twitter.finagle.stats.StatsReceiver
import com.twitter.frigate.common.store.strato.StratoFetchableStore
import com.twitter.hermit.store.common.ObservedCachedReadableStore
import com.twitter.hermit.store.common.ObservedReadableStore
import com.twitter.representation_manager.config.ClientConfig
import com.twitter.representation_manager.config.DisabledInMemoryCacheParams
import com.twitter.representation_manager.config.EnabledInMemoryCacheParams
import com.twitter.representation_manager.thriftscala.SimClustersEmbeddingView
import com.twitter.simclusters_v2.common.SimClustersEmbedding
import com.twitter.simclusters_v2.thriftscala.InternalId
import com.twitter.simclusters_v2.thriftscala.LocaleEntityId
import com.twitter.simclusters_v2.thriftscala.SimClustersEmbeddingId
import com.twitter.simclusters_v2.thriftscala.TopicId
import com.twitter.simclusters_v2.thriftscala.{SimClustersEmbedding => ThriftSimClustersEmbedding}
import com.twitter.storehaus.ReadableStore
import com.twitter.strato.client.{Client => StratoClient}
import com.twitter.strato.thrift.ScroogeConvImplicits._

/**
 * This class builds readable stores for a given SimClustersEmbeddingView (i.e. embeddingType
 * and modelVersion). It applies the ClientConfig for a particular service and builds
 * ReadableStores which implement that config (see the usage sketch after this class).
 */
class StoreBuilder(
  clientConfig: ClientConfig,
  stratoClient: StratoClient,
  memCachedClient: MemcachedClient,
  globalStats: StatsReceiver,
) {
  private val stats =
    globalStats.scope("representation_manager_client").scope(this.getClass.getSimpleName)

  // Column consts
  private val ColPathPrefix = "recommendations/representation_manager/"
  private val SimclustersTweetColPath = ColPathPrefix + "simClustersEmbedding.Tweet"
  private val SimclustersUserColPath = ColPathPrefix + "simClustersEmbedding.User"
  private val SimclustersTopicIdColPath = ColPathPrefix + "simClustersEmbedding.TopicId"
  private val SimclustersLocaleEntityIdColPath =
    ColPathPrefix + "simClustersEmbedding.LocaleEntityId"

  def buildSimclustersTweetEmbeddingStore(
    embeddingColumnView: SimClustersEmbeddingView
  ): ReadableStore[Long, SimClustersEmbedding] = {
    val rawStore = StratoFetchableStore
      .withView[Long, SimClustersEmbeddingView, ThriftSimClustersEmbedding](
        stratoClient,
        SimclustersTweetColPath,
        embeddingColumnView)
      .mapValues(SimClustersEmbedding(_))

    addCacheLayer(rawStore, embeddingColumnView)
  }

  def buildSimclustersUserEmbeddingStore(
    embeddingColumnView: SimClustersEmbeddingView
  ): ReadableStore[Long, SimClustersEmbedding] = {
    val rawStore = StratoFetchableStore
      .withView[Long, SimClustersEmbeddingView, ThriftSimClustersEmbedding](
        stratoClient,
        SimclustersUserColPath,
        embeddingColumnView)
      .mapValues(SimClustersEmbedding(_))

    addCacheLayer(rawStore, embeddingColumnView)
  }

  def buildSimclustersTopicIdEmbeddingStore(
    embeddingColumnView: SimClustersEmbeddingView
  ): ReadableStore[TopicId, SimClustersEmbedding] = {
    val rawStore = StratoFetchableStore
      .withView[TopicId, SimClustersEmbeddingView, ThriftSimClustersEmbedding](
        stratoClient,
        SimclustersTopicIdColPath,
        embeddingColumnView)
      .mapValues(SimClustersEmbedding(_))

    addCacheLayer(rawStore, embeddingColumnView)
  }

  def buildSimclustersLocaleEntityIdEmbeddingStore(
    embeddingColumnView: SimClustersEmbeddingView
  ): ReadableStore[LocaleEntityId, SimClustersEmbedding] = {
    val rawStore = StratoFetchableStore
      .withView[LocaleEntityId, SimClustersEmbeddingView, ThriftSimClustersEmbedding](
        stratoClient,
        SimclustersLocaleEntityIdColPath,
        embeddingColumnView)
      .mapValues(SimClustersEmbedding(_))

    addCacheLayer(rawStore, embeddingColumnView)
  }

  def buildSimclustersTweetEmbeddingStoreWithEmbeddingIdAsKey(
    embeddingColumnView: SimClustersEmbeddingView
  ): ReadableStore[SimClustersEmbeddingId, SimClustersEmbedding] = {
    val rawStore = StratoFetchableStore
      .withView[Long, SimClustersEmbeddingView, ThriftSimClustersEmbedding](
        stratoClient,
        SimclustersTweetColPath,
        embeddingColumnView)
      .mapValues(SimClustersEmbedding(_))
    val embeddingIdAsKeyStore = rawStore.composeKeyMapping[SimClustersEmbeddingId] {
      case SimClustersEmbeddingId(_, _, InternalId.TweetId(tweetId)) =>
        tweetId
    }

    addCacheLayer(embeddingIdAsKeyStore, embeddingColumnView)
  }

  def buildSimclustersUserEmbeddingStoreWithEmbeddingIdAsKey(
    embeddingColumnView: SimClustersEmbeddingView
  ): ReadableStore[SimClustersEmbeddingId, SimClustersEmbedding] = {
    val rawStore = StratoFetchableStore
      .withView[Long, SimClustersEmbeddingView, ThriftSimClustersEmbedding](
        stratoClient,
        SimclustersUserColPath,
        embeddingColumnView)
      .mapValues(SimClustersEmbedding(_))
    val embeddingIdAsKeyStore = rawStore.composeKeyMapping[SimClustersEmbeddingId] {
      case SimClustersEmbeddingId(_, _, InternalId.UserId(userId)) =>
        userId
    }

    addCacheLayer(embeddingIdAsKeyStore, embeddingColumnView)
  }

  def buildSimclustersTopicEmbeddingStoreWithEmbeddingIdAsKey(
    embeddingColumnView: SimClustersEmbeddingView
  ): ReadableStore[SimClustersEmbeddingId, SimClustersEmbedding] = {
    val rawStore = StratoFetchableStore
      .withView[TopicId, SimClustersEmbeddingView, ThriftSimClustersEmbedding](
        stratoClient,
        SimclustersTopicIdColPath,
        embeddingColumnView)
      .mapValues(SimClustersEmbedding(_))
    val embeddingIdAsKeyStore = rawStore.composeKeyMapping[SimClustersEmbeddingId] {
      case SimClustersEmbeddingId(_, _, InternalId.TopicId(topicId)) =>
        topicId
    }

    addCacheLayer(embeddingIdAsKeyStore, embeddingColumnView)
  }

  def buildSimclustersTopicIdEmbeddingStoreWithEmbeddingIdAsKey(
    embeddingColumnView: SimClustersEmbeddingView
  ): ReadableStore[SimClustersEmbeddingId, SimClustersEmbedding] = {
    val rawStore = StratoFetchableStore
      .withView[TopicId, SimClustersEmbeddingView, ThriftSimClustersEmbedding](
        stratoClient,
        SimclustersTopicIdColPath,
        embeddingColumnView)
      .mapValues(SimClustersEmbedding(_))
    val embeddingIdAsKeyStore = rawStore.composeKeyMapping[SimClustersEmbeddingId] {
      case SimClustersEmbeddingId(_, _, InternalId.TopicId(topicId)) =>
        topicId
    }

    addCacheLayer(embeddingIdAsKeyStore, embeddingColumnView)
  }

  def buildSimclustersLocaleEntityIdEmbeddingStoreWithEmbeddingIdAsKey(
    embeddingColumnView: SimClustersEmbeddingView
  ): ReadableStore[SimClustersEmbeddingId, SimClustersEmbedding] = {
    val rawStore = StratoFetchableStore
      .withView[LocaleEntityId, SimClustersEmbeddingView, ThriftSimClustersEmbedding](
        stratoClient,
        SimclustersLocaleEntityIdColPath,
        embeddingColumnView)
      .mapValues(SimClustersEmbedding(_))
    val embeddingIdAsKeyStore = rawStore.composeKeyMapping[SimClustersEmbeddingId] {
      case SimClustersEmbeddingId(_, _, InternalId.LocaleEntityId(localeEntityId)) =>
        localeEntityId
    }

    addCacheLayer(embeddingIdAsKeyStore, embeddingColumnView)
  }

  private def addCacheLayer[K](
    rawStore: ReadableStore[K, SimClustersEmbedding],
    embeddingColumnView: SimClustersEmbeddingView,
  ): ReadableStore[K, SimClustersEmbedding] = {
    // Add in-memory caching based on ClientConfig
    val inMemCacheParams = clientConfig.inMemoryCacheConfig
      .getCacheSetup(embeddingColumnView.embeddingType, embeddingColumnView.modelVersion)

    val statsPerStore = stats
      .scope(embeddingColumnView.embeddingType.name).scope(embeddingColumnView.modelVersion.name)

    inMemCacheParams match {
      case DisabledInMemoryCacheParams =>
        ObservedReadableStore(
          store = rawStore
        )(statsPerStore)
      case EnabledInMemoryCacheParams(ttl, maxKeys, cacheName) =>
        ObservedCachedReadableStore.from[K, SimClustersEmbedding](
          rawStore,
          ttl = ttl,
          maxKeys = maxKeys,
          cacheName = cacheName,
          windowSize = 10000L
        )(statsPerStore)
    }
  }
}
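A minimal usage sketch for the class above, assuming the service already has Strato and Memcached clients wired up; the function name and the view values are illustrative (the view values are taken from the deciders in this change):

```scala
import com.twitter.finagle.memcached.{Client => MemcachedClient}
import com.twitter.finagle.stats.StatsReceiver
import com.twitter.representation_manager.StoreBuilder
import com.twitter.representation_manager.config.DefaultClientConfig
import com.twitter.representation_manager.thriftscala.SimClustersEmbeddingView
import com.twitter.simclusters_v2.common.SimClustersEmbedding
import com.twitter.simclusters_v2.thriftscala.{EmbeddingType, ModelVersion}
import com.twitter.storehaus.ReadableStore
import com.twitter.strato.client.{Client => StratoClient}

// Hypothetical wiring; client construction is elided.
def tweetEmbeddingStore(
  strato: StratoClient,
  memcached: MemcachedClient,
  stats: StatsReceiver
): ReadableStore[Long, SimClustersEmbedding] = {
  val storeBuilder = new StoreBuilder(DefaultClientConfig, strato, memcached, stats)
  storeBuilder.buildSimclustersTweetEmbeddingStore(
    SimClustersEmbeddingView(
      EmbeddingType.LogFavBasedTweet,
      ModelVersion.Model20m145k2020))
}
```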
@ -0,0 +1,12 @@
scala_library(
    compiler_option_sets = ["fatal_warnings"],
    platform = "java8",
    tags = ["bazel-compatible"],
    dependencies = [
        "finatra/inject/inject-thrift-client",
        "representation-manager/server/src/main/scala/com/twitter/representation_manager/common",
        "representation-manager/server/src/main/thrift:thrift-scala",
        "src/thrift/com/twitter/simclusters_v2:simclusters_v2-thrift-scala",
        "strato/src/main/scala/com/twitter/strato/client",
    ],
)
@ -0,0 +1,25 @@
package com.twitter.representation_manager.config

import com.twitter.simclusters_v2.thriftscala.EmbeddingType
import com.twitter.simclusters_v2.thriftscala.ModelVersion

/*
 * This is the RMS client config class.
 * We only support setting up in-memory cache params for now, but we expect to enable other
 * customisations in the near future, e.g. request timeout.
 *
 * --------------------------------------------
 * PLEASE NOTE:
 * An in-memory cache is not necessarily a free performance win; anyone considering it should
 * investigate rather than blindly enabling it.
 * */
class ClientConfig(inMemCacheParamsOverrides: Map[
    (EmbeddingType, ModelVersion),
    InMemoryCacheParams
  ] = Map.empty) {
  // In-memory cache config per embedding
  val inMemCacheParams = DefaultInMemoryCacheConfig.cacheParamsMap ++ inMemCacheParamsOverrides
  val inMemoryCacheConfig = new InMemoryCacheConfig(inMemCacheParams)
}

object DefaultClientConfig extends ClientConfig
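As an illustration of the override mechanism, a hedged sketch of a client enabling in-memory caching for a single embedding while everything else stays on the (disabled) default; the TTL, key count, and cache name are made-up values, not recommendations:

```scala
import com.twitter.conversions.DurationOps._
import com.twitter.representation_manager.config.{ClientConfig, EnabledInMemoryCacheParams}
import com.twitter.simclusters_v2.thriftscala.{EmbeddingType, ModelVersion}

// Illustrative numbers only; measure before enabling a cache.
val myClientConfig = new ClientConfig(
  inMemCacheParamsOverrides = Map(
    (EmbeddingType.LogFavBasedTweet, ModelVersion.Model20m145k2020) ->
      EnabledInMemoryCacheParams(
        ttl = 10.minutes,
        maxKeys = 100000,
        cacheName = "log_fav_based_tweet_cache")
  )
)
```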
@ -0,0 +1,53 @@
package com.twitter.representation_manager.config

import com.twitter.simclusters_v2.thriftscala.EmbeddingType
import com.twitter.simclusters_v2.thriftscala.ModelVersion
import com.twitter.util.Duration

/*
 * --------------------------------------------
 * PLEASE NOTE:
 * An in-memory cache is not necessarily a free performance win; anyone considering it should
 * investigate rather than blindly enabling it.
 * --------------------------------------------
 * */

sealed trait InMemoryCacheParams

/*
 * Holds the params that are required to set up an in-memory cache for a single embedding store.
 */
case class EnabledInMemoryCacheParams(
  ttl: Duration,
  maxKeys: Int,
  cacheName: String)
    extends InMemoryCacheParams
object DisabledInMemoryCacheParams extends InMemoryCacheParams

/*
 * This is the class for the in-memory cache config. Clients can pass in their own cacheParamsMap
 * to create a new InMemoryCacheConfig instead of using the DefaultInMemoryCacheConfig object below.
 * */
class InMemoryCacheConfig(
  cacheParamsMap: Map[
    (EmbeddingType, ModelVersion),
    InMemoryCacheParams
  ] = Map.empty) {

  def getCacheSetup(
    embeddingType: EmbeddingType,
    modelVersion: ModelVersion
  ): InMemoryCacheParams = {
    // When the requested embedding type doesn't exist, we return DisabledInMemoryCacheParams
    cacheParamsMap.getOrElse((embeddingType, modelVersion), DisabledInMemoryCacheParams)
  }
}

/*
 * Default config for the in-memory cache.
 * Clients can directly import and use this one if they don't want to set up a customised config.
 * */
object DefaultInMemoryCacheConfig extends InMemoryCacheConfig {
  // set default to no in-memory caching
  val cacheParamsMap = Map.empty
}
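A quick sketch of the lookup semantics: any (embeddingType, modelVersion) pair missing from the map resolves to DisabledInMemoryCacheParams, so callers can treat the config as total. The values below are just an example pair from this change.

```scala
import com.twitter.representation_manager.config.{DefaultInMemoryCacheConfig, DisabledInMemoryCacheParams}
import com.twitter.simclusters_v2.thriftscala.{EmbeddingType, ModelVersion}

// The default config carries an empty map, so every lookup falls
// through to DisabledInMemoryCacheParams.
val params = DefaultInMemoryCacheConfig.getCacheSetup(
  EmbeddingType.FavBasedProducer,
  ModelVersion.Model20m145kUpdated)
assert(params == DisabledInMemoryCacheParams)
```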
21
representation-manager/server/BUILD
Normal file
@ -0,0 +1,21 @@
jvm_binary(
    name = "bin",
    basename = "representation-manager",
    main = "com.twitter.representation_manager.RepresentationManagerFedServerMain",
    platform = "java8",
    tags = ["bazel-compatible"],
    dependencies = [
        "finatra/inject/inject-logback/src/main/scala",
        "loglens/loglens-logback/src/main/scala/com/twitter/loglens/logback",
        "representation-manager/server/src/main/resources",
        "representation-manager/server/src/main/scala/com/twitter/representation_manager",
        "twitter-server/logback-classic/src/main/scala",
    ],
)

# Aurora Workflows build phase convention requires a jvm_app named with ${project-name}-app
jvm_app(
    name = "representation-manager-app",
    archive = "zip",
    binary = ":bin",
)
7
representation-manager/server/src/main/resources/BUILD
Normal file
@ -0,0 +1,7 @@
resources(
    sources = [
        "*.xml",
        "config/*.yml",
    ],
    tags = ["bazel-compatible"],
)
@ -0,0 +1,219 @@
# ---------- traffic percentage by embedding type and model version ----------
# Decider strings are built dynamically following the rule below,
# i.e. s"enable_${embeddingType.name}_${modelVersion.name}"
# Hence this should be updated accordingly if usage changes in the embedding stores.
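Since these keys are assembled from enum names at runtime rather than listed in code, a one-line sketch of the rule the comment above describes (the helper name is hypothetical):

```scala
import com.twitter.simclusters_v2.thriftscala.{EmbeddingType, ModelVersion}

// Mirrors the naming rule above; e.g. yields "enable_LogFavBasedTweet_Model20m145k2020".
def readTrafficDeciderKey(embeddingType: EmbeddingType, modelVersion: ModelVersion): String =
  s"enable_${embeddingType.name}_${modelVersion.name}"
```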
# Tweet embeddings
"enable_LogFavBasedTweet_Model20m145k2020":
  comment: "Enable x% read traffic (0<=x<=10000, e.g. 1000=10%) for LogFavBasedTweet - Model20m145k2020. 0 means return EMPTY for all requests."
  default_availability: 10000

"enable_LogFavBasedTweet_Model20m145kUpdated":
  comment: "Enable x% read traffic (0<=x<=10000, e.g. 1000=10%) for LogFavBasedTweet - Model20m145kUpdated. 0 means return EMPTY for all requests."
  default_availability: 10000

"enable_LogFavLongestL2EmbeddingTweet_Model20m145k2020":
  comment: "Enable x% read traffic (0<=x<=10000, e.g. 1000=10%) for LogFavLongestL2EmbeddingTweet - Model20m145k2020. 0 means return EMPTY for all requests."
  default_availability: 10000

"enable_LogFavLongestL2EmbeddingTweet_Model20m145kUpdated":
  comment: "Enable x% read traffic (0<=x<=10000, e.g. 1000=10%) for LogFavLongestL2EmbeddingTweet - Model20m145kUpdated. 0 means return EMPTY for all requests."
  default_availability: 10000

# Topic embeddings
"enable_FavTfgTopic_Model20m145k2020":
  comment: "Enable the read traffic to FavTfgTopic - Model20m145k2020 from 0% to 100%. 0 means return EMPTY for all requests."
  default_availability: 10000

"enable_LogFavBasedKgoApeTopic_Model20m145k2020":
  comment: "Enable the read traffic to LogFavBasedKgoApeTopic - Model20m145k2020 from 0% to 100%. 0 means return EMPTY for all requests."
  default_availability: 10000

# User embeddings - KnownFor
"enable_FavBasedProducer_Model20m145kUpdated":
  comment: "Enable the read traffic to FavBasedProducer - Model20m145kUpdated from 0% to 100%. 0 means return EMPTY for all requests."
  default_availability: 10000

"enable_FavBasedProducer_Model20m145k2020":
  comment: "Enable the read traffic to FavBasedProducer - Model20m145k2020 from 0% to 100%. 0 means return EMPTY for all requests."
  default_availability: 10000

"enable_FollowBasedProducer_Model20m145k2020":
  comment: "Enable the read traffic to FollowBasedProducer - Model20m145k2020 from 0% to 100%. 0 means return EMPTY for all requests."
  default_availability: 10000

"enable_AggregatableFavBasedProducer_Model20m145kUpdated":
  comment: "Enable the read traffic to AggregatableFavBasedProducer - Model20m145kUpdated from 0% to 100%. 0 means return EMPTY for all requests."
  default_availability: 10000

"enable_AggregatableFavBasedProducer_Model20m145k2020":
  comment: "Enable the read traffic to AggregatableFavBasedProducer - Model20m145k2020 from 0% to 100%. 0 means return EMPTY for all requests."
  default_availability: 10000

"enable_AggregatableLogFavBasedProducer_Model20m145kUpdated":
  comment: "Enable the read traffic to AggregatableLogFavBasedProducer - Model20m145kUpdated from 0% to 100%. 0 means return EMPTY for all requests."
  default_availability: 10000

"enable_AggregatableLogFavBasedProducer_Model20m145k2020":
  comment: "Enable the read traffic to AggregatableLogFavBasedProducer - Model20m145k2020 from 0% to 100%. 0 means return EMPTY for all requests."
  default_availability: 10000

"enable_RelaxedAggregatableLogFavBasedProducer_Model20m145kUpdated":
  comment: "Enable the read traffic to RelaxedAggregatableLogFavBasedProducer - Model20m145kUpdated from 0% to 100%. 0 means return EMPTY for all requests."
  default_availability: 10000

"enable_RelaxedAggregatableLogFavBasedProducer_Model20m145k2020":
  comment: "Enable the read traffic to RelaxedAggregatableLogFavBasedProducer - Model20m145k2020 from 0% to 100%. 0 means return EMPTY for all requests."
  default_availability: 10000

# User embeddings - InterestedIn
"enable_LogFavBasedUserInterestedInFromAPE_Model20m145k2020":
  comment: "Enable the read traffic to LogFavBasedUserInterestedInFromAPE - Model20m145k2020 from 0% to 100%. 0 means return EMPTY for all requests."
  default_availability: 10000

"enable_FollowBasedUserInterestedInFromAPE_Model20m145k2020":
  comment: "Enable the read traffic to FollowBasedUserInterestedInFromAPE - Model20m145k2020 from 0% to 100%. 0 means return EMPTY for all requests."
  default_availability: 10000

"enable_FavBasedUserInterestedIn_Model20m145kUpdated":
  comment: "Enable the read traffic to FavBasedUserInterestedIn - Model20m145kUpdated from 0% to 100%. 0 means return EMPTY for all requests."
  default_availability: 10000

"enable_FavBasedUserInterestedIn_Model20m145k2020":
  comment: "Enable the read traffic to FavBasedUserInterestedIn - Model20m145k2020 from 0% to 100%. 0 means return EMPTY for all requests."
  default_availability: 10000

"enable_FollowBasedUserInterestedIn_Model20m145k2020":
  comment: "Enable the read traffic to FollowBasedUserInterestedIn - Model20m145k2020 from 0% to 100%. 0 means return EMPTY for all requests."
  default_availability: 10000

"enable_LogFavBasedUserInterestedIn_Model20m145k2020":
  comment: "Enable the read traffic to LogFavBasedUserInterestedIn - Model20m145k2020 from 0% to 100%. 0 means return EMPTY for all requests."
  default_availability: 10000

"enable_FavBasedUserInterestedInFromPE_Model20m145kUpdated":
  comment: "Enable the read traffic to FavBasedUserInterestedInFromPE - Model20m145kUpdated from 0% to 100%. 0 means return EMPTY for all requests."
  default_availability: 10000

"enable_FilteredUserInterestedIn_Model20m145kUpdated":
  comment: "Enable the read traffic to FilteredUserInterestedIn - Model20m145kUpdated from 0% to 100%. 0 means return EMPTY for all requests."
  default_availability: 10000

"enable_FilteredUserInterestedIn_Model20m145k2020":
  comment: "Enable the read traffic to FilteredUserInterestedIn - Model20m145k2020 from 0% to 100%. 0 means return EMPTY for all requests."
  default_availability: 10000

"enable_FilteredUserInterestedInFromPE_Model20m145kUpdated":
  comment: "Enable the read traffic to FilteredUserInterestedInFromPE - Model20m145kUpdated from 0% to 100%. 0 means return EMPTY for all requests."
  default_availability: 10000

"enable_UnfilteredUserInterestedIn_Model20m145kUpdated":
  comment: "Enable the read traffic to UnfilteredUserInterestedIn - Model20m145kUpdated from 0% to 100%. 0 means return EMPTY for all requests."
  default_availability: 10000

"enable_UnfilteredUserInterestedIn_Model20m145k2020":
  comment: "Enable the read traffic to UnfilteredUserInterestedIn - Model20m145k2020 from 0% to 100%. 0 means return EMPTY for all requests."
  default_availability: 10000

"enable_UserNextInterestedIn_Model20m145k2020":
  comment: "Enable the read traffic to UserNextInterestedIn - Model20m145k2020 from 0% to 100%. 0 means return EMPTY for all requests."
  default_availability: 10000

"enable_LogFavBasedUserInterestedMaxpoolingAddressBookFromIIAPE_Model20m145k2020":
  comment: "Enable the read traffic to LogFavBasedUserInterestedMaxpoolingAddressBookFromIIAPE - Model20m145k2020 from 0% to 100%. 0 means return EMPTY for all requests."
  default_availability: 10000

"enable_LogFavBasedUserInterestedAverageAddressBookFromIIAPE_Model20m145k2020":
  comment: "Enable the read traffic to LogFavBasedUserInterestedAverageAddressBookFromIIAPE - Model20m145k2020 from 0% to 100%. 0 means return EMPTY for all requests."
  default_availability: 10000

"enable_LogFavBasedUserInterestedBooktypeMaxpoolingAddressBookFromIIAPE_Model20m145k2020":
  comment: "Enable the read traffic to LogFavBasedUserInterestedBooktypeMaxpoolingAddressBookFromIIAPE - Model20m145k2020 from 0% to 100%. 0 means return EMPTY for all requests."
  default_availability: 10000

"enable_LogFavBasedUserInterestedLargestDimMaxpoolingAddressBookFromIIAPE_Model20m145k2020":
  comment: "Enable the read traffic to LogFavBasedUserInterestedLargestDimMaxpoolingAddressBookFromIIAPE - Model20m145k2020 from 0% to 100%. 0 means return EMPTY for all requests."
  default_availability: 10000

"enable_LogFavBasedUserInterestedLouvainMaxpoolingAddressBookFromIIAPE_Model20m145k2020":
  comment: "Enable the read traffic to LogFavBasedUserInterestedLouvainMaxpoolingAddressBookFromIIAPE - Model20m145k2020 from 0% to 100%. 0 means return EMPTY for all requests."
  default_availability: 10000

"enable_LogFavBasedUserInterestedConnectedMaxpoolingAddressBookFromIIAPE_Model20m145k2020":
  comment: "Enable the read traffic to LogFavBasedUserInterestedConnectedMaxpoolingAddressBookFromIIAPE - Model20m145k2020 from 0% to 100%. 0 means return EMPTY for all requests."
  default_availability: 10000

# ---------- load shedding by caller id ----------
# To create a new decider, add here with the same format and the caller's details:
# "representation-manager_load_shed_by_caller_id_twtr:{{role}}:{{name}}:{{environment}}:{{cluster}}"
# All the deciders below are generated by this script:
# ./strato/bin/fed deciders representation-manager --service-role=representation-manager --service-name=representation-manager
# If you need to run the script and paste the output, add ONLY the prod deciders here.
"representation-manager_load_shed_by_caller_id_all":
  comment: "Reject all traffic from caller id: all"
  default_availability: 0

"representation-manager_load_shed_by_caller_id_twtr:svc:cr-mixer:cr-mixer:prod:atla":
  comment: "Reject all traffic from caller id: twtr:svc:cr-mixer:cr-mixer:prod:atla"
  default_availability: 0

"representation-manager_load_shed_by_caller_id_twtr:svc:cr-mixer:cr-mixer:prod:pdxa":
  comment: "Reject all traffic from caller id: twtr:svc:cr-mixer:cr-mixer:prod:pdxa"
  default_availability: 0

"representation-manager_load_shed_by_caller_id_twtr:svc:simclusters-ann:simclusters-ann-1:prod:atla":
  comment: "Reject all traffic from caller id: twtr:svc:simclusters-ann:simclusters-ann-1:prod:atla"
  default_availability: 0

"representation-manager_load_shed_by_caller_id_twtr:svc:simclusters-ann:simclusters-ann-1:prod:pdxa":
  comment: "Reject all traffic from caller id: twtr:svc:simclusters-ann:simclusters-ann-1:prod:pdxa"
  default_availability: 0

"representation-manager_load_shed_by_caller_id_twtr:svc:simclusters-ann:simclusters-ann-3:prod:atla":
  comment: "Reject all traffic from caller id: twtr:svc:simclusters-ann:simclusters-ann-3:prod:atla"
  default_availability: 0

"representation-manager_load_shed_by_caller_id_twtr:svc:simclusters-ann:simclusters-ann-3:prod:pdxa":
  comment: "Reject all traffic from caller id: twtr:svc:simclusters-ann:simclusters-ann-3:prod:pdxa"
  default_availability: 0

"representation-manager_load_shed_by_caller_id_twtr:svc:simclusters-ann:simclusters-ann-4:prod:atla":
  comment: "Reject all traffic from caller id: twtr:svc:simclusters-ann:simclusters-ann-4:prod:atla"
  default_availability: 0

"representation-manager_load_shed_by_caller_id_twtr:svc:simclusters-ann:simclusters-ann-4:prod:pdxa":
  comment: "Reject all traffic from caller id: twtr:svc:simclusters-ann:simclusters-ann-4:prod:pdxa"
  default_availability: 0

"representation-manager_load_shed_by_caller_id_twtr:svc:simclusters-ann:simclusters-ann-experimental:prod:atla":
  comment: "Reject all traffic from caller id: twtr:svc:simclusters-ann:simclusters-ann-experimental:prod:atla"
  default_availability: 0

"representation-manager_load_shed_by_caller_id_twtr:svc:simclusters-ann:simclusters-ann-experimental:prod:pdxa":
  comment: "Reject all traffic from caller id: twtr:svc:simclusters-ann:simclusters-ann-experimental:prod:pdxa"
  default_availability: 0

"representation-manager_load_shed_by_caller_id_twtr:svc:simclusters-ann:simclusters-ann:prod:atla":
  comment: "Reject all traffic from caller id: twtr:svc:simclusters-ann:simclusters-ann:prod:atla"
  default_availability: 0

"representation-manager_load_shed_by_caller_id_twtr:svc:simclusters-ann:simclusters-ann:prod:pdxa":
  comment: "Reject all traffic from caller id: twtr:svc:simclusters-ann:simclusters-ann:prod:pdxa"
  default_availability: 0

"representation-manager_load_shed_by_caller_id_twtr:svc:stratostore:stratoapi:prod:atla":
  comment: "Reject all traffic from caller id: twtr:svc:stratostore:stratoapi:prod:atla"
  default_availability: 0

"representation-manager_load_shed_by_caller_id_twtr:svc:stratostore:stratoserver:prod:atla":
  comment: "Reject all traffic from caller id: twtr:svc:stratostore:stratoserver:prod:atla"
  default_availability: 0

"representation-manager_load_shed_by_caller_id_twtr:svc:stratostore:stratoserver:prod:pdxa":
  comment: "Reject all traffic from caller id: twtr:svc:stratostore:stratoserver:prod:pdxa"
  default_availability: 0

# ---------- Dark Traffic Proxy ----------
"representation-manager_forward_dark_traffic":
  comment: "Defines the percentage of traffic to forward to diffy-proxy. Set to 0 to disable dark traffic forwarding."
  default_availability: 0
165
representation-manager/server/src/main/resources/logback.xml
Normal file
@ -0,0 +1,165 @@
<configuration>
  <shutdownHook class="ch.qos.logback.core.hook.DelayingShutdownHook"/>

  <!-- ===================================================== -->
  <!-- Service Config                                        -->
  <!-- ===================================================== -->
  <property name="DEFAULT_SERVICE_PATTERN"
            value="%-16X{traceId} %-12X{clientId:--} %-16X{method} %-25logger{0} %msg"/>

  <property name="DEFAULT_ACCESS_PATTERN"
            value="%msg"/>

  <!-- ===================================================== -->
  <!-- Common Config                                         -->
  <!-- ===================================================== -->

  <!-- JUL/JDK14 to Logback bridge -->
  <contextListener class="ch.qos.logback.classic.jul.LevelChangePropagator">
    <resetJUL>true</resetJUL>
  </contextListener>

  <!-- ====================================================================================== -->
  <!-- NOTE: The following appenders use a simple TimeBasedRollingPolicy configuration.       -->
  <!-- You may want to consider using a more advanced SizeAndTimeBasedRollingPolicy.          -->
  <!-- See: https://logback.qos.ch/manual/appenders.html#SizeAndTimeBasedRollingPolicy        -->
  <!-- ====================================================================================== -->

  <!-- Service Log (rollover daily, keep maximum of 21 days of gzip compressed logs) -->
  <appender name="SERVICE" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <file>${log.service.output}</file>
    <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
      <!-- daily rollover -->
      <fileNamePattern>${log.service.output}.%d.gz</fileNamePattern>
      <!-- the maximum total size of all the log files -->
      <totalSizeCap>3GB</totalSizeCap>
      <!-- keep maximum 21 days' worth of history -->
      <maxHistory>21</maxHistory>
      <cleanHistoryOnStart>true</cleanHistoryOnStart>
    </rollingPolicy>
    <encoder>
      <pattern>%date %.-3level ${DEFAULT_SERVICE_PATTERN}%n</pattern>
    </encoder>
  </appender>

  <!-- Access Log (rollover daily, keep maximum of 7 days of gzip compressed logs) -->
  <appender name="ACCESS" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <file>${log.access.output}</file>
    <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
      <!-- daily rollover -->
      <fileNamePattern>${log.access.output}.%d.gz</fileNamePattern>
      <!-- the maximum total size of all the log files -->
      <totalSizeCap>100MB</totalSizeCap>
      <!-- keep maximum 7 days' worth of history -->
      <maxHistory>7</maxHistory>
      <cleanHistoryOnStart>true</cleanHistoryOnStart>
    </rollingPolicy>
    <encoder>
      <pattern>${DEFAULT_ACCESS_PATTERN}%n</pattern>
    </encoder>
  </appender>

  <!-- LogLens -->
  <appender name="LOGLENS" class="com.twitter.loglens.logback.LoglensAppender">
    <mdcAdditionalContext>true</mdcAdditionalContext>
    <category>${log.lens.category}</category>
    <index>${log.lens.index}</index>
    <tag>${log.lens.tag}/service</tag>
    <encoder>
      <pattern>%msg</pattern>
    </encoder>
  </appender>

  <!-- LogLens Access -->
  <appender name="LOGLENS-ACCESS" class="com.twitter.loglens.logback.LoglensAppender">
    <mdcAdditionalContext>true</mdcAdditionalContext>
    <category>${log.lens.category}</category>
    <index>${log.lens.index}</index>
    <tag>${log.lens.tag}/access</tag>
    <encoder>
      <pattern>%msg</pattern>
    </encoder>
  </appender>

  <!-- Pipeline Execution Logs -->
  <appender name="ALLOW-LISTED-PIPELINE-EXECUTIONS" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <file>allow_listed_pipeline_executions.log</file>
    <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
      <!-- daily rollover -->
      <fileNamePattern>allow_listed_pipeline_executions.log.%d.gz</fileNamePattern>
      <!-- the maximum total size of all the log files -->
      <totalSizeCap>100MB</totalSizeCap>
      <!-- keep maximum 7 days' worth of history -->
      <maxHistory>7</maxHistory>
      <cleanHistoryOnStart>true</cleanHistoryOnStart>
    </rollingPolicy>
    <encoder>
      <pattern>%date %.-3level ${DEFAULT_SERVICE_PATTERN}%n</pattern>
    </encoder>
  </appender>

  <!-- ===================================================== -->
  <!-- Primary Async Appenders                               -->
  <!-- ===================================================== -->

  <property name="async_queue_size" value="${queue.size:-50000}"/>
  <property name="async_max_flush_time" value="${max.flush.time:-0}"/>

  <appender name="ASYNC-SERVICE" class="com.twitter.inject.logback.AsyncAppender">
    <queueSize>${async_queue_size}</queueSize>
    <maxFlushTime>${async_max_flush_time}</maxFlushTime>
    <appender-ref ref="SERVICE"/>
  </appender>

  <appender name="ASYNC-ACCESS" class="com.twitter.inject.logback.AsyncAppender">
    <queueSize>${async_queue_size}</queueSize>
    <maxFlushTime>${async_max_flush_time}</maxFlushTime>
    <appender-ref ref="ACCESS"/>
  </appender>

  <appender name="ASYNC-ALLOW-LISTED-PIPELINE-EXECUTIONS" class="com.twitter.inject.logback.AsyncAppender">
    <queueSize>${async_queue_size}</queueSize>
    <maxFlushTime>${async_max_flush_time}</maxFlushTime>
    <appender-ref ref="ALLOW-LISTED-PIPELINE-EXECUTIONS"/>
  </appender>

  <appender name="ASYNC-LOGLENS" class="com.twitter.inject.logback.AsyncAppender">
    <queueSize>${async_queue_size}</queueSize>
    <maxFlushTime>${async_max_flush_time}</maxFlushTime>
    <appender-ref ref="LOGLENS"/>
  </appender>

  <appender name="ASYNC-LOGLENS-ACCESS" class="com.twitter.inject.logback.AsyncAppender">
    <queueSize>${async_queue_size}</queueSize>
    <maxFlushTime>${async_max_flush_time}</maxFlushTime>
    <appender-ref ref="LOGLENS-ACCESS"/>
  </appender>

  <!-- ===================================================== -->
  <!-- Package Config                                        -->
  <!-- ===================================================== -->

  <!-- Per-Package Config -->
  <logger name="com.twitter" level="INHERITED"/>
  <logger name="com.twitter.wilyns" level="INHERITED"/>
  <logger name="com.twitter.configbus.client.file" level="INHERITED"/>
  <logger name="com.twitter.finagle.mux" level="INHERITED"/>
  <logger name="com.twitter.finagle.serverset2" level="INHERITED"/>
  <logger name="com.twitter.logging.ScribeHandler" level="INHERITED"/>
  <logger name="com.twitter.zookeeper.client.internal" level="INHERITED"/>

  <!-- Root Config -->
  <!-- For all logs except access logs, disable logging below log_level by default. This can be overridden in the per-package loggers, and dynamically in the admin panel of individual instances. -->
  <root level="${log_level:-INFO}">
    <appender-ref ref="ASYNC-SERVICE"/>
    <appender-ref ref="ASYNC-LOGLENS"/>
  </root>

  <!-- Access Logging -->
  <!-- Access logs are turned off by default -->
  <logger name="com.twitter.finatra.thrift.filters.AccessLoggingFilter" level="OFF" additivity="false">
    <appender-ref ref="ASYNC-ACCESS"/>
    <appender-ref ref="ASYNC-LOGLENS-ACCESS"/>
  </logger>

</configuration>
@ -0,0 +1,13 @@
scala_library(
    compiler_option_sets = ["fatal_warnings"],
    platform = "java8",
    tags = ["bazel-compatible"],
    dependencies = [
        "finatra/inject/inject-thrift-client",
        "representation-manager/server/src/main/scala/com/twitter/representation_manager/columns/topic",
        "representation-manager/server/src/main/scala/com/twitter/representation_manager/columns/tweet",
        "representation-manager/server/src/main/scala/com/twitter/representation_manager/columns/user",
        "strato/src/main/scala/com/twitter/strato/fed",
        "strato/src/main/scala/com/twitter/strato/fed/server",
    ],
)
@ -0,0 +1,40 @@
package com.twitter.representation_manager

import com.google.inject.Module
import com.twitter.inject.thrift.modules.ThriftClientIdModule
import com.twitter.representation_manager.columns.topic.LocaleEntityIdSimClustersEmbeddingCol
import com.twitter.representation_manager.columns.topic.TopicIdSimClustersEmbeddingCol
import com.twitter.representation_manager.columns.tweet.TweetSimClustersEmbeddingCol
import com.twitter.representation_manager.columns.user.UserSimClustersEmbeddingCol
import com.twitter.representation_manager.modules.CacheModule
import com.twitter.representation_manager.modules.InterestsThriftClientModule
import com.twitter.representation_manager.modules.LegacyRMSConfigModule
import com.twitter.representation_manager.modules.StoreModule
import com.twitter.representation_manager.modules.TimerModule
import com.twitter.representation_manager.modules.UttClientModule
import com.twitter.strato.fed._
import com.twitter.strato.fed.server._

object RepresentationManagerFedServerMain extends RepresentationManagerFedServer

trait RepresentationManagerFedServer extends StratoFedServer {
  override def dest: String = "/s/representation-manager/representation-manager"
  override val modules: Seq[Module] =
    Seq(
      CacheModule,
      InterestsThriftClientModule,
      LegacyRMSConfigModule,
      StoreModule,
      ThriftClientIdModule,
      TimerModule,
      UttClientModule
    )

  override def columns: Seq[Class[_ <: StratoFed.Column]] =
    Seq(
      classOf[TweetSimClustersEmbeddingCol],
      classOf[UserSimClustersEmbeddingCol],
      classOf[TopicIdSimClustersEmbeddingCol],
      classOf[LocaleEntityIdSimClustersEmbeddingCol]
    )
}
@ -0,0 +1,9 @@
scala_library(
    compiler_option_sets = ["fatal_warnings"],
    platform = "java8",
    tags = ["bazel-compatible"],
    dependencies = [
        "strato/src/main/scala/com/twitter/strato/fed",
        "strato/src/main/scala/com/twitter/strato/fed/server",
    ],
)
@ -0,0 +1,26 @@
package com.twitter.representation_manager.columns

import com.twitter.strato.access.Access.LdapGroup
import com.twitter.strato.config.ContactInfo
import com.twitter.strato.config.FromColumns
import com.twitter.strato.config.Has
import com.twitter.strato.config.Prefix
import com.twitter.strato.config.ServiceIdentifierPattern

object ColumnConfigBase {

  /****************** Internal permissions *******************/
  val recosPermissions: Seq[com.twitter.strato.config.Policy] = Seq()

  /****************** External permissions *******************/
  // This is used to grant limited access to members outside of the RP team.
  val externalPermissions: Seq[com.twitter.strato.config.Policy] = Seq()

  val contactInfo: ContactInfo = ContactInfo(
    description = "Please contact Relevance Platform for more details",
    contactEmail = "no-reply@twitter.com",
    ldapGroup = "ldap",
    jiraProject = "JIRA",
    links = Seq("http://go/rms-runbook")
  )
}
@ -0,0 +1,14 @@
scala_library(
    compiler_option_sets = ["fatal_warnings"],
    platform = "java8",
    tags = ["bazel-compatible"],
    dependencies = [
        "finatra/inject/inject-core/src/main/scala",
        "representation-manager/server/src/main/scala/com/twitter/representation_manager/columns",
        "representation-manager/server/src/main/scala/com/twitter/representation_manager/modules",
        "representation-manager/server/src/main/scala/com/twitter/representation_manager/store",
        "representation-manager/server/src/main/thrift:thrift-scala",
        "strato/src/main/scala/com/twitter/strato/fed",
        "strato/src/main/scala/com/twitter/strato/fed/server",
    ],
)
@ -0,0 +1,77 @@
|
||||
package com.twitter.representation_manager.columns.topic

import com.twitter.representation_manager.columns.ColumnConfigBase
import com.twitter.representation_manager.store.TopicSimClustersEmbeddingStore
import com.twitter.representation_manager.thriftscala.SimClustersEmbeddingView
import com.twitter.simclusters_v2.thriftscala.InternalId
import com.twitter.simclusters_v2.thriftscala.SimClustersEmbedding
import com.twitter.simclusters_v2.thriftscala.SimClustersEmbeddingId
import com.twitter.simclusters_v2.thriftscala.LocaleEntityId
import com.twitter.stitch
import com.twitter.stitch.Stitch
import com.twitter.stitch.storehaus.StitchOfReadableStore
import com.twitter.strato.catalog.OpMetadata
import com.twitter.strato.config.AnyOf
import com.twitter.strato.config.ContactInfo
import com.twitter.strato.config.FromColumns
import com.twitter.strato.config.Policy
import com.twitter.strato.config.Prefix
import com.twitter.strato.data.Conv
import com.twitter.strato.data.Description.PlainText
import com.twitter.strato.data.Lifecycle
import com.twitter.strato.fed._
import com.twitter.strato.thrift.ScroogeConv
import javax.inject.Inject

class LocaleEntityIdSimClustersEmbeddingCol @Inject() (
  embeddingStore: TopicSimClustersEmbeddingStore)
    extends StratoFed.Column(
      "recommendations/representation_manager/simClustersEmbedding.LocaleEntityId")
    with StratoFed.Fetch.Stitch {

  private val storeStitch: SimClustersEmbeddingId => Stitch[SimClustersEmbedding] =
    StitchOfReadableStore(embeddingStore.topicSimClustersEmbeddingStore.mapValues(_.toThrift))

  val colPermissions: Seq[com.twitter.strato.config.Policy] =
    ColumnConfigBase.recosPermissions ++ ColumnConfigBase.externalPermissions :+ FromColumns(
      Set(
        Prefix("ml/featureStore/simClusters"),
      ))

  override val policy: Policy = AnyOf({
    colPermissions
  })

  override type Key = LocaleEntityId
  override type View = SimClustersEmbeddingView
  override type Value = SimClustersEmbedding

  override val keyConv: Conv[Key] = ScroogeConv.fromStruct[LocaleEntityId]
  override val viewConv: Conv[View] = ScroogeConv.fromStruct[SimClustersEmbeddingView]
  override val valueConv: Conv[Value] = ScroogeConv.fromStruct[SimClustersEmbedding]

  override val contactInfo: ContactInfo = ColumnConfigBase.contactInfo

  override val metadata: OpMetadata = OpMetadata(
    lifecycle = Some(Lifecycle.Production),
    description = Some(
      PlainText(
        "The Topic SimClusters Embedding Endpoint in Representation Management Service with LocaleEntityId." +
          " TDD: http://go/rms-tdd"))
  )

  override def fetch(key: Key, view: View): Stitch[Result[Value]] = {
    val embeddingId = SimClustersEmbeddingId(
      view.embeddingType,
      view.modelVersion,
      InternalId.LocaleEntityId(key)
    )

    storeStitch(embeddingId)
      .map(embedding => found(embedding))
      .handle {
        case stitch.NotFound => missing
      }
  }

}
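
// Illustrative sketch, not part of the original commit: each column in this
// package implements the same fetch pattern. The caller supplies a key plus a
// SimClustersEmbeddingView (embedding type + model version); the column folds
// them into a SimClustersEmbeddingId and resolves it through a ReadableStore
// that has been lifted into Stitch. The self-contained toy below isolates that
// lifting step; `toyStore` and its contents are hypothetical.
object FetchPatternSketch {
  import com.twitter.stitch.Stitch
  import com.twitter.stitch.storehaus.StitchOfReadableStore
  import com.twitter.storehaus.ReadableStore

  // Stand-in for topicSimClustersEmbeddingStore: a tiny in-memory ReadableStore.
  val toyStore: ReadableStore[String, Int] = ReadableStore.fromMap(Map("a" -> 1))

  // StitchOfReadableStore turns a ReadableStore lookup into a Stitch
  // computation, which is what lets Strato batch and compose fetches. A key
  // that is absent surfaces as stitch.NotFound, which fetch() above translates
  // into `missing`.
  val lookup: String => Stitch[Int] = StitchOfReadableStore(toyStore)
}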

@@ -0,0 +1,74 @@

package com.twitter.representation_manager.columns.topic

import com.twitter.representation_manager.columns.ColumnConfigBase
import com.twitter.representation_manager.store.TopicSimClustersEmbeddingStore
import com.twitter.representation_manager.thriftscala.SimClustersEmbeddingView
import com.twitter.simclusters_v2.thriftscala.InternalId
import com.twitter.simclusters_v2.thriftscala.SimClustersEmbedding
import com.twitter.simclusters_v2.thriftscala.SimClustersEmbeddingId
import com.twitter.simclusters_v2.thriftscala.TopicId
import com.twitter.stitch
import com.twitter.stitch.Stitch
import com.twitter.stitch.storehaus.StitchOfReadableStore
import com.twitter.strato.catalog.OpMetadata
import com.twitter.strato.config.AnyOf
import com.twitter.strato.config.ContactInfo
import com.twitter.strato.config.FromColumns
import com.twitter.strato.config.Policy
import com.twitter.strato.config.Prefix
import com.twitter.strato.data.Conv
import com.twitter.strato.data.Description.PlainText
import com.twitter.strato.data.Lifecycle
import com.twitter.strato.fed._
import com.twitter.strato.thrift.ScroogeConv
import javax.inject.Inject

class TopicIdSimClustersEmbeddingCol @Inject() (embeddingStore: TopicSimClustersEmbeddingStore)
    extends StratoFed.Column("recommendations/representation_manager/simClustersEmbedding.TopicId")
    with StratoFed.Fetch.Stitch {

  private val storeStitch: SimClustersEmbeddingId => Stitch[SimClustersEmbedding] =
    StitchOfReadableStore(embeddingStore.topicSimClustersEmbeddingStore.mapValues(_.toThrift))

  val colPermissions: Seq[com.twitter.strato.config.Policy] =
    ColumnConfigBase.recosPermissions ++ ColumnConfigBase.externalPermissions :+ FromColumns(
      Set(
        Prefix("ml/featureStore/simClusters"),
      ))

  override val policy: Policy = AnyOf({
    colPermissions
  })

  override type Key = TopicId
  override type View = SimClustersEmbeddingView
  override type Value = SimClustersEmbedding

  override val keyConv: Conv[Key] = ScroogeConv.fromStruct[TopicId]
  override val viewConv: Conv[View] = ScroogeConv.fromStruct[SimClustersEmbeddingView]
  override val valueConv: Conv[Value] = ScroogeConv.fromStruct[SimClustersEmbedding]

  override val contactInfo: ContactInfo = ColumnConfigBase.contactInfo

  override val metadata: OpMetadata = OpMetadata(
    lifecycle = Some(Lifecycle.Production),
    description = Some(PlainText(
      "The Topic SimClusters Embedding Endpoint in Representation Management Service with TopicId." +
        " TDD: http://go/rms-tdd"))
  )

  override def fetch(key: Key, view: View): Stitch[Result[Value]] = {
    val embeddingId = SimClustersEmbeddingId(
      view.embeddingType,
      view.modelVersion,
      InternalId.TopicId(key)
    )

    storeStitch(embeddingId)
      .map(embedding => found(embedding))
      .handle {
        case stitch.NotFound => missing
      }
  }

}

@@ -0,0 +1,14 @@

scala_library(
    compiler_option_sets = ["fatal_warnings"],
    platform = "java8",
    tags = ["bazel-compatible"],
    dependencies = [
        "finatra/inject/inject-core/src/main/scala",
        "representation-manager/server/src/main/scala/com/twitter/representation_manager/columns",
        "representation-manager/server/src/main/scala/com/twitter/representation_manager/modules",
        "representation-manager/server/src/main/scala/com/twitter/representation_manager/store",
        "representation-manager/server/src/main/thrift:thrift-scala",
        "strato/src/main/scala/com/twitter/strato/fed",
        "strato/src/main/scala/com/twitter/strato/fed/server",
    ],
)

@@ -0,0 +1,73 @@

package com.twitter.representation_manager.columns.tweet

import com.twitter.representation_manager.columns.ColumnConfigBase
import com.twitter.representation_manager.store.TweetSimClustersEmbeddingStore
import com.twitter.representation_manager.thriftscala.SimClustersEmbeddingView
import com.twitter.simclusters_v2.thriftscala.InternalId
import com.twitter.simclusters_v2.thriftscala.SimClustersEmbedding
import com.twitter.simclusters_v2.thriftscala.SimClustersEmbeddingId
import com.twitter.stitch
import com.twitter.stitch.Stitch
import com.twitter.stitch.storehaus.StitchOfReadableStore
import com.twitter.strato.catalog.OpMetadata
import com.twitter.strato.config.AnyOf
import com.twitter.strato.config.ContactInfo
import com.twitter.strato.config.FromColumns
import com.twitter.strato.config.Policy
import com.twitter.strato.config.Prefix
import com.twitter.strato.data.Conv
import com.twitter.strato.data.Description.PlainText
import com.twitter.strato.data.Lifecycle
import com.twitter.strato.fed._
import com.twitter.strato.thrift.ScroogeConv
import javax.inject.Inject

class TweetSimClustersEmbeddingCol @Inject() (embeddingStore: TweetSimClustersEmbeddingStore)
    extends StratoFed.Column("recommendations/representation_manager/simClustersEmbedding.Tweet")
    with StratoFed.Fetch.Stitch {

  private val storeStitch: SimClustersEmbeddingId => Stitch[SimClustersEmbedding] =
    StitchOfReadableStore(embeddingStore.tweetSimClustersEmbeddingStore.mapValues(_.toThrift))

  val colPermissions: Seq[com.twitter.strato.config.Policy] =
    ColumnConfigBase.recosPermissions ++ ColumnConfigBase.externalPermissions :+ FromColumns(
      Set(
        Prefix("ml/featureStore/simClusters"),
      ))

  override val policy: Policy = AnyOf({
    colPermissions
  })

  override type Key = Long // TweetId
  override type View = SimClustersEmbeddingView
  override type Value = SimClustersEmbedding

  override val keyConv: Conv[Key] = Conv.long
  override val viewConv: Conv[View] = ScroogeConv.fromStruct[SimClustersEmbeddingView]
  override val valueConv: Conv[Value] = ScroogeConv.fromStruct[SimClustersEmbedding]

  override val contactInfo: ContactInfo = ColumnConfigBase.contactInfo

  override val metadata: OpMetadata = OpMetadata(
    lifecycle = Some(Lifecycle.Production),
    description = Some(
      PlainText("The Tweet SimClusters Embedding Endpoint in Representation Management Service." +
        " TDD: http://go/rms-tdd"))
  )

  override def fetch(key: Key, view: View): Stitch[Result[Value]] = {
    val embeddingId = SimClustersEmbeddingId(
      view.embeddingType,
      view.modelVersion,
      InternalId.TweetId(key)
    )

    storeStitch(embeddingId)
      .map(embedding => found(embedding))
      .handle {
        case stitch.NotFound => missing
      }
  }

}

@@ -0,0 +1,14 @@

scala_library(
    compiler_option_sets = ["fatal_warnings"],
    platform = "java8",
    tags = ["bazel-compatible"],
    dependencies = [
        "finatra/inject/inject-core/src/main/scala",
        "representation-manager/server/src/main/scala/com/twitter/representation_manager/columns",
        "representation-manager/server/src/main/scala/com/twitter/representation_manager/modules",
        "representation-manager/server/src/main/scala/com/twitter/representation_manager/store",
        "representation-manager/server/src/main/thrift:thrift-scala",
        "strato/src/main/scala/com/twitter/strato/fed",
        "strato/src/main/scala/com/twitter/strato/fed/server",
    ],
)

@@ -0,0 +1,73 @@

package com.twitter.representation_manager.columns.user

import com.twitter.representation_manager.columns.ColumnConfigBase
import com.twitter.representation_manager.store.UserSimClustersEmbeddingStore
import com.twitter.representation_manager.thriftscala.SimClustersEmbeddingView
import com.twitter.simclusters_v2.thriftscala.InternalId
import com.twitter.simclusters_v2.thriftscala.SimClustersEmbedding
import com.twitter.simclusters_v2.thriftscala.SimClustersEmbeddingId
import com.twitter.stitch
import com.twitter.stitch.Stitch
import com.twitter.stitch.storehaus.StitchOfReadableStore
import com.twitter.strato.catalog.OpMetadata
import com.twitter.strato.config.AnyOf
import com.twitter.strato.config.ContactInfo
import com.twitter.strato.config.FromColumns
import com.twitter.strato.config.Policy
import com.twitter.strato.config.Prefix
import com.twitter.strato.data.Conv
import com.twitter.strato.data.Description.PlainText
import com.twitter.strato.data.Lifecycle
import com.twitter.strato.fed._
import com.twitter.strato.thrift.ScroogeConv
import javax.inject.Inject

class UserSimClustersEmbeddingCol @Inject() (embeddingStore: UserSimClustersEmbeddingStore)
    extends StratoFed.Column("recommendations/representation_manager/simClustersEmbedding.User")
    with StratoFed.Fetch.Stitch {

  private val storeStitch: SimClustersEmbeddingId => Stitch[SimClustersEmbedding] =
    StitchOfReadableStore(embeddingStore.userSimClustersEmbeddingStore.mapValues(_.toThrift))

  val colPermissions: Seq[com.twitter.strato.config.Policy] =
    ColumnConfigBase.recosPermissions ++ ColumnConfigBase.externalPermissions :+ FromColumns(
      Set(
        Prefix("ml/featureStore/simClusters"),
      ))

  override val policy: Policy = AnyOf({
    colPermissions
  })

  override type Key = Long // UserId
  override type View = SimClustersEmbeddingView
  override type Value = SimClustersEmbedding

  override val keyConv: Conv[Key] = Conv.long
  override val viewConv: Conv[View] = ScroogeConv.fromStruct[SimClustersEmbeddingView]
  override val valueConv: Conv[Value] = ScroogeConv.fromStruct[SimClustersEmbedding]

  override val contactInfo: ContactInfo = ColumnConfigBase.contactInfo

  override val metadata: OpMetadata = OpMetadata(
    lifecycle = Some(Lifecycle.Production),
    description = Some(
      PlainText("The User SimClusters Embedding Endpoint in Representation Management Service." +
        " TDD: http://go/rms-tdd"))
  )

  override def fetch(key: Key, view: View): Stitch[Result[Value]] = {
    val embeddingId = SimClustersEmbeddingId(
      view.embeddingType,
      view.modelVersion,
      InternalId.UserId(key)
    )

    storeStitch(embeddingId)
      .map(embedding => found(embedding))
      .handle {
        case stitch.NotFound => missing
      }
  }

}

@@ -0,0 +1,13 @@

scala_library(
    compiler_option_sets = ["fatal_warnings"],
    platform = "java8",
    tags = ["bazel-compatible"],
    dependencies = [
        "decider/src/main/scala",
        "finagle/finagle-memcached",
        "hermit/hermit-core/src/main/scala/com/twitter/hermit/store/common",
        "relevance-platform/src/main/scala/com/twitter/relevance_platform/common/injection",
        "src/scala/com/twitter/simclusters_v2/common",
        "src/thrift/com/twitter/simclusters_v2:simclusters_v2-thrift-scala",
    ],
)

@@ -0,0 +1,153 @@

package com.twitter.representation_manager.common

import com.twitter.bijection.scrooge.BinaryScalaCodec
import com.twitter.conversions.DurationOps._
import com.twitter.finagle.memcached.Client
import com.twitter.finagle.stats.StatsReceiver
import com.twitter.hashing.KeyHasher
import com.twitter.hermit.store.common.ObservedMemcachedReadableStore
import com.twitter.relevance_platform.common.injection.LZ4Injection
import com.twitter.simclusters_v2.common.SimClustersEmbedding
import com.twitter.simclusters_v2.common.SimClustersEmbeddingIdCacheKeyBuilder
import com.twitter.simclusters_v2.thriftscala.EmbeddingType
import com.twitter.simclusters_v2.thriftscala.EmbeddingType._
import com.twitter.simclusters_v2.thriftscala.ModelVersion
import com.twitter.simclusters_v2.thriftscala.ModelVersion._
import com.twitter.simclusters_v2.thriftscala.SimClustersEmbeddingId
import com.twitter.simclusters_v2.thriftscala.{SimClustersEmbedding => ThriftSimClustersEmbedding}
import com.twitter.storehaus.ReadableStore
import com.twitter.util.Duration

/*
 * NOTE - All the cache configs here are placeholders; none of them is used anywhere in RMS yet.
 */
sealed trait MemCacheParams
sealed trait MemCacheConfig

/*
 * Holds the params required to set up a memcache cache for a single embedding store.
 */
case class EnabledMemCacheParams(ttl: Duration) extends MemCacheParams
object DisabledMemCacheParams extends MemCacheParams

/*
 * We use this MemCacheConfig as the single source of truth when setting up memcache for
 * all RMS use cases. Clients cannot override it.
 */
object MemCacheConfig {
  val keyHasher: KeyHasher = KeyHasher.FNV1A_64
  val hashKeyPrefix: String = "RMS"
  val simclustersEmbeddingCacheKeyBuilder =
    SimClustersEmbeddingIdCacheKeyBuilder(keyHasher.hashKey, hashKeyPrefix)

  val cacheParamsMap: Map[
    (EmbeddingType, ModelVersion),
    MemCacheParams
  ] = Map(
    // Tweet Embeddings
    (LogFavBasedTweet, Model20m145kUpdated) -> EnabledMemCacheParams(ttl = 10.minutes),
    (LogFavBasedTweet, Model20m145k2020) -> EnabledMemCacheParams(ttl = 10.minutes),
    (LogFavLongestL2EmbeddingTweet, Model20m145kUpdated) -> EnabledMemCacheParams(ttl = 10.minutes),
    (LogFavLongestL2EmbeddingTweet, Model20m145k2020) -> EnabledMemCacheParams(ttl = 10.minutes),
    // User - KnownFor Embeddings
    (FavBasedProducer, Model20m145kUpdated) -> EnabledMemCacheParams(ttl = 12.hours),
    (FavBasedProducer, Model20m145k2020) -> EnabledMemCacheParams(ttl = 12.hours),
    (FollowBasedProducer, Model20m145k2020) -> EnabledMemCacheParams(ttl = 12.hours),
    (AggregatableLogFavBasedProducer, Model20m145k2020) -> EnabledMemCacheParams(ttl = 12.hours),
    (RelaxedAggregatableLogFavBasedProducer, Model20m145kUpdated) -> EnabledMemCacheParams(ttl =
      12.hours),
    (RelaxedAggregatableLogFavBasedProducer, Model20m145k2020) -> EnabledMemCacheParams(ttl =
      12.hours),
    // User - InterestedIn Embeddings
    (LogFavBasedUserInterestedInFromAPE, Model20m145k2020) -> EnabledMemCacheParams(ttl = 12.hours),
    (FollowBasedUserInterestedInFromAPE, Model20m145k2020) -> EnabledMemCacheParams(ttl = 12.hours),
    (FavBasedUserInterestedIn, Model20m145kUpdated) -> EnabledMemCacheParams(ttl = 12.hours),
    (FavBasedUserInterestedIn, Model20m145k2020) -> EnabledMemCacheParams(ttl = 12.hours),
    (FollowBasedUserInterestedIn, Model20m145k2020) -> EnabledMemCacheParams(ttl = 12.hours),
    (LogFavBasedUserInterestedIn, Model20m145k2020) -> EnabledMemCacheParams(ttl = 12.hours),
    (FavBasedUserInterestedInFromPE, Model20m145kUpdated) -> EnabledMemCacheParams(ttl = 12.hours),
    (FilteredUserInterestedIn, Model20m145kUpdated) -> EnabledMemCacheParams(ttl = 12.hours),
    (FilteredUserInterestedIn, Model20m145k2020) -> EnabledMemCacheParams(ttl = 12.hours),
    (FilteredUserInterestedInFromPE, Model20m145kUpdated) -> EnabledMemCacheParams(ttl = 12.hours),
    (UnfilteredUserInterestedIn, Model20m145kUpdated) -> EnabledMemCacheParams(ttl = 12.hours),
    (UnfilteredUserInterestedIn, Model20m145k2020) -> EnabledMemCacheParams(ttl = 12.hours),
    // The embedding is updated every 2 hours; keep the TTL lower to avoid staleness.
    (UserNextInterestedIn, Model20m145k2020) -> EnabledMemCacheParams(ttl = 30.minutes),
    (
      LogFavBasedUserInterestedMaxpoolingAddressBookFromIIAPE,
      Model20m145k2020) -> EnabledMemCacheParams(ttl = 12.hours),
    (
      LogFavBasedUserInterestedAverageAddressBookFromIIAPE,
      Model20m145k2020) -> EnabledMemCacheParams(ttl = 12.hours),
    (
      LogFavBasedUserInterestedBooktypeMaxpoolingAddressBookFromIIAPE,
      Model20m145k2020) -> EnabledMemCacheParams(ttl = 12.hours),
    (
      LogFavBasedUserInterestedLargestDimMaxpoolingAddressBookFromIIAPE,
      Model20m145k2020) -> EnabledMemCacheParams(ttl = 12.hours),
    (
      LogFavBasedUserInterestedLouvainMaxpoolingAddressBookFromIIAPE,
      Model20m145k2020) -> EnabledMemCacheParams(ttl = 12.hours),
    (
      LogFavBasedUserInterestedConnectedMaxpoolingAddressBookFromIIAPE,
      Model20m145k2020) -> EnabledMemCacheParams(ttl = 12.hours),
    // Topic Embeddings
    (FavTfgTopic, Model20m145k2020) -> EnabledMemCacheParams(ttl = 12.hours),
    (LogFavBasedKgoApeTopic, Model20m145k2020) -> EnabledMemCacheParams(ttl = 12.hours),
  )

  def getCacheSetup(
    embeddingType: EmbeddingType,
    modelVersion: ModelVersion
  ): MemCacheParams = {
    // When the requested (embeddingType, modelVersion) doesn't exist, return DisabledMemCacheParams.
    cacheParamsMap.getOrElse((embeddingType, modelVersion), DisabledMemCacheParams)
  }

  def getCacheKeyPrefix(embeddingType: EmbeddingType, modelVersion: ModelVersion) =
    s"${embeddingType.value}_${modelVersion.value}_"

  def getStatsName(embeddingType: EmbeddingType, modelVersion: ModelVersion) =
    s"${embeddingType.name}_${modelVersion.name}_mem_cache"

  /**
   * Build a ReadableStore based on MemCacheConfig.
   *
   * If memcache is disabled, this returns a plain wrapper of the rawStore with
   * SimClustersEmbedding as the value type; if memcache is enabled, it returns an
   * ObservedMemcachedReadableStore wrapper of the rawStore, with memcache set up
   * according to the EnabledMemCacheParams.
   */
  def buildMemCacheStoreForSimClustersEmbedding(
    rawStore: ReadableStore[SimClustersEmbeddingId, ThriftSimClustersEmbedding],
    cacheClient: Client,
    embeddingType: EmbeddingType,
    modelVersion: ModelVersion,
    stats: StatsReceiver
  ): ReadableStore[SimClustersEmbeddingId, SimClustersEmbedding] = {
    val cacheParams = getCacheSetup(embeddingType, modelVersion)
    val store = cacheParams match {
      case DisabledMemCacheParams => rawStore
      case EnabledMemCacheParams(ttl) =>
        val memCacheKeyPrefix = MemCacheConfig.getCacheKeyPrefix(
          embeddingType,
          modelVersion
        )
        val statsName = MemCacheConfig.getStatsName(
          embeddingType,
          modelVersion
        )
        ObservedMemcachedReadableStore.fromCacheClient(
          backingStore = rawStore,
          cacheClient = cacheClient,
          ttl = ttl
        )(
          valueInjection = LZ4Injection.compose(BinaryScalaCodec(ThriftSimClustersEmbedding)),
          statsReceiver = stats.scope(statsName),
          keyToString = { k => memCacheKeyPrefix + k.toString }
        )
    }
    store.mapValues(SimClustersEmbedding(_))
  }

}
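
// Illustrative sketch, not part of the original commit: how a store owner
// might wrap a raw embedding store with MemCacheConfig. The rawStore and
// cacheClient arguments are placeholders supplied by the caller; only the
// wiring mirrors the config above.
object MemCacheConfigUsageSketch {
  def wrap(
    rawStore: ReadableStore[SimClustersEmbeddingId, ThriftSimClustersEmbedding],
    cacheClient: Client,
    stats: StatsReceiver
  ): ReadableStore[SimClustersEmbeddingId, SimClustersEmbedding] =
    // (LogFavBasedTweet, Model20m145k2020) is present in cacheParamsMap, so the
    // raw store is wrapped in memcache with a 10-minute TTL; any pair that is
    // absent falls back to DisabledMemCacheParams and only gets the value
    // mapping to SimClustersEmbedding.
    MemCacheConfig.buildMemCacheStoreForSimClustersEmbedding(
      rawStore = rawStore,
      cacheClient = cacheClient,
      embeddingType = LogFavBasedTweet,
      modelVersion = Model20m145k2020,
      stats = stats
    )
}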

@@ -0,0 +1,25 @@

package com.twitter.representation_manager.common

import com.twitter.decider.Decider
import com.twitter.decider.RandomRecipient
import com.twitter.decider.Recipient
import com.twitter.simclusters_v2.common.DeciderGateBuilderWithIdHashing
import javax.inject.Inject

case class RepresentationManagerDecider @Inject() (decider: Decider) {

  val deciderGateBuilder = new DeciderGateBuilderWithIdHashing(decider)

  def isAvailable(feature: String, recipient: Option[Recipient]): Boolean = {
    decider.isAvailable(feature, recipient)
  }

  /**
   * When useRandomRecipient is set to false, the decider is either completely on or off.
   * When useRandomRecipient is set to true, the decider is on for the specified % of traffic.
   */
  def isAvailable(feature: String, useRandomRecipient: Boolean = true): Boolean = {
    if (useRandomRecipient) isAvailable(feature, Some(RandomRecipient))
    else isAvailable(feature, None)
  }
}
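
// Illustrative sketch, not part of the original commit: a typical call site.
// The feature name used here is a hypothetical example, not a real decider key.
object RepresentationManagerDeciderUsageSketch {
  def shouldUseExperimentalStore(rmsDecider: RepresentationManagerDecider): Boolean =
    // RandomRecipient is passed under the hood, so a decider set to 30 lets
    // roughly 30% of calls through; useRandomRecipient = false would make the
    // gate all-or-nothing instead.
    rmsDecider.isAvailable("enable_some_experimental_store", useRandomRecipient = true)
}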

@@ -0,0 +1,25 @@

scala_library(
    compiler_option_sets = ["fatal_warnings"],
    platform = "java8",
    tags = ["bazel-compatible"],
    dependencies = [
        "content-recommender/server/src/main/scala/com/twitter/contentrecommender:representation-manager-deps",
        "frigate/frigate-common/src/main/scala/com/twitter/frigate/common/store/strato",
        "frigate/frigate-common/src/main/scala/com/twitter/frigate/common/util",
        "hermit/hermit-core/src/main/scala/com/twitter/hermit/store/common",
        "relevance-platform/src/main/scala/com/twitter/relevance_platform/common/injection",
        "relevance-platform/src/main/scala/com/twitter/relevance_platform/common/readablestore",
        "representation-manager/server/src/main/scala/com/twitter/representation_manager/common",
        "representation-manager/server/src/main/scala/com/twitter/representation_manager/store",
        "src/scala/com/twitter/ml/api/embedding",
        "src/scala/com/twitter/simclusters_v2/common",
        "src/scala/com/twitter/simclusters_v2/score",
        "src/scala/com/twitter/simclusters_v2/summingbird/stores",
        "src/scala/com/twitter/storehaus_internal/manhattan",
        "src/scala/com/twitter/storehaus_internal/util",
        "src/thrift/com/twitter/simclusters_v2:simclusters_v2-thrift-scala",
        "src/thrift/com/twitter/socialgraph:thrift-scala",
        "storage/clients/manhattan/client/src/main/scala",
        "tweetypie/src/scala/com/twitter/tweetypie/util",
    ],
)

@@ -0,0 +1,846 @@

package com.twitter.representation_manager.migration

import com.twitter.bijection.Injection
import com.twitter.bijection.scrooge.BinaryScalaCodec
import com.twitter.contentrecommender.store.ApeEntityEmbeddingStore
import com.twitter.contentrecommender.store.InterestsOptOutStore
import com.twitter.contentrecommender.store.SemanticCoreTopicSeedStore
import com.twitter.contentrecommender.twistly
import com.twitter.conversions.DurationOps._
import com.twitter.decider.Decider
import com.twitter.escherbird.util.uttclient.CacheConfigV2
import com.twitter.escherbird.util.uttclient.CachedUttClientV2
import com.twitter.escherbird.util.uttclient.UttClientCacheConfigsV2
import com.twitter.escherbird.utt.strato.thriftscala.Environment
import com.twitter.finagle.ThriftMux
import com.twitter.finagle.memcached.Client
import com.twitter.finagle.mtls.authentication.ServiceIdentifier
import com.twitter.finagle.mtls.client.MtlsStackClient.MtlsThriftMuxClientSyntax
import com.twitter.finagle.mux.ClientDiscardedRequestException
import com.twitter.finagle.service.ReqRep
import com.twitter.finagle.service.ResponseClass
import com.twitter.finagle.stats.StatsReceiver
import com.twitter.finagle.thrift.ClientId
import com.twitter.frigate.common.store.strato.StratoFetchableStore
import com.twitter.frigate.common.util.SeqLongInjection
import com.twitter.hashing.KeyHasher
import com.twitter.hermit.store.common.DeciderableReadableStore
import com.twitter.hermit.store.common.ObservedCachedReadableStore
import com.twitter.hermit.store.common.ObservedMemcachedReadableStore
import com.twitter.hermit.store.common.ObservedReadableStore
import com.twitter.interests.thriftscala.InterestsThriftService
import com.twitter.relevance_platform.common.injection.LZ4Injection
import com.twitter.relevance_platform.common.readablestore.ReadableStoreWithTimeout
import com.twitter.representation_manager.common.RepresentationManagerDecider
import com.twitter.representation_manager.store.DeciderConstants
import com.twitter.representation_manager.store.DeciderKey
import com.twitter.simclusters_v2.common.ModelVersions
import com.twitter.simclusters_v2.common.SimClustersEmbedding
import com.twitter.simclusters_v2.common.SimClustersEmbeddingIdCacheKeyBuilder
import com.twitter.simclusters_v2.stores.SimClustersEmbeddingStore
import com.twitter.simclusters_v2.summingbird.stores.PersistentTweetEmbeddingStore
import com.twitter.simclusters_v2.summingbird.stores.ProducerClusterEmbeddingReadableStores
import com.twitter.simclusters_v2.summingbird.stores.UserInterestedInReadableStore
import com.twitter.simclusters_v2.thriftscala.ClustersUserIsInterestedIn
import com.twitter.simclusters_v2.thriftscala.EmbeddingType
import com.twitter.simclusters_v2.thriftscala.EmbeddingType._
import com.twitter.simclusters_v2.thriftscala.InternalId
import com.twitter.simclusters_v2.thriftscala.ModelVersion
import com.twitter.simclusters_v2.thriftscala.ModelVersion.Model20m145k2020
import com.twitter.simclusters_v2.thriftscala.ModelVersion.Model20m145kUpdated
import com.twitter.simclusters_v2.thriftscala.SimClustersEmbeddingId
import com.twitter.simclusters_v2.thriftscala.SimClustersMultiEmbedding
import com.twitter.simclusters_v2.thriftscala.SimClustersMultiEmbeddingId
import com.twitter.simclusters_v2.thriftscala.{SimClustersEmbedding => ThriftSimClustersEmbedding}
import com.twitter.storage.client.manhattan.kv.ManhattanKVClientMtlsParams
import com.twitter.storehaus.ReadableStore
import com.twitter.storehaus_internal.manhattan.Athena
import com.twitter.storehaus_internal.manhattan.ManhattanRO
import com.twitter.storehaus_internal.manhattan.ManhattanROConfig
import com.twitter.storehaus_internal.util.ApplicationID
import com.twitter.storehaus_internal.util.DatasetName
import com.twitter.storehaus_internal.util.HDFSPath
import com.twitter.strato.client.Strato
import com.twitter.strato.client.{Client => StratoClient}
import com.twitter.strato.thrift.ScroogeConvImplicits._
import com.twitter.tweetypie.util.UserId
import com.twitter.util.Duration
import com.twitter.util.Future
import com.twitter.util.Throw
import com.twitter.util.Timer
import javax.inject.Inject
import javax.inject.Named
import scala.reflect.ClassTag

class LegacyRMS @Inject() (
  serviceIdentifier: ServiceIdentifier,
  cacheClient: Client,
  stats: StatsReceiver,
  decider: Decider,
  clientId: ClientId,
  timer: Timer,
  @Named("cacheHashKeyPrefix") val cacheHashKeyPrefix: String = "RMS",
  @Named("useContentRecommenderConfiguration") val useContentRecommenderConfiguration: Boolean =
    false) {

  private val mhMtlsParams: ManhattanKVClientMtlsParams = ManhattanKVClientMtlsParams(
    serviceIdentifier)
  private val rmsDecider = RepresentationManagerDecider(decider)
  val keyHasher: KeyHasher = KeyHasher.FNV1A_64

  private val embeddingCacheKeyBuilder =
    SimClustersEmbeddingIdCacheKeyBuilder(keyHasher.hashKey, cacheHashKeyPrefix)
  private val statsReceiver = stats.scope("representation_management")

  // Strato client, default timeout = 280ms
  val stratoClient: StratoClient =
    Strato.client
      .withMutualTls(serviceIdentifier)
      .build()

  // Builds a ThriftMux client builder for the Content-Recommender service.
  private def makeThriftClientBuilder(
    requestTimeout: Duration
  ): ThriftMux.Client = {
    ThriftMux.client
      .withClientId(clientId)
      .withMutualTls(serviceIdentifier)
      .withRequestTimeout(requestTimeout)
      .withStatsReceiver(statsReceiver.scope("clnt"))
      .withResponseClassifier {
        case ReqRep(_, Throw(_: ClientDiscardedRequestException)) => ResponseClass.Ignorable
      }
  }

  private def makeThriftClient[ThriftServiceType: ClassTag](
    dest: String,
    label: String,
    requestTimeout: Duration = 450.milliseconds
  ): ThriftServiceType = {
    makeThriftClientBuilder(requestTimeout)
      .build[ThriftServiceType](dest, label)
  }

  /* ***** SimClusters Embedding Stores ***** */

  implicit val simClustersEmbeddingIdInjection: Injection[SimClustersEmbeddingId, Array[Byte]] =
    BinaryScalaCodec(SimClustersEmbeddingId)
  implicit val simClustersEmbeddingInjection: Injection[ThriftSimClustersEmbedding, Array[Byte]] =
    BinaryScalaCodec(ThriftSimClustersEmbedding)
  implicit val simClustersMultiEmbeddingInjection: Injection[SimClustersMultiEmbedding, Array[
    Byte
  ]] =
    BinaryScalaCodec(SimClustersMultiEmbedding)
  implicit val simClustersMultiEmbeddingIdInjection: Injection[SimClustersMultiEmbeddingId, Array[
    Byte
  ]] =
    BinaryScalaCodec(SimClustersMultiEmbeddingId)

  def getEmbeddingsDataset(
    mhMtlsParams: ManhattanKVClientMtlsParams,
    datasetName: String
  ): ReadableStore[SimClustersEmbeddingId, ThriftSimClustersEmbedding] = {
    ManhattanRO.getReadableStoreWithMtls[SimClustersEmbeddingId, ThriftSimClustersEmbedding](
      ManhattanROConfig(
        HDFSPath(""), // not needed
        ApplicationID("content_recommender_athena"),
        DatasetName(datasetName), // this should be correct
        Athena
      ),
      mhMtlsParams
    )
  }

  lazy val logFavBasedLongestL2Tweet20M145K2020EmbeddingStore: ReadableStore[
    SimClustersEmbeddingId,
    SimClustersEmbedding
  ] = {
    val rawStore =
      PersistentTweetEmbeddingStore
        .longestL2NormTweetEmbeddingStoreManhattan(
          mhMtlsParams,
          PersistentTweetEmbeddingStore.LogFavBased20m145k2020Dataset,
          statsReceiver,
          maxLength = 10,
        ).mapValues(_.toThrift)

    val memcachedStore = ObservedMemcachedReadableStore.fromCacheClient(
      backingStore = rawStore,
      cacheClient = cacheClient,
      ttl = 15.minutes
    )(
      valueInjection = LZ4Injection.compose(BinaryScalaCodec(ThriftSimClustersEmbedding)),
      statsReceiver =
        statsReceiver.scope("log_fav_based_longest_l2_tweet_embedding_20m145k2020_mem_cache"),
      keyToString = { k =>
        s"scez_l2:${LogFavBasedTweet}_${ModelVersions.Model20M145K2020}_$k"
      }
    )

    val inMemoryCacheStore: ReadableStore[SimClustersEmbeddingId, SimClustersEmbedding] =
      memcachedStore
        .composeKeyMapping[SimClustersEmbeddingId] {
          case SimClustersEmbeddingId(
                LogFavLongestL2EmbeddingTweet,
                Model20m145k2020,
                InternalId.TweetId(tweetId)) =>
            tweetId
        }
        .mapValues(SimClustersEmbedding(_))

    ObservedCachedReadableStore.from[SimClustersEmbeddingId, SimClustersEmbedding](
      inMemoryCacheStore,
      ttl = 12.minute,
      maxKeys = 1048575,
      cacheName = "log_fav_based_longest_l2_tweet_embedding_20m145k2020_cache",
      windowSize = 10000L
    )(statsReceiver.scope("log_fav_based_longest_l2_tweet_embedding_20m145k2020_store"))
  }

  lazy val logFavBased20M145KUpdatedTweetEmbeddingStore: ReadableStore[
    SimClustersEmbeddingId,
    SimClustersEmbedding
  ] = {
    val rawStore =
      PersistentTweetEmbeddingStore
        .mostRecentTweetEmbeddingStoreManhattan(
          mhMtlsParams,
          PersistentTweetEmbeddingStore.LogFavBased20m145kUpdatedDataset,
          statsReceiver
        ).mapValues(_.toThrift)

    val memcachedStore = ObservedMemcachedReadableStore.fromCacheClient(
      backingStore = rawStore,
      cacheClient = cacheClient,
      ttl = 10.minutes
    )(
      valueInjection = LZ4Injection.compose(BinaryScalaCodec(ThriftSimClustersEmbedding)),
      statsReceiver = statsReceiver.scope("log_fav_based_tweet_embedding_mem_cache"),
      keyToString = { k =>
        // SimClusters_embedding_LZ4/embeddingType_modelVersion_tweetId
        s"scez:${LogFavBasedTweet}_${ModelVersions.Model20M145KUpdated}_$k"
      }
    )

    val inMemoryCacheStore: ReadableStore[SimClustersEmbeddingId, SimClustersEmbedding] = {
      memcachedStore
        .composeKeyMapping[SimClustersEmbeddingId] {
          case SimClustersEmbeddingId(
                LogFavBasedTweet,
                Model20m145kUpdated,
                InternalId.TweetId(tweetId)) =>
            tweetId
        }
        .mapValues(SimClustersEmbedding(_))
    }

    ObservedCachedReadableStore.from[SimClustersEmbeddingId, SimClustersEmbedding](
      inMemoryCacheStore,
      ttl = 5.minute,
      maxKeys = 1048575, // 200MB
      cacheName = "log_fav_based_tweet_embedding_cache",
      windowSize = 10000L
    )(statsReceiver.scope("log_fav_based_tweet_embedding_store"))
  }

  lazy val logFavBased20M145K2020TweetEmbeddingStore: ReadableStore[
    SimClustersEmbeddingId,
    SimClustersEmbedding
  ] = {
    val rawStore =
      PersistentTweetEmbeddingStore
        .mostRecentTweetEmbeddingStoreManhattan(
          mhMtlsParams,
          PersistentTweetEmbeddingStore.LogFavBased20m145k2020Dataset,
          statsReceiver,
          maxLength = 10,
        ).mapValues(_.toThrift)

    val memcachedStore = ObservedMemcachedReadableStore.fromCacheClient(
      backingStore = rawStore,
      cacheClient = cacheClient,
      ttl = 15.minutes
    )(
      valueInjection = LZ4Injection.compose(BinaryScalaCodec(ThriftSimClustersEmbedding)),
      statsReceiver = statsReceiver.scope("log_fav_based_tweet_embedding_20m145k2020_mem_cache"),
      keyToString = { k =>
        // SimClusters_embedding_LZ4/embeddingType_modelVersion_tweetId
        s"scez:${LogFavBasedTweet}_${ModelVersions.Model20M145K2020}_$k"
      }
    )

    val inMemoryCacheStore: ReadableStore[SimClustersEmbeddingId, SimClustersEmbedding] =
      memcachedStore
        .composeKeyMapping[SimClustersEmbeddingId] {
          case SimClustersEmbeddingId(
                LogFavBasedTweet,
                Model20m145k2020,
                InternalId.TweetId(tweetId)) =>
            tweetId
        }
        .mapValues(SimClustersEmbedding(_))

    ObservedCachedReadableStore.from[SimClustersEmbeddingId, SimClustersEmbedding](
      inMemoryCacheStore,
      ttl = 12.minute,
      maxKeys = 16777215,
      cacheName = "log_fav_based_tweet_embedding_20m145k2020_cache",
      windowSize = 10000L
    )(statsReceiver.scope("log_fav_based_tweet_embedding_20m145k2020_store"))
  }

  lazy val favBasedTfgTopicEmbedding2020Store: ReadableStore[
    SimClustersEmbeddingId,
    SimClustersEmbedding
  ] = {
    val stratoStore =
      StratoFetchableStore
        .withUnitView[SimClustersEmbeddingId, ThriftSimClustersEmbedding](
          stratoClient,
          "recommendations/simclusters_v2/embeddings/favBasedTFGTopic20M145K2020")

    val truncatedStore = stratoStore.mapValues { embedding =>
      SimClustersEmbedding(embedding, truncate = 50)
    }

    ObservedCachedReadableStore.from(
      ObservedReadableStore(truncatedStore)(
        statsReceiver.scope("fav_tfg_topic_embedding_2020_cache_backing_store")),
      ttl = 12.hours,
      maxKeys = 262143, // 200MB
      cacheName = "fav_tfg_topic_embedding_2020_cache",
      windowSize = 10000L
    )(statsReceiver.scope("fav_tfg_topic_embedding_2020_cache"))
  }

  lazy val logFavBasedApe20M145K2020EmbeddingStore: ReadableStore[
    SimClustersEmbeddingId,
    SimClustersEmbedding
  ] = {
    ObservedReadableStore(
      StratoFetchableStore
        .withUnitView[SimClustersEmbeddingId, ThriftSimClustersEmbedding](
          stratoClient,
          "recommendations/simclusters_v2/embeddings/logFavBasedAPE20M145K2020")
        .composeKeyMapping[SimClustersEmbeddingId] {
          case SimClustersEmbeddingId(
                AggregatableLogFavBasedProducer,
                Model20m145k2020,
                internalId) =>
            SimClustersEmbeddingId(AggregatableLogFavBasedProducer, Model20m145k2020, internalId)
        }
        .mapValues(embedding => SimClustersEmbedding(embedding, 50))
    )(statsReceiver.scope("aggregatable_producer_embeddings_by_logfav_score_2020"))
  }

  val interestService: InterestsThriftService.MethodPerEndpoint =
    makeThriftClient[InterestsThriftService.MethodPerEndpoint](
      "/s/interests-thrift-service/interests-thrift-service",
      "interests_thrift_service"
    )

  val interestsOptOutStore: InterestsOptOutStore = InterestsOptOutStore(interestService)

  // Cache up to 2^18 UTT entities, which should give close to a 100% cache hit rate.
  lazy val defaultCacheConfigV2: CacheConfigV2 = CacheConfigV2(262143)
  lazy val uttClientCacheConfigsV2: UttClientCacheConfigsV2 = UttClientCacheConfigsV2(
    getTaxonomyConfig = defaultCacheConfigV2,
    getUttTaxonomyConfig = defaultCacheConfigV2,
    getLeafIds = defaultCacheConfigV2,
    getLeafUttEntities = defaultCacheConfigV2
  )

  // CachedUttClient that goes through the StratoClient.
  lazy val cachedUttClientV2: CachedUttClientV2 = new CachedUttClientV2(
    stratoClient = stratoClient,
    env = Environment.Prod,
    cacheConfigs = uttClientCacheConfigsV2,
    statsReceiver = statsReceiver.scope("cached_utt_client")
  )

  lazy val semanticCoreTopicSeedStore: ReadableStore[
    SemanticCoreTopicSeedStore.Key,
    Seq[UserId]
  ] = {
    /*
     * Up to 1000 Long seeds per topic/language = 62.5kb per topic/language (worst case)
     * Assume ~10k active topic/languages ~= 650MB (worst case)
     */
    val underlying = new SemanticCoreTopicSeedStore(cachedUttClientV2, interestsOptOutStore)(
      statsReceiver.scope("semantic_core_topic_seed_store"))

    val memcacheStore = ObservedMemcachedReadableStore.fromCacheClient(
      backingStore = underlying,
      cacheClient = cacheClient,
      ttl = 12.hours
    )(
      valueInjection = SeqLongInjection,
      statsReceiver = statsReceiver.scope("topic_producer_seed_store_mem_cache"),
      keyToString = { k => s"tpss:${k.entityId}_${k.languageCode}" }
    )

    ObservedCachedReadableStore.from[SemanticCoreTopicSeedStore.Key, Seq[UserId]](
      store = memcacheStore,
      ttl = 6.hours,
      maxKeys = 20e3.toInt,
      cacheName = "topic_producer_seed_store_cache",
      windowSize = 5000
    )(statsReceiver.scope("topic_producer_seed_store_cache"))
  }

  lazy val logFavBasedApeEntity20M145K2020EmbeddingStore: ApeEntityEmbeddingStore = {
    val apeStore = logFavBasedApe20M145K2020EmbeddingStore.composeKeyMapping[UserId]({ id =>
      SimClustersEmbeddingId(
        AggregatableLogFavBasedProducer,
        Model20m145k2020,
        InternalId.UserId(id))
    })

    new ApeEntityEmbeddingStore(
      semanticCoreSeedStore = semanticCoreTopicSeedStore,
      aggregatableProducerEmbeddingStore = apeStore,
      statsReceiver = statsReceiver.scope("log_fav_based_ape_entity_2020_embedding_store"))
  }

  lazy val logFavBasedApeEntity20M145K2020EmbeddingCachedStore: ReadableStore[
    SimClustersEmbeddingId,
    SimClustersEmbedding
  ] = {
    val truncatedStore =
      logFavBasedApeEntity20M145K2020EmbeddingStore.mapValues(_.truncate(50).toThrift)

    val memcachedStore = ObservedMemcachedReadableStore
      .fromCacheClient(
        backingStore = truncatedStore,
        cacheClient = cacheClient,
        ttl = 12.hours
      )(
        valueInjection = LZ4Injection.compose(BinaryScalaCodec(ThriftSimClustersEmbedding)),
        statsReceiver = statsReceiver.scope("log_fav_based_ape_entity_2020_embedding_mem_cache"),
        keyToString = { k => embeddingCacheKeyBuilder.apply(k) }
      ).mapValues(SimClustersEmbedding(_))

    val inMemoryCachedStore =
      ObservedCachedReadableStore.from[SimClustersEmbeddingId, SimClustersEmbedding](
        memcachedStore,
        ttl = 6.hours,
        maxKeys = 262143,
        cacheName = "log_fav_based_ape_entity_2020_embedding_cache",
        windowSize = 10000L
      )(statsReceiver.scope("log_fav_based_ape_entity_2020_embedding_cached_store"))

    DeciderableReadableStore(
      inMemoryCachedStore,
      rmsDecider.deciderGateBuilder.idGateWithHashing[SimClustersEmbeddingId](
        DeciderKey.enableLogFavBasedApeEntity20M145K2020EmbeddingCachedStore),
      statsReceiver.scope("log_fav_based_ape_entity_2020_embedding_deciderable_store")
    )
  }

  lazy val relaxedLogFavBasedApe20M145K2020EmbeddingStore: ReadableStore[
    SimClustersEmbeddingId,
    SimClustersEmbedding
  ] = {
    ObservedReadableStore(
      StratoFetchableStore
        .withUnitView[SimClustersEmbeddingId, ThriftSimClustersEmbedding](
          stratoClient,
          "recommendations/simclusters_v2/embeddings/logFavBasedAPERelaxedFavEngagementThreshold20M145K2020")
        .composeKeyMapping[SimClustersEmbeddingId] {
          case SimClustersEmbeddingId(
                RelaxedAggregatableLogFavBasedProducer,
                Model20m145k2020,
                internalId) =>
            SimClustersEmbeddingId(
              RelaxedAggregatableLogFavBasedProducer,
              Model20m145k2020,
              internalId)
        }
        .mapValues(embedding => SimClustersEmbedding(embedding).truncate(50))
    )(statsReceiver.scope(
      "aggregatable_producer_embeddings_by_logfav_score_relaxed_fav_engagement_threshold_2020"))
  }

  lazy val relaxedLogFavBasedApe20M145K2020EmbeddingCachedStore: ReadableStore[
    SimClustersEmbeddingId,
    SimClustersEmbedding
  ] = {
    val truncatedStore =
      relaxedLogFavBasedApe20M145K2020EmbeddingStore.mapValues(_.truncate(50).toThrift)

    val memcachedStore = ObservedMemcachedReadableStore
      .fromCacheClient(
        backingStore = truncatedStore,
        cacheClient = cacheClient,
        ttl = 12.hours
      )(
        valueInjection = LZ4Injection.compose(BinaryScalaCodec(ThriftSimClustersEmbedding)),
        statsReceiver =
          statsReceiver.scope("relaxed_log_fav_based_ape_entity_2020_embedding_mem_cache"),
        keyToString = { k: SimClustersEmbeddingId => embeddingCacheKeyBuilder.apply(k) }
      ).mapValues(SimClustersEmbedding(_))

    ObservedCachedReadableStore.from[SimClustersEmbeddingId, SimClustersEmbedding](
      memcachedStore,
      ttl = 6.hours,
      maxKeys = 262143,
      cacheName = "relaxed_log_fav_based_ape_entity_2020_embedding_cache",
      windowSize = 10000L
    )(statsReceiver.scope("relaxed_log_fav_based_ape_entity_2020_embedding_cache_store"))
  }

  lazy val favBasedProducer20M145K2020EmbeddingStore: ReadableStore[
    SimClustersEmbeddingId,
    SimClustersEmbedding
  ] = {
    val underlyingStore = ProducerClusterEmbeddingReadableStores
      .getProducerTopKSimClusters2020EmbeddingsStore(
        mhMtlsParams
      ).composeKeyMapping[SimClustersEmbeddingId] {
        case SimClustersEmbeddingId(
              FavBasedProducer,
              Model20m145k2020,
              InternalId.UserId(userId)) =>
          userId
      }.mapValues { topSimClustersWithScore =>
        ThriftSimClustersEmbedding(topSimClustersWithScore.topClusters.take(10))
      }

    // Same memcache config as for favBasedUserInterestedIn20M145K2020Store
    val memcachedStore = ObservedMemcachedReadableStore
      .fromCacheClient(
        backingStore = underlyingStore,
        cacheClient = cacheClient,
        ttl = 24.hours
      )(
        valueInjection = LZ4Injection.compose(BinaryScalaCodec(ThriftSimClustersEmbedding)),
        statsReceiver = statsReceiver.scope("fav_based_producer_embedding_20M_145K_2020_mem_cache"),
        keyToString = { k => embeddingCacheKeyBuilder.apply(k) }
      ).mapValues(SimClustersEmbedding(_))

    ObservedCachedReadableStore.from[SimClustersEmbeddingId, SimClustersEmbedding](
      memcachedStore,
      ttl = 12.hours,
      maxKeys = 16777215,
      cacheName = "fav_based_producer_embedding_20M_145K_2020_embedding_cache",
      windowSize = 10000L
    )(statsReceiver.scope("fav_based_producer_embedding_20M_145K_2020_embedding_store"))
  }

  // Production
  lazy val interestedIn20M145KUpdatedStore: ReadableStore[UserId, ClustersUserIsInterestedIn] = {
    UserInterestedInReadableStore.defaultStoreWithMtls(
      mhMtlsParams,
      modelVersion = ModelVersions.Model20M145KUpdated
    )
  }

  // Production
  lazy val interestedIn20M145K2020Store: ReadableStore[UserId, ClustersUserIsInterestedIn] = {
    UserInterestedInReadableStore.defaultStoreWithMtls(
      mhMtlsParams,
      modelVersion = ModelVersions.Model20M145K2020
    )
  }

  // Production
  lazy val InterestedInFromPE20M145KUpdatedStore: ReadableStore[
    UserId,
    ClustersUserIsInterestedIn
  ] = {
    UserInterestedInReadableStore.defaultIIPEStoreWithMtls(
      mhMtlsParams,
      modelVersion = ModelVersions.Model20M145KUpdated)
  }

  lazy val simClustersInterestedInStore: ReadableStore[
    (UserId, ModelVersion),
    ClustersUserIsInterestedIn
  ] = {
    new ReadableStore[(UserId, ModelVersion), ClustersUserIsInterestedIn] {
      override def get(k: (UserId, ModelVersion)): Future[Option[ClustersUserIsInterestedIn]] = {
        k match {
          case (userId, Model20m145kUpdated) =>
            interestedIn20M145KUpdatedStore.get(userId)
          case (userId, Model20m145k2020) =>
            interestedIn20M145K2020Store.get(userId)
          case _ =>
            Future.None
        }
      }
    }
  }
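
  // Illustrative sketch, not part of the original code: the store above is an
  // instance of a small composite-store pattern. It dispatches on part of the
  // key to one of several underlying stores and returns Future.None for any
  // unsupported combination. A generic form of the same idea, with a
  // hypothetical `dispatch` helper:
  object KeyDispatchSketch {
    def dispatch[K1, K2, V](
      routes: Map[K2, ReadableStore[K1, V]]
    ): ReadableStore[(K1, K2), V] =
      new ReadableStore[(K1, K2), V] {
        override def get(k: (K1, K2)): Future[Option[V]] =
          routes.get(k._2) match {
            case Some(store) => store.get(k._1) // route on the second key component
            case None => Future.None // no store registered for this combination
          }
      }
  }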
lazy val simClustersInterestedInFromProducerEmbeddingsStore: ReadableStore[
|
||||
(UserId, ModelVersion),
|
||||
ClustersUserIsInterestedIn
|
||||
] = {
|
||||
new ReadableStore[(UserId, ModelVersion), ClustersUserIsInterestedIn] {
|
||||
override def get(k: (UserId, ModelVersion)): Future[Option[ClustersUserIsInterestedIn]] = {
|
||||
k match {
|
||||
case (userId, ModelVersion.Model20m145kUpdated) =>
|
||||
InterestedInFromPE20M145KUpdatedStore.get(userId)
|
||||
case _ =>
|
||||
Future.None
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
lazy val userInterestedInStore =
|
||||
new twistly.interestedin.EmbeddingStore(
|
||||
interestedInStore = simClustersInterestedInStore,
|
||||
interestedInFromProducerEmbeddingStore = simClustersInterestedInFromProducerEmbeddingsStore,
|
||||
statsReceiver = statsReceiver
|
||||
)
|
||||
|
||||
// Production
|
||||
lazy val favBasedUserInterestedIn20M145KUpdatedStore: ReadableStore[
|
||||
SimClustersEmbeddingId,
|
||||
SimClustersEmbedding
|
||||
] = {
|
||||
val underlyingStore =
|
||||
UserInterestedInReadableStore
|
||||
.defaultSimClustersEmbeddingStoreWithMtls(
|
||||
mhMtlsParams,
|
||||
EmbeddingType.FavBasedUserInterestedIn,
|
||||
ModelVersion.Model20m145kUpdated)
|
||||
.mapValues(_.toThrift)
|
||||
|
||||
val memcachedStore = ObservedMemcachedReadableStore
|
||||
.fromCacheClient(
|
||||
backingStore = underlyingStore,
|
||||
cacheClient = cacheClient,
|
||||
ttl = 12.hours
|
||||
)(
|
||||
valueInjection = LZ4Injection.compose(BinaryScalaCodec(ThriftSimClustersEmbedding)),
|
||||
statsReceiver = statsReceiver.scope("fav_based_user_interested_in_mem_cache"),
|
||||
keyToString = { k => embeddingCacheKeyBuilder.apply(k) }
|
||||
).mapValues(SimClustersEmbedding(_))
|
||||
|
||||
ObservedCachedReadableStore.from[SimClustersEmbeddingId, SimClustersEmbedding](
|
||||
memcachedStore,
|
||||
ttl = 6.hours,
|
||||
maxKeys = 262143,
|
||||
cacheName = "fav_based_user_interested_in_cache",
|
||||
windowSize = 10000L
|
||||
)(statsReceiver.scope("fav_based_user_interested_in_store"))
|
||||
}
|
||||
|
||||
// Production
|
||||
lazy val LogFavBasedInterestedInFromAPE20M145K2020Store: ReadableStore[
|
||||
SimClustersEmbeddingId,
|
||||
SimClustersEmbedding
|
||||
] = {
|
||||
val underlyingStore =
|
||||
UserInterestedInReadableStore
|
||||
.defaultIIAPESimClustersEmbeddingStoreWithMtls(
|
||||
mhMtlsParams,
|
||||
EmbeddingType.LogFavBasedUserInterestedInFromAPE,
|
||||
ModelVersion.Model20m145k2020)
|
||||
.mapValues(_.toThrift)
|
||||
|
||||
val memcachedStore = ObservedMemcachedReadableStore
|
||||
.fromCacheClient(
|
||||
backingStore = underlyingStore,
|
||||
cacheClient = cacheClient,
|
||||
ttl = 12.hours
|
||||
)(
|
||||
valueInjection = LZ4Injection.compose(BinaryScalaCodec(ThriftSimClustersEmbedding)),
|
||||
statsReceiver = statsReceiver.scope("log_fav_based_user_interested_in_from_ape_mem_cache"),
|
||||
keyToString = { k => embeddingCacheKeyBuilder.apply(k) }
|
||||
).mapValues(SimClustersEmbedding(_))
|
||||
|
||||
ObservedCachedReadableStore.from[SimClustersEmbeddingId, SimClustersEmbedding](
|
||||
memcachedStore,
|
||||
ttl = 6.hours,
|
||||
maxKeys = 262143,
|
||||
cacheName = "log_fav_based_user_interested_in_from_ape_cache",
|
||||
windowSize = 10000L
|
||||
)(statsReceiver.scope("log_fav_based_user_interested_in_from_ape_store"))
|
||||
}
|
||||
|
||||
// Production
|
||||
lazy val FollowBasedInterestedInFromAPE20M145K2020Store: ReadableStore[
|
||||
SimClustersEmbeddingId,
|
||||
SimClustersEmbedding
|
||||
] = {
|
||||
val underlyingStore =
|
||||
UserInterestedInReadableStore
|
||||
.defaultIIAPESimClustersEmbeddingStoreWithMtls(
|
||||
mhMtlsParams,
|
||||
EmbeddingType.FollowBasedUserInterestedInFromAPE,
|
||||
ModelVersion.Model20m145k2020)
|
||||
.mapValues(_.toThrift)
|
||||
|
||||
val memcachedStore = ObservedMemcachedReadableStore
|
||||
.fromCacheClient(
|
||||
backingStore = underlyingStore,
|
||||
cacheClient = cacheClient,
|
||||
ttl = 12.hours
|
||||
)(
|
||||
valueInjection = LZ4Injection.compose(BinaryScalaCodec(ThriftSimClustersEmbedding)),
|
||||
statsReceiver = statsReceiver.scope("follow_based_user_interested_in_from_ape_mem_cache"),
|
||||
keyToString = { k => embeddingCacheKeyBuilder.apply(k) }
|
||||
).mapValues(SimClustersEmbedding(_))
|
||||
|
||||
ObservedCachedReadableStore.from[SimClustersEmbeddingId, SimClustersEmbedding](
|
||||
memcachedStore,
|
||||
ttl = 6.hours,
|
||||
maxKeys = 262143,
|
||||
cacheName = "follow_based_user_interested_in_from_ape_cache",
|
||||
windowSize = 10000L
|
||||
)(statsReceiver.scope("follow_based_user_interested_in_from_ape_store"))
|
||||
}
|
||||
|
||||
// production
|
||||
lazy val favBasedUserInterestedIn20M145K2020Store: ReadableStore[
|
||||
SimClustersEmbeddingId,
|
||||
SimClustersEmbedding
|
||||
] = {
|
||||
val underlyingStore: ReadableStore[SimClustersEmbeddingId, ThriftSimClustersEmbedding] =
|
||||
UserInterestedInReadableStore
|
||||
.defaultSimClustersEmbeddingStoreWithMtls(
|
||||
mhMtlsParams,
|
||||
EmbeddingType.FavBasedUserInterestedIn,
|
||||
ModelVersion.Model20m145k2020).mapValues(_.toThrift)
|
||||
|
||||
ObservedMemcachedReadableStore
|
||||
.fromCacheClient(
|
||||
backingStore = underlyingStore,
|
||||
cacheClient = cacheClient,
|
||||
ttl = 12.hours
|
||||
)(
|
||||
valueInjection = LZ4Injection.compose(BinaryScalaCodec(ThriftSimClustersEmbedding)),
|
||||
statsReceiver = statsReceiver.scope("fav_based_user_interested_in_2020_mem_cache"),
|
||||
keyToString = { k => embeddingCacheKeyBuilder.apply(k) }
|
||||
).mapValues(SimClustersEmbedding(_))
|
||||
}
|
||||
|
||||
// Production
|
||||
lazy val logFavBasedUserInterestedIn20M145K2020Store: ReadableStore[
|
||||
SimClustersEmbeddingId,
|
||||
SimClustersEmbedding
|
||||
] = {
|
||||
val underlyingStore =
|
||||
UserInterestedInReadableStore
|
||||
.defaultSimClustersEmbeddingStoreWithMtls(
|
||||
mhMtlsParams,
|
||||
EmbeddingType.LogFavBasedUserInterestedIn,
|
||||
ModelVersion.Model20m145k2020)
|
||||
|
||||
val memcachedStore = ObservedMemcachedReadableStore
|
||||
.fromCacheClient(
|
||||
backingStore = underlyingStore.mapValues(_.toThrift),
|
||||
cacheClient = cacheClient,
|
||||
ttl = 12.hours
|
||||
)(
|
||||
valueInjection = LZ4Injection.compose(BinaryScalaCodec(ThriftSimClustersEmbedding)),
|
||||
statsReceiver = statsReceiver.scope("log_fav_based_user_interested_in_2020_store"),
|
||||
keyToString = { k => embeddingCacheKeyBuilder.apply(k) }
|
||||
).mapValues(SimClustersEmbedding(_))
|
||||
|
||||
ObservedCachedReadableStore.from[SimClustersEmbeddingId, SimClustersEmbedding](
|
||||
memcachedStore,
|
||||
ttl = 6.hours,
|
||||
maxKeys = 262143,
|
||||
cacheName = "log_fav_based_user_interested_in_2020_cache",
|
||||
windowSize = 10000L
|
||||
)(statsReceiver.scope("log_fav_based_user_interested_in_2020_store"))
|
||||
}

  // Production
  lazy val favBasedUserInterestedInFromPE20M145KUpdatedStore: ReadableStore[
    SimClustersEmbeddingId,
    SimClustersEmbedding
  ] = {
    val underlyingStore =
      UserInterestedInReadableStore
        .defaultIIPESimClustersEmbeddingStoreWithMtls(
          mhMtlsParams,
          EmbeddingType.FavBasedUserInterestedInFromPE,
          ModelVersion.Model20m145kUpdated)
        .mapValues(_.toThrift)

    val memcachedStore = ObservedMemcachedReadableStore
      .fromCacheClient(
        backingStore = underlyingStore,
        cacheClient = cacheClient,
        ttl = 12.hours
      )(
        valueInjection = LZ4Injection.compose(BinaryScalaCodec(ThriftSimClustersEmbedding)),
        statsReceiver = statsReceiver.scope("fav_based_user_interested_in_from_pe_mem_cache"),
        keyToString = { k => embeddingCacheKeyBuilder.apply(k) }
      ).mapValues(SimClustersEmbedding(_))

    ObservedCachedReadableStore.from[SimClustersEmbeddingId, SimClustersEmbedding](
      memcachedStore,
      ttl = 6.hours,
      maxKeys = 262143,
      cacheName = "fav_based_user_interested_in_from_pe_cache",
      windowSize = 10000L
    )(statsReceiver.scope("fav_based_user_interested_in_from_pe_cache"))
  }

  private val underlyingStores: Map[
    (EmbeddingType, ModelVersion),
    ReadableStore[SimClustersEmbeddingId, SimClustersEmbedding]
  ] = Map(
    // Tweet Embeddings
    (LogFavBasedTweet, Model20m145kUpdated) -> logFavBased20M145KUpdatedTweetEmbeddingStore,
    (LogFavBasedTweet, Model20m145k2020) -> logFavBased20M145K2020TweetEmbeddingStore,
    (
      LogFavLongestL2EmbeddingTweet,
      Model20m145k2020) -> logFavBasedLongestL2Tweet20M145K2020EmbeddingStore,
    // Entity Embeddings
    (FavTfgTopic, Model20m145k2020) -> favBasedTfgTopicEmbedding2020Store,
    (
      LogFavBasedKgoApeTopic,
      Model20m145k2020) -> logFavBasedApeEntity20M145K2020EmbeddingCachedStore,
    // KnownFor Embeddings
    (FavBasedProducer, Model20m145k2020) -> favBasedProducer20M145K2020EmbeddingStore,
    (
      RelaxedAggregatableLogFavBasedProducer,
      Model20m145k2020) -> relaxedLogFavBasedApe20M145K2020EmbeddingCachedStore,
    // InterestedIn Embeddings
    (
      LogFavBasedUserInterestedInFromAPE,
      Model20m145k2020) -> LogFavBasedInterestedInFromAPE20M145K2020Store,
    (
      FollowBasedUserInterestedInFromAPE,
      Model20m145k2020) -> FollowBasedInterestedInFromAPE20M145K2020Store,
    (FavBasedUserInterestedIn, Model20m145kUpdated) -> favBasedUserInterestedIn20M145KUpdatedStore,
    (FavBasedUserInterestedIn, Model20m145k2020) -> favBasedUserInterestedIn20M145K2020Store,
    (LogFavBasedUserInterestedIn, Model20m145k2020) -> logFavBasedUserInterestedIn20M145K2020Store,
    (
      FavBasedUserInterestedInFromPE,
      Model20m145kUpdated) -> favBasedUserInterestedInFromPE20M145KUpdatedStore,
    (FilteredUserInterestedIn, Model20m145kUpdated) -> userInterestedInStore,
    (FilteredUserInterestedIn, Model20m145k2020) -> userInterestedInStore,
    (FilteredUserInterestedInFromPE, Model20m145kUpdated) -> userInterestedInStore,
    (UnfilteredUserInterestedIn, Model20m145kUpdated) -> userInterestedInStore,
    (UnfilteredUserInterestedIn, Model20m145k2020) -> userInterestedInStore,
  )

  val simClustersEmbeddingStore: ReadableStore[SimClustersEmbeddingId, SimClustersEmbedding] = {
    val underlying: ReadableStore[SimClustersEmbeddingId, SimClustersEmbedding] =
      SimClustersEmbeddingStore.buildWithDecider(
        underlyingStores = underlyingStores,
        decider = rmsDecider.decider,
        statsReceiver = statsReceiver.scope("simClusters_embeddings_store_deciderable")
      )

    val underlyingWithTimeout: ReadableStore[SimClustersEmbeddingId, SimClustersEmbedding] =
      new ReadableStoreWithTimeout(
        rs = underlying,
        decider = rmsDecider.decider,
        enableTimeoutDeciderKey = DeciderConstants.enableSimClustersEmbeddingStoreTimeouts,
        timeoutValueKey = DeciderConstants.simClustersEmbeddingStoreTimeoutValueMillis,
        timer = timer,
        statsReceiver = statsReceiver.scope("simClusters_embedding_store_timeouts")
      )

    ObservedReadableStore(
      store = underlyingWithTimeout
    )(statsReceiver.scope("simClusters_embeddings_store"))
  }
}
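
The simClustersEmbeddingStore above bounds each read with a decider-controlled timeout via ReadableStoreWithTimeout. A minimal sketch of the timeout half of that pattern (ours, not the production class; it assumes com.twitter.util.Future#within and elides the decider lookup):

import com.twitter.conversions.DurationOps._
import com.twitter.storehaus.ReadableStore
import com.twitter.util.{Future, Timer}

// Bound each read: the Future fails with a TimeoutException if it takes too long.
def withTimeout[K, V](
  underlying: ReadableStore[K, V],
  timeoutMillis: Long,
  timer: Timer
): ReadableStore[K, V] = new ReadableStore[K, V] {
  override def get(k: K): Future[Option[V]] =
    underlying.get(k).within(timer, timeoutMillis.milliseconds)
}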

@ -0,0 +1,18 @@
scala_library(
    compiler_option_sets = ["fatal_warnings"],
    platform = "java8",
    tags = ["bazel-compatible"],
    dependencies = [
        "finagle-internal/mtls/src/main/scala/com/twitter/finagle/mtls/authentication",
        "finagle/finagle-stats",
        "finatra/inject/inject-core/src/main/scala",
        "frigate/frigate-common/src/main/scala/com/twitter/frigate/common/util",
        "interests-service/thrift/src/main/thrift:thrift-scala",
        "representation-manager/server/src/main/scala/com/twitter/representation_manager/common",
        "servo/util",
        "src/scala/com/twitter/storehaus_internal/manhattan",
        "src/scala/com/twitter/storehaus_internal/memcache",
        "src/scala/com/twitter/storehaus_internal/util",
        "strato/src/main/scala/com/twitter/strato/client",
    ],
)

@ -0,0 +1,34 @@
package com.twitter.representation_manager.modules

import com.google.inject.Provides
import com.twitter.finagle.memcached.Client
import javax.inject.Singleton
import com.twitter.conversions.DurationOps._
import com.twitter.inject.TwitterModule
import com.twitter.finagle.mtls.authentication.ServiceIdentifier
import com.twitter.finagle.stats.StatsReceiver
import com.twitter.storehaus_internal.memcache.MemcacheStore
import com.twitter.storehaus_internal.util.ClientName
import com.twitter.storehaus_internal.util.ZkEndPoint

object CacheModule extends TwitterModule {

  private val cacheDest = flag[String]("cache_module.dest", "Path to memcache service")
  private val timeout = flag[Int]("memcache.timeout", "Memcache client timeout")
  private val retries = flag[Int]("memcache.retries", "Memcache timeout retries")

  @Singleton
  @Provides
  def providesCache(
    serviceIdentifier: ServiceIdentifier,
    stats: StatsReceiver
  ): Client =
    MemcacheStore.memcachedClient(
      name = ClientName("memcache_representation_manager"),
      dest = ZkEndPoint(cacheDest()),
      timeout = timeout().milliseconds,
      retries = retries(),
      statsReceiver = stats.scope("cache_client"),
      serviceIdentifier = serviceIdentifier
    )
}
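
The three flags above have no in-code defaults, so they must be supplied at process startup (typically from the service's deploy configuration). An illustrative invocation; the dest path and values here are made up, not taken from this repo:

-cache_module.dest=/srv#/prod/local/cache/representation_manager \
-memcache.timeout=100 \
-memcache.retries=2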

@ -0,0 +1,40 @@
package com.twitter.representation_manager.modules

import com.google.inject.Provides
import com.twitter.conversions.DurationOps._
import com.twitter.finagle.ThriftMux
import com.twitter.finagle.mtls.authentication.ServiceIdentifier
import com.twitter.finagle.mtls.client.MtlsStackClient.MtlsThriftMuxClientSyntax
import com.twitter.finagle.mux.ClientDiscardedRequestException
import com.twitter.finagle.service.ReqRep
import com.twitter.finagle.service.ResponseClass
import com.twitter.finagle.stats.StatsReceiver
import com.twitter.finagle.thrift.ClientId
import com.twitter.inject.TwitterModule
import com.twitter.interests.thriftscala.InterestsThriftService
import com.twitter.util.Throw
import javax.inject.Singleton

object InterestsThriftClientModule extends TwitterModule {

  @Singleton
  @Provides
  def providesInterestsThriftClient(
    clientId: ClientId,
    serviceIdentifier: ServiceIdentifier,
    statsReceiver: StatsReceiver
  ): InterestsThriftService.MethodPerEndpoint = {
    ThriftMux.client
      .withClientId(clientId)
      .withMutualTls(serviceIdentifier)
      .withRequestTimeout(450.milliseconds)
      .withStatsReceiver(statsReceiver.scope("InterestsThriftClient"))
      // Requests discarded by the caller (e.g. cancelled upstream) are classified as
      // Ignorable so they do not count against this client's success rate.
      .withResponseClassifier {
        case ReqRep(_, Throw(_: ClientDiscardedRequestException)) => ResponseClass.Ignorable
      }
      .build[InterestsThriftService.MethodPerEndpoint](
        dest = "/s/interests-thrift-service/interests-thrift-service",
        label = "interests_thrift_service"
      )
  }
}

@ -0,0 +1,18 @@
package com.twitter.representation_manager.modules

import com.google.inject.Provides
import com.twitter.inject.TwitterModule
import javax.inject.Named
import javax.inject.Singleton

object LegacyRMSConfigModule extends TwitterModule {
  @Singleton
  @Provides
  @Named("cacheHashKeyPrefix")
  def providesCacheHashKeyPrefix: String = "RMS"

  @Singleton
  @Provides
  @Named("useContentRecommenderConfiguration")
  def providesUseContentRecommenderConfiguration: Boolean = false
}

@ -0,0 +1,24 @@
package com.twitter.representation_manager.modules

import com.google.inject.Provides
import javax.inject.Singleton
import com.twitter.inject.TwitterModule
import com.twitter.decider.Decider
import com.twitter.finagle.mtls.authentication.ServiceIdentifier
import com.twitter.representation_manager.common.RepresentationManagerDecider
import com.twitter.storage.client.manhattan.kv.ManhattanKVClientMtlsParams

object StoreModule extends TwitterModule {
  @Singleton
  @Provides
  def providesMhMtlsParams(
    serviceIdentifier: ServiceIdentifier
  ): ManhattanKVClientMtlsParams = ManhattanKVClientMtlsParams(serviceIdentifier)

  @Singleton
  @Provides
  def providesRmsDecider(
    decider: Decider
  ): RepresentationManagerDecider = RepresentationManagerDecider(decider)
}

@ -0,0 +1,13 @@
package com.twitter.representation_manager.modules

import com.google.inject.Provides
import com.twitter.finagle.util.DefaultTimer
import com.twitter.inject.TwitterModule
import com.twitter.util.Timer
import javax.inject.Singleton

object TimerModule extends TwitterModule {
  @Singleton
  @Provides
  def providesTimer: Timer = DefaultTimer
}

@ -0,0 +1,39 @@
package com.twitter.representation_manager.modules

import com.google.inject.Provides
import com.twitter.escherbird.util.uttclient.CacheConfigV2
import com.twitter.escherbird.util.uttclient.CachedUttClientV2
import com.twitter.escherbird.util.uttclient.UttClientCacheConfigsV2
import com.twitter.escherbird.utt.strato.thriftscala.Environment
import com.twitter.finagle.stats.StatsReceiver
import com.twitter.inject.TwitterModule
import com.twitter.strato.client.{Client => StratoClient}
import javax.inject.Singleton

object UttClientModule extends TwitterModule {

  @Singleton
  @Provides
  def providesUttClient(
    stratoClient: StratoClient,
    statsReceiver: StatsReceiver
  ): CachedUttClientV2 = {
    // Cache up to 2^18 - 1 (262143) UTT entities per endpoint, which should give an
    // effectively 100% cache hit rate.
    val defaultCacheConfigV2: CacheConfigV2 = CacheConfigV2(262143)

    val uttClientCacheConfigsV2: UttClientCacheConfigsV2 = UttClientCacheConfigsV2(
      getTaxonomyConfig = defaultCacheConfigV2,
      getUttTaxonomyConfig = defaultCacheConfigV2,
      getLeafIds = defaultCacheConfigV2,
      getLeafUttEntities = defaultCacheConfigV2
    )

    // The cached UTT client fetches taxonomy data through the StratoClient.
    new CachedUttClientV2(
      stratoClient = stratoClient,
      env = Environment.Prod,
      cacheConfigs = uttClientCacheConfigsV2,
      statsReceiver = statsReceiver.scope("cached_utt_client")
    )
  }
}

@ -0,0 +1,16 @@
scala_library(
    compiler_option_sets = ["fatal_warnings"],
    platform = "java8",
    tags = ["bazel-compatible"],
    dependencies = [
        "content-recommender/server/src/main/scala/com/twitter/contentrecommender:representation-manager-deps",
        "frigate/frigate-common/src/main/scala/com/twitter/frigate/common/util",
        "hermit/hermit-core/src/main/scala/com/twitter/hermit/store/common",
        "representation-manager/server/src/main/scala/com/twitter/representation_manager/common",
        "src/scala/com/twitter/simclusters_v2/stores",
        "src/scala/com/twitter/simclusters_v2/summingbird/stores",
        "src/thrift/com/twitter/simclusters_v2:simclusters_v2-thrift-scala",
        "storage/clients/manhattan/client/src/main/scala",
        "tweetypie/src/scala/com/twitter/tweetypie/util",
    ],
)

@ -0,0 +1,39 @@
package com.twitter.representation_manager.store

import com.twitter.servo.decider.DeciderKeyEnum

object DeciderConstants {
  // Deciders inherited from CR and RSX that are only used in LegacyRMS.
  // Their values are controlled by CR's and RSX's decider yml files and dashboards;
  // they will be removed once the migration is complete.
  val enableLogFavBasedApeEntity20M145KUpdatedEmbeddingCachedStore =
    "enableLogFavBasedApeEntity20M145KUpdatedEmbeddingCachedStore"

  val enableLogFavBasedApeEntity20M145K2020EmbeddingCachedStore =
    "enableLogFavBasedApeEntity20M145K2020EmbeddingCachedStore"

  val enablelogFavBased20M145K2020TweetEmbeddingStoreTimeouts =
    "enable_log_fav_based_tweet_embedding_20m145k2020_timeouts"
  val logFavBased20M145K2020TweetEmbeddingStoreTimeoutValueMillis =
    "log_fav_based_tweet_embedding_20m145k2020_timeout_value_millis"

  val enablelogFavBased20M145KUpdatedTweetEmbeddingStoreTimeouts =
    "enable_log_fav_based_tweet_embedding_20m145kUpdated_timeouts"
  val logFavBased20M145KUpdatedTweetEmbeddingStoreTimeoutValueMillis =
    "log_fav_based_tweet_embedding_20m145kUpdated_timeout_value_millis"

  val enableSimClustersEmbeddingStoreTimeouts = "enable_sim_clusters_embedding_store_timeouts"
  val simClustersEmbeddingStoreTimeoutValueMillis =
    "sim_clusters_embedding_store_timeout_value_millis"
}

// Necessary for using servo Gates
object DeciderKey extends DeciderKeyEnum {
  val enableLogFavBasedApeEntity20M145KUpdatedEmbeddingCachedStore: Value = Value(
    DeciderConstants.enableLogFavBasedApeEntity20M145KUpdatedEmbeddingCachedStore
  )

  val enableLogFavBasedApeEntity20M145K2020EmbeddingCachedStore: Value = Value(
    DeciderConstants.enableLogFavBasedApeEntity20M145K2020EmbeddingCachedStore
  )
}
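
These decider keys follow the usual availability semantics: 0 sends every request to an empty store, and higher availabilities route the corresponding fraction to the real one. A minimal sketch of that gating (ours, not the production wiring; it assumes com.twitter.decider.Decider#isAvailable, which samples the key's current availability):

import com.twitter.decider.Decider
import com.twitter.storehaus.ReadableStore
import com.twitter.util.Future

// Consult the underlying store only when the decider key is currently available.
def deciderGated[K, V](
  decider: Decider,
  deciderKey: String,
  underlying: ReadableStore[K, V]
): ReadableStore[K, V] = new ReadableStore[K, V] {
  override def get(k: K): Future[Option[V]] =
    if (decider.isAvailable(deciderKey)) underlying.get(k) else Future.None
}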

@ -0,0 +1,198 @@
package com.twitter.representation_manager.store

import com.twitter.contentrecommender.store.ApeEntityEmbeddingStore
import com.twitter.contentrecommender.store.InterestsOptOutStore
import com.twitter.contentrecommender.store.SemanticCoreTopicSeedStore
import com.twitter.conversions.DurationOps._
import com.twitter.escherbird.util.uttclient.CachedUttClientV2
import com.twitter.finagle.memcached.Client
import com.twitter.finagle.stats.StatsReceiver
import com.twitter.frigate.common.store.strato.StratoFetchableStore
import com.twitter.frigate.common.util.SeqLongInjection
import com.twitter.hermit.store.common.ObservedCachedReadableStore
import com.twitter.hermit.store.common.ObservedMemcachedReadableStore
import com.twitter.hermit.store.common.ObservedReadableStore
import com.twitter.interests.thriftscala.InterestsThriftService
import com.twitter.representation_manager.common.MemCacheConfig
import com.twitter.representation_manager.common.RepresentationManagerDecider
import com.twitter.simclusters_v2.common.SimClustersEmbedding
import com.twitter.simclusters_v2.stores.SimClustersEmbeddingStore
import com.twitter.simclusters_v2.thriftscala.EmbeddingType
import com.twitter.simclusters_v2.thriftscala.EmbeddingType._
import com.twitter.simclusters_v2.thriftscala.InternalId
import com.twitter.simclusters_v2.thriftscala.ModelVersion
import com.twitter.simclusters_v2.thriftscala.ModelVersion._
import com.twitter.simclusters_v2.thriftscala.SimClustersEmbeddingId
import com.twitter.simclusters_v2.thriftscala.TopicId
import com.twitter.simclusters_v2.thriftscala.LocaleEntityId
import com.twitter.simclusters_v2.thriftscala.{SimClustersEmbedding => ThriftSimClustersEmbedding}
import com.twitter.storage.client.manhattan.kv.ManhattanKVClientMtlsParams
import com.twitter.storehaus.ReadableStore
import com.twitter.strato.client.{Client => StratoClient}
import com.twitter.tweetypie.util.UserId
import javax.inject.Inject

class TopicSimClustersEmbeddingStore @Inject() (
  stratoClient: StratoClient,
  cacheClient: Client,
  globalStats: StatsReceiver,
  mhMtlsParams: ManhattanKVClientMtlsParams,
  rmsDecider: RepresentationManagerDecider,
  interestService: InterestsThriftService.MethodPerEndpoint,
  uttClient: CachedUttClientV2) {

  private val stats = globalStats.scope(this.getClass.getSimpleName)
  private val interestsOptOutStore = InterestsOptOutStore(interestService)

  /**
   * Note: this is NOT an embedding store. It returns the list of author account ids
   * that we use to represent each topic.
   */
  private val semanticCoreTopicSeedStore: ReadableStore[
    SemanticCoreTopicSeedStore.Key,
    Seq[UserId]
  ] = {
    /*
     Up to 1000 Long seeds per topic/language = 62.5kb per topic/language (worst case)
     Assume ~10k active topic/languages ~= 650MB (worst case)
     */
    val underlying = new SemanticCoreTopicSeedStore(uttClient, interestsOptOutStore)(
      stats.scope("semantic_core_topic_seed_store"))

    val memcacheStore = ObservedMemcachedReadableStore.fromCacheClient(
      backingStore = underlying,
      cacheClient = cacheClient,
      ttl = 12.hours)(
      valueInjection = SeqLongInjection,
      statsReceiver = stats.scope("topic_producer_seed_store_mem_cache"),
      keyToString = { k => s"tpss:${k.entityId}_${k.languageCode}" }
    )

    ObservedCachedReadableStore.from[SemanticCoreTopicSeedStore.Key, Seq[UserId]](
      store = memcacheStore,
      ttl = 6.hours,
      maxKeys = 20e3.toInt,
      cacheName = "topic_producer_seed_store_cache",
      windowSize = 5000
    )(stats.scope("topic_producer_seed_store_cache"))
  }

  private val favBasedTfgTopicEmbedding20m145k2020Store: ReadableStore[
    SimClustersEmbeddingId,
    SimClustersEmbedding
  ] = {
    val rawStore =
      StratoFetchableStore
        .withUnitView[SimClustersEmbeddingId, ThriftSimClustersEmbedding](
          stratoClient,
          "recommendations/simclusters_v2/embeddings/favBasedTFGTopic20M145K2020").mapValues(
          embedding => SimClustersEmbedding(embedding, truncate = 50).toThrift)
        .composeKeyMapping[LocaleEntityId] { localeEntityId =>
          SimClustersEmbeddingId(
            FavTfgTopic,
            Model20m145k2020,
            InternalId.LocaleEntityId(localeEntityId))
        }

    buildLocaleEntityIdMemCacheStore(rawStore, FavTfgTopic, Model20m145k2020)
  }

  private val logFavBasedApeEntity20M145K2020EmbeddingStore: ReadableStore[
    SimClustersEmbeddingId,
    SimClustersEmbedding
  ] = {
    val apeStore = StratoFetchableStore
      .withUnitView[SimClustersEmbeddingId, ThriftSimClustersEmbedding](
        stratoClient,
        "recommendations/simclusters_v2/embeddings/logFavBasedAPE20M145K2020")
      .mapValues(embedding => SimClustersEmbedding(embedding, truncate = 50))
      .composeKeyMapping[UserId]({ id =>
        SimClustersEmbeddingId(
          AggregatableLogFavBasedProducer,
          Model20m145k2020,
          InternalId.UserId(id))
      })
    val rawStore = new ApeEntityEmbeddingStore(
      semanticCoreSeedStore = semanticCoreTopicSeedStore,
      aggregatableProducerEmbeddingStore = apeStore,
      statsReceiver = stats.scope("log_fav_based_ape_entity_2020_embedding_store"))
      .mapValues(embedding => SimClustersEmbedding(embedding.toThrift, truncate = 50).toThrift)
      .composeKeyMapping[TopicId] { topicId =>
        SimClustersEmbeddingId(
          LogFavBasedKgoApeTopic,
          Model20m145k2020,
          InternalId.TopicId(topicId))
      }

    buildTopicIdMemCacheStore(rawStore, LogFavBasedKgoApeTopic, Model20m145k2020)
  }

  private def buildTopicIdMemCacheStore(
    rawStore: ReadableStore[TopicId, ThriftSimClustersEmbedding],
    embeddingType: EmbeddingType,
    modelVersion: ModelVersion
  ): ReadableStore[SimClustersEmbeddingId, SimClustersEmbedding] = {
    val observedStore: ObservedReadableStore[TopicId, ThriftSimClustersEmbedding] =
      ObservedReadableStore(
        store = rawStore
      )(stats.scope(embeddingType.name).scope(modelVersion.name))

    val storeWithKeyMapping = observedStore.composeKeyMapping[SimClustersEmbeddingId] {
      case SimClustersEmbeddingId(_, _, InternalId.TopicId(topicId)) =>
        topicId
    }

    MemCacheConfig.buildMemCacheStoreForSimClustersEmbedding(
      storeWithKeyMapping,
      cacheClient,
      embeddingType,
      modelVersion,
      stats
    )
  }

  private def buildLocaleEntityIdMemCacheStore(
    rawStore: ReadableStore[LocaleEntityId, ThriftSimClustersEmbedding],
    embeddingType: EmbeddingType,
    modelVersion: ModelVersion
  ): ReadableStore[SimClustersEmbeddingId, SimClustersEmbedding] = {
    val observedStore: ObservedReadableStore[LocaleEntityId, ThriftSimClustersEmbedding] =
      ObservedReadableStore(
        store = rawStore
      )(stats.scope(embeddingType.name).scope(modelVersion.name))

    val storeWithKeyMapping = observedStore.composeKeyMapping[SimClustersEmbeddingId] {
      case SimClustersEmbeddingId(_, _, InternalId.LocaleEntityId(localeEntityId)) =>
        localeEntityId
    }

    MemCacheConfig.buildMemCacheStoreForSimClustersEmbedding(
      storeWithKeyMapping,
      cacheClient,
      embeddingType,
      modelVersion,
      stats
    )
  }

  private val underlyingStores: Map[
    (EmbeddingType, ModelVersion),
    ReadableStore[SimClustersEmbeddingId, SimClustersEmbedding]
  ] = Map(
    // Topic Embeddings
    (FavTfgTopic, Model20m145k2020) -> favBasedTfgTopicEmbedding20m145k2020Store,
    (LogFavBasedKgoApeTopic, Model20m145k2020) -> logFavBasedApeEntity20M145K2020EmbeddingStore,
  )

  val topicSimClustersEmbeddingStore: ReadableStore[
    SimClustersEmbeddingId,
    SimClustersEmbedding
  ] = {
    SimClustersEmbeddingStore.buildWithDecider(
      underlyingStores = underlyingStores,
      decider = rmsDecider.decider,
      statsReceiver = stats
    )
  }

}

@ -0,0 +1,141 @@
package com.twitter.representation_manager.store

import com.twitter.finagle.memcached.Client
import com.twitter.finagle.stats.StatsReceiver
import com.twitter.hermit.store.common.ObservedReadableStore
import com.twitter.representation_manager.common.MemCacheConfig
import com.twitter.representation_manager.common.RepresentationManagerDecider
import com.twitter.simclusters_v2.common.SimClustersEmbedding
import com.twitter.simclusters_v2.common.TweetId
import com.twitter.simclusters_v2.stores.SimClustersEmbeddingStore
import com.twitter.simclusters_v2.summingbird.stores.PersistentTweetEmbeddingStore
import com.twitter.simclusters_v2.thriftscala.EmbeddingType
import com.twitter.simclusters_v2.thriftscala.EmbeddingType._
import com.twitter.simclusters_v2.thriftscala.InternalId
import com.twitter.simclusters_v2.thriftscala.ModelVersion
import com.twitter.simclusters_v2.thriftscala.ModelVersion._
import com.twitter.simclusters_v2.thriftscala.SimClustersEmbeddingId
import com.twitter.simclusters_v2.thriftscala.{SimClustersEmbedding => ThriftSimClustersEmbedding}
import com.twitter.storage.client.manhattan.kv.ManhattanKVClientMtlsParams
import com.twitter.storehaus.ReadableStore
import javax.inject.Inject

class TweetSimClustersEmbeddingStore @Inject() (
  cacheClient: Client,
  globalStats: StatsReceiver,
  mhMtlsParams: ManhattanKVClientMtlsParams,
  rmsDecider: RepresentationManagerDecider) {

  private val stats = globalStats.scope(this.getClass.getSimpleName)

  val logFavBasedLongestL2Tweet20M145KUpdatedEmbeddingStore: ReadableStore[
    SimClustersEmbeddingId,
    SimClustersEmbedding
  ] = {
    val rawStore =
      PersistentTweetEmbeddingStore
        .longestL2NormTweetEmbeddingStoreManhattan(
          mhMtlsParams,
          PersistentTweetEmbeddingStore.LogFavBased20m145kUpdatedDataset,
          stats
        ).mapValues(_.toThrift)

    buildMemCacheStore(rawStore, LogFavLongestL2EmbeddingTweet, Model20m145kUpdated)
  }

  val logFavBasedLongestL2Tweet20M145K2020EmbeddingStore: ReadableStore[
    SimClustersEmbeddingId,
    SimClustersEmbedding
  ] = {
    val rawStore =
      PersistentTweetEmbeddingStore
        .longestL2NormTweetEmbeddingStoreManhattan(
          mhMtlsParams,
          PersistentTweetEmbeddingStore.LogFavBased20m145k2020Dataset,
          stats
        ).mapValues(_.toThrift)

    buildMemCacheStore(rawStore, LogFavLongestL2EmbeddingTweet, Model20m145k2020)
  }

  val logFavBased20M145KUpdatedTweetEmbeddingStore: ReadableStore[
    SimClustersEmbeddingId,
    SimClustersEmbedding
  ] = {
    val rawStore =
      PersistentTweetEmbeddingStore
        .mostRecentTweetEmbeddingStoreManhattan(
          mhMtlsParams,
          PersistentTweetEmbeddingStore.LogFavBased20m145kUpdatedDataset,
          stats
        ).mapValues(_.toThrift)

    buildMemCacheStore(rawStore, LogFavBasedTweet, Model20m145kUpdated)
  }

  val logFavBased20M145K2020TweetEmbeddingStore: ReadableStore[
    SimClustersEmbeddingId,
    SimClustersEmbedding
  ] = {
    val rawStore =
      PersistentTweetEmbeddingStore
        .mostRecentTweetEmbeddingStoreManhattan(
          mhMtlsParams,
          PersistentTweetEmbeddingStore.LogFavBased20m145k2020Dataset,
          stats
        ).mapValues(_.toThrift)

    buildMemCacheStore(rawStore, LogFavBasedTweet, Model20m145k2020)
  }

  private def buildMemCacheStore(
    rawStore: ReadableStore[TweetId, ThriftSimClustersEmbedding],
    embeddingType: EmbeddingType,
    modelVersion: ModelVersion
  ): ReadableStore[SimClustersEmbeddingId, SimClustersEmbedding] = {
    val observedStore: ObservedReadableStore[TweetId, ThriftSimClustersEmbedding] =
      ObservedReadableStore(
        store = rawStore
      )(stats.scope(embeddingType.name).scope(modelVersion.name))

    val storeWithKeyMapping = observedStore.composeKeyMapping[SimClustersEmbeddingId] {
      case SimClustersEmbeddingId(_, _, InternalId.TweetId(tweetId)) =>
        tweetId
    }

    MemCacheConfig.buildMemCacheStoreForSimClustersEmbedding(
      storeWithKeyMapping,
      cacheClient,
      embeddingType,
      modelVersion,
      stats
    )
  }

  private val underlyingStores: Map[
    (EmbeddingType, ModelVersion),
    ReadableStore[SimClustersEmbeddingId, SimClustersEmbedding]
  ] = Map(
    // Tweet Embeddings
    (LogFavBasedTweet, Model20m145kUpdated) -> logFavBased20M145KUpdatedTweetEmbeddingStore,
    (LogFavBasedTweet, Model20m145k2020) -> logFavBased20M145K2020TweetEmbeddingStore,
    (
      LogFavLongestL2EmbeddingTweet,
      Model20m145kUpdated) -> logFavBasedLongestL2Tweet20M145KUpdatedEmbeddingStore,
    (
      LogFavLongestL2EmbeddingTweet,
      Model20m145k2020) -> logFavBasedLongestL2Tweet20M145K2020EmbeddingStore,
  )

  val tweetSimClustersEmbeddingStore: ReadableStore[
    SimClustersEmbeddingId,
    SimClustersEmbedding
  ] = {
    SimClustersEmbeddingStore.buildWithDecider(
      underlyingStores = underlyingStores,
      decider = rmsDecider.decider,
      statsReceiver = stats
    )
  }

}

@ -0,0 +1,602 @@
package com.twitter.representation_manager.store

import com.twitter.contentrecommender.twistly
import com.twitter.finagle.memcached.Client
import com.twitter.finagle.stats.StatsReceiver
import com.twitter.frigate.common.store.strato.StratoFetchableStore
import com.twitter.hermit.store.common.ObservedReadableStore
import com.twitter.representation_manager.common.MemCacheConfig
import com.twitter.representation_manager.common.RepresentationManagerDecider
import com.twitter.simclusters_v2.common.ModelVersions
import com.twitter.simclusters_v2.common.SimClustersEmbedding
import com.twitter.simclusters_v2.stores.SimClustersEmbeddingStore
import com.twitter.simclusters_v2.summingbird.stores.ProducerClusterEmbeddingReadableStores
import com.twitter.simclusters_v2.summingbird.stores.UserInterestedInReadableStore
import com.twitter.simclusters_v2.summingbird.stores.UserInterestedInReadableStore.getStore
import com.twitter.simclusters_v2.summingbird.stores.UserInterestedInReadableStore.modelVersionToDatasetMap
import com.twitter.simclusters_v2.summingbird.stores.UserInterestedInReadableStore.knownModelVersions
import com.twitter.simclusters_v2.summingbird.stores.UserInterestedInReadableStore.toSimClustersEmbedding
import com.twitter.simclusters_v2.thriftscala.ClustersUserIsInterestedIn
import com.twitter.simclusters_v2.thriftscala.EmbeddingType
import com.twitter.simclusters_v2.thriftscala.EmbeddingType._
import com.twitter.simclusters_v2.thriftscala.InternalId
import com.twitter.simclusters_v2.thriftscala.ModelVersion
import com.twitter.simclusters_v2.thriftscala.ModelVersion._
import com.twitter.simclusters_v2.thriftscala.SimClustersEmbeddingId
import com.twitter.simclusters_v2.thriftscala.{SimClustersEmbedding => ThriftSimClustersEmbedding}
import com.twitter.storage.client.manhattan.kv.ManhattanKVClientMtlsParams
import com.twitter.storehaus.ReadableStore
import com.twitter.storehaus_internal.manhattan.Apollo
import com.twitter.storehaus_internal.manhattan.ManhattanCluster
import com.twitter.strato.client.{Client => StratoClient}
import com.twitter.strato.thrift.ScroogeConvImplicits._
import com.twitter.tweetypie.util.UserId
import com.twitter.util.Future
import javax.inject.Inject

class UserSimClustersEmbeddingStore @Inject() (
  stratoClient: StratoClient,
  cacheClient: Client,
  globalStats: StatsReceiver,
  mhMtlsParams: ManhattanKVClientMtlsParams,
  rmsDecider: RepresentationManagerDecider) {

  private val stats = globalStats.scope(this.getClass.getSimpleName)

  private val favBasedProducer20M145KUpdatedEmbeddingStore: ReadableStore[
    SimClustersEmbeddingId,
    SimClustersEmbedding
  ] = {
    val rawStore = ProducerClusterEmbeddingReadableStores
      .getProducerTopKSimClustersEmbeddingsStore(
        mhMtlsParams
      ).mapValues { topSimClustersWithScore =>
        ThriftSimClustersEmbedding(topSimClustersWithScore.topClusters)
      }.composeKeyMapping[SimClustersEmbeddingId] {
        case SimClustersEmbeddingId(_, _, InternalId.UserId(userId)) =>
          userId
      }

    buildMemCacheStore(rawStore, FavBasedProducer, Model20m145kUpdated)
  }

  private val favBasedProducer20M145K2020EmbeddingStore: ReadableStore[
    SimClustersEmbeddingId,
    SimClustersEmbedding
  ] = {
    val rawStore = ProducerClusterEmbeddingReadableStores
      .getProducerTopKSimClusters2020EmbeddingsStore(
        mhMtlsParams
      ).mapValues { topSimClustersWithScore =>
        ThriftSimClustersEmbedding(topSimClustersWithScore.topClusters)
      }.composeKeyMapping[SimClustersEmbeddingId] {
        case SimClustersEmbeddingId(_, _, InternalId.UserId(userId)) =>
          userId
      }

    buildMemCacheStore(rawStore, FavBasedProducer, Model20m145k2020)
  }

  private val followBasedProducer20M145K2020EmbeddingStore: ReadableStore[
    SimClustersEmbeddingId,
    SimClustersEmbedding
  ] = {
    val rawStore = ProducerClusterEmbeddingReadableStores
      .getProducerTopKSimClustersEmbeddingsByFollowStore(
        mhMtlsParams
      ).mapValues { topSimClustersWithScore =>
        ThriftSimClustersEmbedding(topSimClustersWithScore.topClusters)
      }.composeKeyMapping[SimClustersEmbeddingId] {
        case SimClustersEmbeddingId(_, _, InternalId.UserId(userId)) =>
          userId
      }

    buildMemCacheStore(rawStore, FollowBasedProducer, Model20m145k2020)
  }

  private val logFavBasedApe20M145K2020EmbeddingStore: ReadableStore[
    SimClustersEmbeddingId,
    SimClustersEmbedding
  ] = {
    val rawStore = StratoFetchableStore
      .withUnitView[SimClustersEmbeddingId, ThriftSimClustersEmbedding](
        stratoClient,
        "recommendations/simclusters_v2/embeddings/logFavBasedAPE20M145K2020")
      .mapValues(embedding => SimClustersEmbedding(embedding, truncate = 50).toThrift)

    buildMemCacheStore(rawStore, AggregatableLogFavBasedProducer, Model20m145k2020)
  }

  private val rawRelaxedLogFavBasedApe20M145K2020EmbeddingStore: ReadableStore[
    SimClustersEmbeddingId,
    ThriftSimClustersEmbedding
  ] = {
    StratoFetchableStore
      .withUnitView[SimClustersEmbeddingId, ThriftSimClustersEmbedding](
        stratoClient,
        "recommendations/simclusters_v2/embeddings/logFavBasedAPERelaxedFavEngagementThreshold20M145K2020")
      .mapValues(embedding => SimClustersEmbedding(embedding, truncate = 50).toThrift)
  }

  private val relaxedLogFavBasedApe20M145K2020EmbeddingStore: ReadableStore[
    SimClustersEmbeddingId,
    SimClustersEmbedding
  ] = {
    buildMemCacheStore(
      rawRelaxedLogFavBasedApe20M145K2020EmbeddingStore,
      RelaxedAggregatableLogFavBasedProducer,
      Model20m145k2020)
  }

  private val relaxedLogFavBasedApe20m145kUpdatedEmbeddingStore: ReadableStore[
    SimClustersEmbeddingId,
    SimClustersEmbedding
  ] = {
    val rawStore = rawRelaxedLogFavBasedApe20M145K2020EmbeddingStore
      .composeKeyMapping[SimClustersEmbeddingId] {
        case SimClustersEmbeddingId(
              RelaxedAggregatableLogFavBasedProducer,
              Model20m145kUpdated,
              internalId) =>
          SimClustersEmbeddingId(
            RelaxedAggregatableLogFavBasedProducer,
            Model20m145k2020,
            internalId)
      }

    buildMemCacheStore(rawStore, RelaxedAggregatableLogFavBasedProducer, Model20m145kUpdated)
  }

  private val logFavBasedInterestedInFromAPE20M145K2020Store: ReadableStore[
    SimClustersEmbeddingId,
    SimClustersEmbedding
  ] = {
    buildUserInterestedInStore(
      UserInterestedInReadableStore.defaultIIAPESimClustersEmbeddingStoreWithMtls,
      LogFavBasedUserInterestedInFromAPE,
      Model20m145k2020)
  }

  private val followBasedInterestedInFromAPE20M145K2020Store: ReadableStore[
    SimClustersEmbeddingId,
    SimClustersEmbedding
  ] = {
    buildUserInterestedInStore(
      UserInterestedInReadableStore.defaultIIAPESimClustersEmbeddingStoreWithMtls,
      FollowBasedUserInterestedInFromAPE,
      Model20m145k2020)
  }

  private val favBasedUserInterestedIn20M145KUpdatedStore: ReadableStore[
    SimClustersEmbeddingId,
    SimClustersEmbedding
  ] = {
    buildUserInterestedInStore(
      UserInterestedInReadableStore.defaultSimClustersEmbeddingStoreWithMtls,
      FavBasedUserInterestedIn,
      Model20m145kUpdated)
  }

  private val favBasedUserInterestedIn20M145K2020Store: ReadableStore[
    SimClustersEmbeddingId,
    SimClustersEmbedding
  ] = {
    buildUserInterestedInStore(
      UserInterestedInReadableStore.defaultSimClustersEmbeddingStoreWithMtls,
      FavBasedUserInterestedIn,
      Model20m145k2020)
  }

  private val followBasedUserInterestedIn20M145K2020Store: ReadableStore[
    SimClustersEmbeddingId,
    SimClustersEmbedding
  ] = {
    buildUserInterestedInStore(
      UserInterestedInReadableStore.defaultSimClustersEmbeddingStoreWithMtls,
      FollowBasedUserInterestedIn,
      Model20m145k2020)
  }

  private val logFavBasedUserInterestedIn20M145K2020Store: ReadableStore[
    SimClustersEmbeddingId,
    SimClustersEmbedding
  ] = {
    buildUserInterestedInStore(
      UserInterestedInReadableStore.defaultSimClustersEmbeddingStoreWithMtls,
      LogFavBasedUserInterestedIn,
      Model20m145k2020)
  }

  private val favBasedUserInterestedInFromPE20M145KUpdatedStore: ReadableStore[
    SimClustersEmbeddingId,
    SimClustersEmbedding
  ] = {
    buildUserInterestedInStore(
      UserInterestedInReadableStore.defaultIIPESimClustersEmbeddingStoreWithMtls,
      FavBasedUserInterestedInFromPE,
      Model20m145kUpdated)
  }

  private val twistlyUserInterestedInStore: ReadableStore[
    SimClustersEmbeddingId,
    ThriftSimClustersEmbedding
  ] = {
    val interestedIn20M145KUpdatedStore = {
      UserInterestedInReadableStore.defaultStoreWithMtls(
        mhMtlsParams,
        modelVersion = ModelVersions.Model20M145KUpdated
      )
    }
    val interestedIn20M145K2020Store = {
      UserInterestedInReadableStore.defaultStoreWithMtls(
        mhMtlsParams,
        modelVersion = ModelVersions.Model20M145K2020
      )
    }
    val interestedInFromPE20M145KUpdatedStore = {
      UserInterestedInReadableStore.defaultIIPEStoreWithMtls(
        mhMtlsParams,
        modelVersion = ModelVersions.Model20M145KUpdated)
    }
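    // The two anonymous ReadableStores below route a (UserId, ModelVersion) lookup to
    // the matching dataset-backed store, and return Future.None for model versions
    // they do not handle.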
    val simClustersInterestedInStore: ReadableStore[
      (UserId, ModelVersion),
      ClustersUserIsInterestedIn
    ] = {
      new ReadableStore[(UserId, ModelVersion), ClustersUserIsInterestedIn] {
        override def get(k: (UserId, ModelVersion)): Future[Option[ClustersUserIsInterestedIn]] = {
          k match {
            case (userId, Model20m145kUpdated) =>
              interestedIn20M145KUpdatedStore.get(userId)
            case (userId, Model20m145k2020) =>
              interestedIn20M145K2020Store.get(userId)
            case _ =>
              Future.None
          }
        }
      }
    }
    val simClustersInterestedInFromProducerEmbeddingsStore: ReadableStore[
      (UserId, ModelVersion),
      ClustersUserIsInterestedIn
    ] = {
      new ReadableStore[(UserId, ModelVersion), ClustersUserIsInterestedIn] {
        override def get(k: (UserId, ModelVersion)): Future[Option[ClustersUserIsInterestedIn]] = {
          k match {
            case (userId, ModelVersion.Model20m145kUpdated) =>
              interestedInFromPE20M145KUpdatedStore.get(userId)
            case _ =>
              Future.None
          }
        }
      }
    }
    new twistly.interestedin.EmbeddingStore(
      interestedInStore = simClustersInterestedInStore,
      interestedInFromProducerEmbeddingStore = simClustersInterestedInFromProducerEmbeddingsStore,
      statsReceiver = stats
    ).mapValues(_.toThrift)
  }

  private val userNextInterestedIn20m145k2020Store: ReadableStore[
    SimClustersEmbeddingId,
    SimClustersEmbedding
  ] = {
    buildUserInterestedInStore(
      UserInterestedInReadableStore.defaultNextInterestedInStoreWithMtls,
      UserNextInterestedIn,
      Model20m145k2020)
  }

  private val filteredUserInterestedIn20m145kUpdatedStore: ReadableStore[
    SimClustersEmbeddingId,
    SimClustersEmbedding
  ] = {
    buildMemCacheStore(twistlyUserInterestedInStore, FilteredUserInterestedIn, Model20m145kUpdated)
  }

  private val filteredUserInterestedIn20m145k2020Store: ReadableStore[
    SimClustersEmbeddingId,
    SimClustersEmbedding
  ] = {
    buildMemCacheStore(twistlyUserInterestedInStore, FilteredUserInterestedIn, Model20m145k2020)
  }

  private val filteredUserInterestedInFromPE20m145kUpdatedStore: ReadableStore[
    SimClustersEmbeddingId,
    SimClustersEmbedding
  ] = {
    buildMemCacheStore(
      twistlyUserInterestedInStore,
      FilteredUserInterestedInFromPE,
      Model20m145kUpdated)
  }

  private val unfilteredUserInterestedIn20m145kUpdatedStore: ReadableStore[
    SimClustersEmbeddingId,
    SimClustersEmbedding
  ] = {
    buildMemCacheStore(
      twistlyUserInterestedInStore,
      UnfilteredUserInterestedIn,
      Model20m145kUpdated)
  }

  private val unfilteredUserInterestedIn20m145k2020Store: ReadableStore[
    SimClustersEmbeddingId,
    SimClustersEmbedding
  ] = {
    buildMemCacheStore(twistlyUserInterestedInStore, UnfilteredUserInterestedIn, Model20m145k2020)
  }

  // [Experimental] User InterestedIn, generated by aggregating IIAPE embedding from AddressBook

  private val logFavBasedInterestedMaxpoolingAddressBookFromIIAPE20M145K2020Store: ReadableStore[
    SimClustersEmbeddingId,
    SimClustersEmbedding
  ] = {
    val datasetName = "addressbook_sims_embedding_iiape_maxpooling"
    val appId = "wtf_embedding_apollo"
    buildUserInterestedInStoreGeneric(
      simClustersEmbeddingStoreWithMtls,
      LogFavBasedUserInterestedMaxpoolingAddressBookFromIIAPE,
      Model20m145k2020,
      datasetName = datasetName,
      appId = appId,
      manhattanCluster = Apollo
    )
  }

  private val logFavBasedInterestedAverageAddressBookFromIIAPE20M145K2020Store: ReadableStore[
    SimClustersEmbeddingId,
    SimClustersEmbedding
  ] = {
    val datasetName = "addressbook_sims_embedding_iiape_average"
    val appId = "wtf_embedding_apollo"
    buildUserInterestedInStoreGeneric(
      simClustersEmbeddingStoreWithMtls,
      LogFavBasedUserInterestedAverageAddressBookFromIIAPE,
      Model20m145k2020,
      datasetName = datasetName,
      appId = appId,
      manhattanCluster = Apollo
    )
  }

  private val logFavBasedUserInterestedBooktypeMaxpoolingAddressBookFromIIAPE20M145K2020Store: ReadableStore[
    SimClustersEmbeddingId,
    SimClustersEmbedding
  ] = {
    val datasetName = "addressbook_sims_embedding_iiape_booktype_maxpooling"
    val appId = "wtf_embedding_apollo"
    buildUserInterestedInStoreGeneric(
      simClustersEmbeddingStoreWithMtls,
      LogFavBasedUserInterestedBooktypeMaxpoolingAddressBookFromIIAPE,
      Model20m145k2020,
      datasetName = datasetName,
      appId = appId,
      manhattanCluster = Apollo
    )
  }

  private val logFavBasedUserInterestedLargestDimMaxpoolingAddressBookFromIIAPE20M145K2020Store: ReadableStore[
    SimClustersEmbeddingId,
    SimClustersEmbedding
  ] = {
    val datasetName = "addressbook_sims_embedding_iiape_largestdim_maxpooling"
    val appId = "wtf_embedding_apollo"
    buildUserInterestedInStoreGeneric(
      simClustersEmbeddingStoreWithMtls,
      LogFavBasedUserInterestedLargestDimMaxpoolingAddressBookFromIIAPE,
      Model20m145k2020,
      datasetName = datasetName,
      appId = appId,
      manhattanCluster = Apollo
    )
  }

  private val logFavBasedUserInterestedLouvainMaxpoolingAddressBookFromIIAPE20M145K2020Store: ReadableStore[
    SimClustersEmbeddingId,
    SimClustersEmbedding
  ] = {
    val datasetName = "addressbook_sims_embedding_iiape_louvain_maxpooling"
    val appId = "wtf_embedding_apollo"
    buildUserInterestedInStoreGeneric(
      simClustersEmbeddingStoreWithMtls,
      LogFavBasedUserInterestedLouvainMaxpoolingAddressBookFromIIAPE,
      Model20m145k2020,
      datasetName = datasetName,
      appId = appId,
      manhattanCluster = Apollo
    )
  }

  private val logFavBasedUserInterestedConnectedMaxpoolingAddressBookFromIIAPE20M145K2020Store: ReadableStore[
    SimClustersEmbeddingId,
    SimClustersEmbedding
  ] = {
    val datasetName = "addressbook_sims_embedding_iiape_connected_maxpooling"
    val appId = "wtf_embedding_apollo"
    buildUserInterestedInStoreGeneric(
      simClustersEmbeddingStoreWithMtls,
      LogFavBasedUserInterestedConnectedMaxpoolingAddressBookFromIIAPE,
      Model20m145k2020,
      datasetName = datasetName,
      appId = appId,
      manhattanCluster = Apollo
    )
  }

  /**
   * Helper to build a readable store for a set of UserInterestedIn embeddings from:
   *   1. a storeFunc from UserInterestedInReadableStore,
   *   2. an EmbeddingType,
   *   3. a ModelVersion, and
   *   4. the shared MemCacheConfig.
   */
  private def buildUserInterestedInStore(
    storeFunc: (ManhattanKVClientMtlsParams, EmbeddingType, ModelVersion) => ReadableStore[
      SimClustersEmbeddingId,
      SimClustersEmbedding
    ],
    embeddingType: EmbeddingType,
    modelVersion: ModelVersion
  ): ReadableStore[
    SimClustersEmbeddingId,
    SimClustersEmbedding
  ] = {
    val rawStore = storeFunc(mhMtlsParams, embeddingType, modelVersion)
      .mapValues(_.toThrift)
    val observedStore = ObservedReadableStore(
      store = rawStore
    )(stats.scope(embeddingType.name).scope(modelVersion.name))

    MemCacheConfig.buildMemCacheStoreForSimClustersEmbedding(
      observedStore,
      cacheClient,
      embeddingType,
      modelVersion,
      stats
    )
  }

  private def buildUserInterestedInStoreGeneric(
    storeFunc: (
      ManhattanKVClientMtlsParams,
      EmbeddingType,
      ModelVersion,
      String,
      String,
      ManhattanCluster
    ) => ReadableStore[
      SimClustersEmbeddingId,
      SimClustersEmbedding
    ],
    embeddingType: EmbeddingType,
    modelVersion: ModelVersion,
    datasetName: String,
    appId: String,
    manhattanCluster: ManhattanCluster
  ): ReadableStore[
    SimClustersEmbeddingId,
    SimClustersEmbedding
  ] = {
    val rawStore =
      storeFunc(mhMtlsParams, embeddingType, modelVersion, datasetName, appId, manhattanCluster)
        .mapValues(_.toThrift)
    val observedStore = ObservedReadableStore(
      store = rawStore
    )(stats.scope(embeddingType.name).scope(modelVersion.name))

    MemCacheConfig.buildMemCacheStoreForSimClustersEmbedding(
      observedStore,
      cacheClient,
      embeddingType,
      modelVersion,
      stats
    )
  }

  private def simClustersEmbeddingStoreWithMtls(
    mhMtlsParams: ManhattanKVClientMtlsParams,
    embeddingType: EmbeddingType,
    modelVersion: ModelVersion,
    datasetName: String,
    appId: String,
    manhattanCluster: ManhattanCluster
  ): ReadableStore[SimClustersEmbeddingId, SimClustersEmbedding] = {

    if (!modelVersionToDatasetMap.contains(ModelVersions.toKnownForModelVersion(modelVersion))) {
      throw new IllegalArgumentException(
        "Unknown model version: " + modelVersion + ". Known model versions: " + knownModelVersions)
    }
    getStore(appId, mhMtlsParams, datasetName, manhattanCluster)
      .composeKeyMapping[SimClustersEmbeddingId] {
        case SimClustersEmbeddingId(theEmbeddingType, theModelVersion, InternalId.UserId(userId))
            if theEmbeddingType == embeddingType && theModelVersion == modelVersion =>
          userId
      }.mapValues(toSimClustersEmbedding(_, embeddingType))
  }

  private def buildMemCacheStore(
    rawStore: ReadableStore[SimClustersEmbeddingId, ThriftSimClustersEmbedding],
    embeddingType: EmbeddingType,
    modelVersion: ModelVersion
  ): ReadableStore[SimClustersEmbeddingId, SimClustersEmbedding] = {
    val observedStore = ObservedReadableStore(
      store = rawStore
    )(stats.scope(embeddingType.name).scope(modelVersion.name))

    MemCacheConfig.buildMemCacheStoreForSimClustersEmbedding(
      observedStore,
      cacheClient,
      embeddingType,
      modelVersion,
      stats
    )
  }

  private val underlyingStores: Map[
    (EmbeddingType, ModelVersion),
    ReadableStore[SimClustersEmbeddingId, SimClustersEmbedding]
  ] = Map(
    // KnownFor Embeddings
    (FavBasedProducer, Model20m145kUpdated) -> favBasedProducer20M145KUpdatedEmbeddingStore,
    (FavBasedProducer, Model20m145k2020) -> favBasedProducer20M145K2020EmbeddingStore,
    (FollowBasedProducer, Model20m145k2020) -> followBasedProducer20M145K2020EmbeddingStore,
    (AggregatableLogFavBasedProducer, Model20m145k2020) -> logFavBasedApe20M145K2020EmbeddingStore,
    (
      RelaxedAggregatableLogFavBasedProducer,
      Model20m145kUpdated) -> relaxedLogFavBasedApe20m145kUpdatedEmbeddingStore,
    (
      RelaxedAggregatableLogFavBasedProducer,
      Model20m145k2020) -> relaxedLogFavBasedApe20M145K2020EmbeddingStore,
    // InterestedIn Embeddings
    (
      LogFavBasedUserInterestedInFromAPE,
      Model20m145k2020) -> logFavBasedInterestedInFromAPE20M145K2020Store,
    (
      FollowBasedUserInterestedInFromAPE,
      Model20m145k2020) -> followBasedInterestedInFromAPE20M145K2020Store,
    (FavBasedUserInterestedIn, Model20m145kUpdated) -> favBasedUserInterestedIn20M145KUpdatedStore,
    (FavBasedUserInterestedIn, Model20m145k2020) -> favBasedUserInterestedIn20M145K2020Store,
    (FollowBasedUserInterestedIn, Model20m145k2020) -> followBasedUserInterestedIn20M145K2020Store,
    (LogFavBasedUserInterestedIn, Model20m145k2020) -> logFavBasedUserInterestedIn20M145K2020Store,
    (
      FavBasedUserInterestedInFromPE,
      Model20m145kUpdated) -> favBasedUserInterestedInFromPE20M145KUpdatedStore,
    (FilteredUserInterestedIn, Model20m145kUpdated) -> filteredUserInterestedIn20m145kUpdatedStore,
    (FilteredUserInterestedIn, Model20m145k2020) -> filteredUserInterestedIn20m145k2020Store,
    (
      FilteredUserInterestedInFromPE,
      Model20m145kUpdated) -> filteredUserInterestedInFromPE20m145kUpdatedStore,
    (
      UnfilteredUserInterestedIn,
      Model20m145kUpdated) -> unfilteredUserInterestedIn20m145kUpdatedStore,
    (UnfilteredUserInterestedIn, Model20m145k2020) -> unfilteredUserInterestedIn20m145k2020Store,
    (UserNextInterestedIn, Model20m145k2020) -> userNextInterestedIn20m145k2020Store,
    (
      LogFavBasedUserInterestedMaxpoolingAddressBookFromIIAPE,
      Model20m145k2020) -> logFavBasedInterestedMaxpoolingAddressBookFromIIAPE20M145K2020Store,
    (
      LogFavBasedUserInterestedAverageAddressBookFromIIAPE,
      Model20m145k2020) -> logFavBasedInterestedAverageAddressBookFromIIAPE20M145K2020Store,
    (
      LogFavBasedUserInterestedBooktypeMaxpoolingAddressBookFromIIAPE,
      Model20m145k2020) -> logFavBasedUserInterestedBooktypeMaxpoolingAddressBookFromIIAPE20M145K2020Store,
    (
      LogFavBasedUserInterestedLargestDimMaxpoolingAddressBookFromIIAPE,
      Model20m145k2020) -> logFavBasedUserInterestedLargestDimMaxpoolingAddressBookFromIIAPE20M145K2020Store,
    (
      LogFavBasedUserInterestedLouvainMaxpoolingAddressBookFromIIAPE,
      Model20m145k2020) -> logFavBasedUserInterestedLouvainMaxpoolingAddressBookFromIIAPE20M145K2020Store,
    (
      LogFavBasedUserInterestedConnectedMaxpoolingAddressBookFromIIAPE,
      Model20m145k2020) -> logFavBasedUserInterestedConnectedMaxpoolingAddressBookFromIIAPE20M145K2020Store,
  )

  val userSimClustersEmbeddingStore: ReadableStore[
    SimClustersEmbeddingId,
    SimClustersEmbedding
  ] = {
    SimClustersEmbeddingStore.buildWithDecider(
      underlyingStores = underlyingStores,
      decider = rmsDecider.decider,
      statsReceiver = stats
    )
  }

}

18
representation-manager/server/src/main/thrift/BUILD
Normal file
@ -0,0 +1,18 @@
create_thrift_libraries(
    base_name = "thrift",
    sources = [
        "com/twitter/representation_manager/service.thrift",
    ],
    platform = "java8",
    tags = [
        "bazel-compatible",
    ],
    dependency_roots = [
        "src/thrift/com/twitter/simclusters_v2:simclusters_v2-thrift",
    ],
    generate_languages = [
        "java",
        "scala",
        "strato",
    ],
)

@ -0,0 +1,14 @@
namespace java com.twitter.representation_manager.thriftjava
#@namespace scala com.twitter.representation_manager.thriftscala
#@namespace strato com.twitter.representation_manager

include "com/twitter/simclusters_v2/online_store.thrift"
include "com/twitter/simclusters_v2/identifier.thrift"

/**
 * A uniform column view for all kinds of SimClusters based embeddings.
 **/
struct SimClustersEmbeddingView {
    1: required identifier.EmbeddingType embeddingType
    2: required online_store.ModelVersion modelVersion
}(persisted = 'false', hasPersonalData = 'false')

1
representation-scorer/BUILD.bazel
Normal file
@ -0,0 +1 @@
# This prevents SQ query from grabbing //:all since it traverses up once to find a BUILD

5
representation-scorer/README.md
Normal file
@ -0,0 +1,5 @@
# Representation Scorer #

**Representation Scorer** (RSX) serves as a centralized scoring system, offering SimClusters or other embedding-based scoring solutions as machine learning features.

The Representation Scorer acquires user behavior data from the User Signal Service (USS) and extracts embeddings from the Representation Manager (RMS). It then calculates both pairwise and listwise features. These features are used at various stages, including candidate retrieval and ranking.
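
For intuition, a pairwise score here is typically an embedding similarity. Below is a minimal sketch (illustrative only, not the production scorer) of cosine similarity over sparse SimClusters embeddings represented as clusterId -> weight maps:

```scala
// Cosine similarity of two sparse vectors; returns 0.0 when either vector is empty.
def cosineSimilarity(a: Map[Int, Double], b: Map[Int, Double]): Double = {
  val dot = a.iterator.map { case (id, w) => w * b.getOrElse(id, 0.0) }.sum
  def norm(v: Map[Int, Double]): Double = math.sqrt(v.values.map(w => w * w).sum)
  val denom = norm(a) * norm(b)
  if (denom == 0.0) 0.0 else dot / denom
}
```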

8
representation-scorer/bin/canary-check.sh
Executable file
@ -0,0 +1,8 @@
#!/bin/bash

export CANARY_CHECK_ROLE="representation-scorer"
export CANARY_CHECK_NAME="representation-scorer"
export CANARY_CHECK_INSTANCES="0-19"

python3 relevance-platform/tools/canary_check.py "$@"
4
representation-scorer/bin/deploy.sh
Executable file
@ -0,0 +1,4 @@
#!/usr/bin/env bash

JOB=representation-scorer bazel run --ui_event_filters=-info,-stdout,-stderr --noshow_progress \
  //relevance-platform/src/main/python/deploy -- "$@"
66
representation-scorer/bin/remote-debug-tunnel.sh
Executable file
@ -0,0 +1,66 @@
#!/bin/bash

set -o nounset
set -eu

DC="atla"
ROLE="$USER"
SERVICE="representation-scorer"
INSTANCE="0"
KEY="$DC/$ROLE/devel/$SERVICE/$INSTANCE"
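
# Example invocation (illustrative values, matching the positional args parsed below):
#   ./remote-debug-tunnel.sh atla $USER representation-scorer 0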

while test $# -gt 0; do
  case "$1" in
    -h|--help)
      echo "$0 Set up an ssh tunnel for $SERVICE remote debugging and disable aurora health checks"
      echo " "
      echo "See representation-scorer/README.md for details of how to use this script, and go/remote-debug for"
      echo "general information about remote debugging in Aurora"
      echo " "
      echo "Default instance if called with no args:"
      echo " $KEY"
      echo " "
      echo "Positional args:"
      echo " $0 [datacentre] [role] [service_name] [instance]"
      echo " "
      echo "Options:"
      echo " -h, --help show brief help"
      exit 0
      ;;
    *)
      break
      ;;
  esac
done

if [ -n "${1-}" ]; then
  DC="$1"
fi

if [ -n "${2-}" ]; then
  ROLE="$2"
fi

if [ -n "${3-}" ]; then
  SERVICE="$3"
fi

if [ -n "${4-}" ]; then
  INSTANCE="$4"
fi

KEY="$DC/$ROLE/devel/$SERVICE/$INSTANCE"
read -p "Set up remote debugger tunnel for $KEY? (y/n) " -r CONFIRM
if [[ ! $CONFIRM =~ ^[Yy]$ ]]; then
  echo "Exiting, tunnel not created"
  exit 1
fi

echo "Disabling health check and opening tunnel. Exit with control-c when you're finished"
CMD="aurora task ssh $KEY -c 'touch .healthchecksnooze' && aurora task ssh $KEY -L '5005:debug' --ssh-options '-N -S none -v '"

echo "Running $CMD"
eval "$CMD"
39
representation-scorer/docs/index.rst
Normal file
@ -0,0 +1,39 @@
Representation Scorer (RSX)
###########################

Overview
========

Representation Scorer (RSX) is a StratoFed service which serves scores for pairs of entities (User, Tweet, Topic...) based on some representation of those entities. For example, it serves User-Tweet scores based on the cosine similarity of the SimClusters embeddings for each. It aims to provide these scores with low latency and at high scale, to support applications such as scoring for ANN candidate generation and feature hydration via the feature store.


Current use cases
-----------------

RSX currently serves traffic for the following use cases:

- User-Tweet similarity scores for Home ranking, using SimClusters embedding dot product
- Topic-Tweet similarity scores for topical tweet candidate generation and topic social proof, using SimClusters embedding cosine similarity and CERTO scores
- Tweet-Tweet and User-Tweet similarity scores for ANN candidate generation, using SimClusters embedding cosine similarity
- (in development) User-Tweet similarity scores for Home ranking, based on various aggregations of similarities with recent faves, retweets and follows performed by the user (see the sketch below)
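
For intuition, those listwise aggregations reduce a set of pairwise scores into a few summary features. A minimal sketch (illustrative only, not the production implementation):

.. code-block:: scala

    // Reduce pairwise scores against a user's recent engagements into summary features.
    def listwiseFeatures(pairwiseScores: Seq[Double]): Map[String, Double] =
      if (pairwiseScores.isEmpty) Map.empty
      else
        Map(
          "mean" -> pairwiseScores.sum / pairwiseScores.size,
          "max" -> pairwiseScores.max,
          "min" -> pairwiseScores.min
        )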

Getting Started
===============

Fetching scores
---------------

Scores are served from the recommendations/representation_scorer/score column.

Using RSX for your application
------------------------------

RSX may be a good fit for your application if you need scores based on combinations of SimClusters embeddings for core nouns. We also plan to support other embeddings and scoring approaches in the future.

.. toctree::
   :maxdepth: 2
   :hidden:

   index
22
representation-scorer/server/BUILD
Normal file
@ -0,0 +1,22 @@
jvm_binary(
    name = "bin",
    basename = "representation-scorer",
    main = "com.twitter.representationscorer.RepresentationScorerFedServerMain",
    platform = "java8",
    tags = ["bazel-compatible"],
    dependencies = [
        "finatra/inject/inject-logback/src/main/scala",
        "loglens/loglens-logback/src/main/scala/com/twitter/loglens/logback",
        "representation-scorer/server/src/main/resources",
        "representation-scorer/server/src/main/scala/com/twitter/representationscorer",
        "twitter-server/logback-classic/src/main/scala",
    ],
)

# Aurora Workflows build phase convention requires a jvm_app named with ${project-name}-app
jvm_app(
    name = "representation-scorer-app",
    archive = "zip",
    binary = ":bin",
    tags = ["bazel-compatible"],
)
9
representation-scorer/server/src/main/resources/BUILD
Normal file
@ -0,0 +1,9 @@
resources(
    sources = [
        "*.xml",
        "*.yml",
        "com/twitter/slo/slo.json",
        "config/*.yml",
    ],
    tags = ["bazel-compatible"],
)
representation-scorer/server/src/main/resources/com/twitter/slo/slo.json
@ -0,0 +1,55 @@
{
  "servers": [
    {
      "name": "strato",
      "indicators": [
        {
          "id": "success_rate_3m",
          "indicator_type": "SuccessRateIndicator",
          "duration": 3,
          "duration_unit": "MINUTES"
        }, {
          "id": "latency_3m_p99",
          "indicator_type": "LatencyIndicator",
          "duration": 3,
          "duration_unit": "MINUTES",
          "percentile": 0.99
        }
      ],
      "objectives": [
        {
          "indicator": "success_rate_3m",
          "objective_type": "SuccessRateObjective",
          "operator": ">=",
          "threshold": 0.995
        },
        {
          "indicator": "latency_3m_p99",
          "objective_type": "LatencyObjective",
          "operator": "<=",
          "threshold": 50
        }
      ],
      "long_term_objectives": [
        {
          "id": "success_rate_28_days",
          "objective_type": "SuccessRateObjective",
          "operator": ">=",
          "threshold": 0.993,
          "duration": 28,
          "duration_unit": "DAYS"
        },
        {
          "id": "latency_p99_28_days",
          "objective_type": "LatencyObjective",
          "operator": "<=",
          "threshold": 60,
          "duration": 28,
          "duration_unit": "DAYS",
          "percentile": 0.99
        }
      ]
    }
  ],
  "@version": 1
}
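
The objectives above require a 3-minute success rate of at least 99.5% and a 3-minute p99 latency of at most 50ms, relaxed to 99.3% and 60ms over 28 days. As a minimal illustrative sketch of what a success-rate objective check amounts to (the types here are local stand-ins, not the SLO tooling's API):

// Illustrative stand-in, not the SLO library's API.
final case class SuccessRateObjective(threshold: Double) // e.g. 0.995

def meetsObjective(successes: Long, failures: Long, objective: SuccessRateObjective): Boolean = {
  val total = successes + failures
  // An empty window trivially satisfies the objective.
  total == 0 || successes.toDouble / total >= objective.threshold
}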
@ -0,0 +1,155 @@
enableLogFavBasedApeEntity20M145KUpdatedEmbeddingCachedStore:
  comment: "Enable to use the non-empty store for logFavBasedApeEntity20M145KUpdatedEmbeddingCachedStore (from 0% to 100%). 0 means use EMPTY readable store for all requests."
  default_availability: 0

enableLogFavBasedApeEntity20M145K2020EmbeddingCachedStore:
  comment: "Enable to use the non-empty store for logFavBasedApeEntity20M145K2020EmbeddingCachedStore (from 0% to 100%). 0 means use EMPTY readable store for all requests."
  default_availability: 0

representation-scorer_forward_dark_traffic:
  comment: "Defines the percentage of traffic to forward to diffy-proxy. Set to 0 to disable dark traffic forwarding"
  default_availability: 0

"representation-scorer_load_shed_non_prod_callers":
  comment: "Discard traffic from all non-prod callers"
  default_availability: 0

enable_log_fav_based_tweet_embedding_20m145k2020_timeouts:
  comment: "If enabled, set a timeout on calls to the logFavBased20M145K2020TweetEmbeddingStore"
  default_availability: 0

log_fav_based_tweet_embedding_20m145k2020_timeout_value_millis:
  comment: "The value of this decider defines the timeout (in milliseconds) to use on calls to the logFavBased20M145K2020TweetEmbeddingStore, i.e. 1.50% is 150ms. Only applied if enable_log_fav_based_tweet_embedding_20m145k2020_timeouts is true"
  default_availability: 2000

enable_log_fav_based_tweet_embedding_20m145kUpdated_timeouts:
  comment: "If enabled, set a timeout on calls to the logFavBased20M145KUpdatedTweetEmbeddingStore"
  default_availability: 0

log_fav_based_tweet_embedding_20m145kUpdated_timeout_value_millis:
  comment: "The value of this decider defines the timeout (in milliseconds) to use on calls to the logFavBased20M145KUpdatedTweetEmbeddingStore, i.e. 1.50% is 150ms. Only applied if enable_log_fav_based_tweet_embedding_20m145kUpdated_timeouts is true"
  default_availability: 2000

enable_cluster_tweet_index_store_timeouts:
  comment: "If enabled, set a timeout on calls to the ClusterTweetIndexStore"
  default_availability: 0

cluster_tweet_index_store_timeout_value_millis:
  comment: "The value of this decider defines the timeout (in milliseconds) to use on calls to the ClusterTweetIndexStore, i.e. 1.50% is 150ms. Only applied if enable_cluster_tweet_index_store_timeouts is true"
  default_availability: 2000

representation_scorer_fetch_signal_share:
  comment: "If enabled, fetches share signals from USS"
  default_availability: 0

representation_scorer_fetch_signal_reply:
  comment: "If enabled, fetches reply signals from USS"
  default_availability: 0

representation_scorer_fetch_signal_original_tweet:
  comment: "If enabled, fetches original tweet signals from USS"
  default_availability: 0

representation_scorer_fetch_signal_video_playback:
  comment: "If enabled, fetches video playback signals from USS"
  default_availability: 0

representation_scorer_fetch_signal_block:
  comment: "If enabled, fetches account block signals from USS"
  default_availability: 0

representation_scorer_fetch_signal_mute:
  comment: "If enabled, fetches account mute signals from USS"
  default_availability: 0

representation_scorer_fetch_signal_report:
  comment: "If enabled, fetches tweet report signals from USS"
  default_availability: 0

representation_scorer_fetch_signal_dont_like:
  comment: "If enabled, fetches tweet don't like signals from USS"
  default_availability: 0

representation_scorer_fetch_signal_see_fewer:
  comment: "If enabled, fetches tweet see fewer signals from USS"
  default_availability: 0

# To create a new decider, add here with the same format and caller's details : "representation-scorer_load_shed_by_caller_id_twtr:{{role}}:{{name}}:{{environment}}:{{cluster}}"
# All the deciders below are generated by this script - ./strato/bin/fed deciders ./ --service-role=representation-scorer --service-name=representation-scorer
# If you need to run the script and paste the output, add only the prod deciders here. Non-prod ones are being taken care of by representation-scorer_load_shed_non_prod_callers

"representation-scorer_load_shed_by_caller_id_all":
  comment: "Reject all traffic from caller id: all"
  default_availability: 0

"representation-scorer_load_shed_by_caller_id_twtr:svc:frigate:frigate-pushservice-canary:prod:atla":
  comment: "Reject all traffic from caller id: twtr:svc:frigate:frigate-pushservice-canary:prod:atla"
  default_availability: 0

"representation-scorer_load_shed_by_caller_id_twtr:svc:frigate:frigate-pushservice-canary:prod:pdxa":
  comment: "Reject all traffic from caller id: twtr:svc:frigate:frigate-pushservice-canary:prod:pdxa"
  default_availability: 0

"representation-scorer_load_shed_by_caller_id_twtr:svc:frigate:frigate-pushservice-send:prod:atla":
  comment: "Reject all traffic from caller id: twtr:svc:frigate:frigate-pushservice-send:prod:atla"
  default_availability: 0

"representation-scorer_load_shed_by_caller_id_twtr:svc:frigate:frigate-pushservice:prod:atla":
  comment: "Reject all traffic from caller id: twtr:svc:frigate:frigate-pushservice:prod:atla"
  default_availability: 0

"representation-scorer_load_shed_by_caller_id_twtr:svc:frigate:frigate-pushservice:prod:pdxa":
  comment: "Reject all traffic from caller id: twtr:svc:frigate:frigate-pushservice:prod:pdxa"
  default_availability: 0

"representation-scorer_load_shed_by_caller_id_twtr:svc:frigate:frigate-pushservice:staging:atla":
  comment: "Reject all traffic from caller id: twtr:svc:frigate:frigate-pushservice:staging:atla"
  default_availability: 0

"representation-scorer_load_shed_by_caller_id_twtr:svc:frigate:frigate-pushservice:staging:pdxa":
  comment: "Reject all traffic from caller id: twtr:svc:frigate:frigate-pushservice:staging:pdxa"
  default_availability: 0

"representation-scorer_load_shed_by_caller_id_twtr:svc:home-scorer:home-scorer:prod:atla":
  comment: "Reject all traffic from caller id: twtr:svc:home-scorer:home-scorer:prod:atla"
  default_availability: 0

"representation-scorer_load_shed_by_caller_id_twtr:svc:home-scorer:home-scorer:prod:pdxa":
  comment: "Reject all traffic from caller id: twtr:svc:home-scorer:home-scorer:prod:pdxa"
  default_availability: 0

"representation-scorer_load_shed_by_caller_id_twtr:svc:stratostore:stratoapi:prod:atla":
  comment: "Reject all traffic from caller id: twtr:svc:stratostore:stratoapi:prod:atla"
  default_availability: 0

"representation-scorer_load_shed_by_caller_id_twtr:svc:stratostore:stratoserver:prod:atla":
  comment: "Reject all traffic from caller id: twtr:svc:stratostore:stratoserver:prod:atla"
  default_availability: 0

"representation-scorer_load_shed_by_caller_id_twtr:svc:stratostore:stratoserver:prod:pdxa":
  comment: "Reject all traffic from caller id: twtr:svc:stratostore:stratoserver:prod:pdxa"
  default_availability: 0

"representation-scorer_load_shed_by_caller_id_twtr:svc:timelinescorer:timelinescorer:prod:atla":
  comment: "Reject all traffic from caller id: twtr:svc:timelinescorer:timelinescorer:prod:atla"
  default_availability: 0

"representation-scorer_load_shed_by_caller_id_twtr:svc:timelinescorer:timelinescorer:prod:pdxa":
  comment: "Reject all traffic from caller id: twtr:svc:timelinescorer:timelinescorer:prod:pdxa"
  default_availability: 0

"representation-scorer_load_shed_by_caller_id_twtr:svc:topic-social-proof:topic-social-proof:prod:atla":
  comment: "Reject all traffic from caller id: twtr:svc:topic-social-proof:topic-social-proof:prod:atla"
  default_availability: 0

"representation-scorer_load_shed_by_caller_id_twtr:svc:topic-social-proof:topic-social-proof:prod:pdxa":
  comment: "Reject all traffic from caller id: twtr:svc:topic-social-proof:topic-social-proof:prod:pdxa"
  default_availability: 0

"enable_sim_clusters_embedding_store_timeouts":
  comment: "If enabled, set a timeout on calls to the SimClustersEmbeddingStore"
  default_availability: 10000

sim_clusters_embedding_store_timeout_value_millis:
  comment: "The value of this decider defines the timeout (in milliseconds) to use on calls to the SimClustersEmbeddingStore, i.e. 1.50% is 150ms. Only applied if enable_sim_clusters_embedding_store_timeouts is true"
  default_availability: 2000
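
The *_timeouts / *_timeout_value_millis entries above form a recurring pattern: one decider gates whether a timeout is applied at all, and its companion decider's raw availability value (0-10000) is read directly as a millisecond duration. Here is a hedged sketch of how a store client might consume such a pair; the Decider trait below is a local stand-in, not Twitter's internal decider API:

import com.twitter.conversions.DurationOps._
import com.twitter.util.Duration

// Local stand-in for the decider client interface (assumed, simplified).
trait Decider {
  def isAvailable(feature: String): Boolean       // boolean gate
  def availability(feature: String): Option[Int]  // raw 0-10000 value
}

// Timeout to apply to ClusterTweetIndexStore calls, if any.
def clusterTweetIndexStoreTimeout(decider: Decider): Option[Duration] =
  if (decider.isAvailable("enable_cluster_tweet_index_store_timeouts"))
    decider
      .availability("cluster_tweet_index_store_timeout_value_millis")
      .map(_.milliseconds) // e.g. 2000 -> 2000.milliseconds (displayed as 20.00%)
  else
    None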
165
representation-scorer/server/src/main/resources/logback.xml
Normal file
@ -0,0 +1,165 @@
<configuration>
  <shutdownHook class="ch.qos.logback.core.hook.DelayingShutdownHook"/>

  <!-- ===================================================== -->
  <!-- Service Config                                        -->
  <!-- ===================================================== -->
  <property name="DEFAULT_SERVICE_PATTERN"
            value="%-16X{traceId} %-12X{clientId:--} %-16X{method} %-25logger{0} %msg"/>

  <property name="DEFAULT_ACCESS_PATTERN"
            value="%msg"/>

  <!-- ===================================================== -->
  <!-- Common Config                                         -->
  <!-- ===================================================== -->

  <!-- JUL/JDK14 to Logback bridge -->
  <contextListener class="ch.qos.logback.classic.jul.LevelChangePropagator">
    <resetJUL>true</resetJUL>
  </contextListener>

  <!-- ====================================================================================== -->
  <!-- NOTE: The following appenders use a simple TimeBasedRollingPolicy configuration.       -->
  <!-- You may want to consider using a more advanced SizeAndTimeBasedRollingPolicy.          -->
  <!-- See: https://logback.qos.ch/manual/appenders.html#SizeAndTimeBasedRollingPolicy        -->
  <!-- ====================================================================================== -->

  <!-- Service Log (rollover daily, keep maximum of 21 days of gzip compressed logs) -->
  <appender name="SERVICE" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <file>${log.service.output}</file>
    <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
      <!-- daily rollover -->
      <fileNamePattern>${log.service.output}.%d.gz</fileNamePattern>
      <!-- the maximum total size of all the log files -->
      <totalSizeCap>3GB</totalSizeCap>
      <!-- keep maximum 21 days' worth of history -->
      <maxHistory>21</maxHistory>
      <cleanHistoryOnStart>true</cleanHistoryOnStart>
    </rollingPolicy>
    <encoder>
      <pattern>%date %.-3level ${DEFAULT_SERVICE_PATTERN}%n</pattern>
    </encoder>
  </appender>

  <!-- Access Log (rollover daily, keep maximum of 7 days of gzip compressed logs) -->
  <appender name="ACCESS" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <file>${log.access.output}</file>
    <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
      <!-- daily rollover -->
      <fileNamePattern>${log.access.output}.%d.gz</fileNamePattern>
      <!-- the maximum total size of all the log files -->
      <totalSizeCap>100MB</totalSizeCap>
      <!-- keep maximum 7 days' worth of history -->
      <maxHistory>7</maxHistory>
      <cleanHistoryOnStart>true</cleanHistoryOnStart>
    </rollingPolicy>
    <encoder>
      <pattern>${DEFAULT_ACCESS_PATTERN}%n</pattern>
    </encoder>
  </appender>

  <!-- LogLens -->
  <appender name="LOGLENS" class="com.twitter.loglens.logback.LoglensAppender">
    <mdcAdditionalContext>true</mdcAdditionalContext>
    <category>${log.lens.category}</category>
    <index>${log.lens.index}</index>
    <tag>${log.lens.tag}/service</tag>
    <encoder>
      <pattern>%msg</pattern>
    </encoder>
  </appender>

  <!-- LogLens Access -->
  <appender name="LOGLENS-ACCESS" class="com.twitter.loglens.logback.LoglensAppender">
    <mdcAdditionalContext>true</mdcAdditionalContext>
    <category>${log.lens.category}</category>
    <index>${log.lens.index}</index>
    <tag>${log.lens.tag}/access</tag>
    <encoder>
      <pattern>%msg</pattern>
    </encoder>
  </appender>

  <!-- Pipeline Execution Logs -->
  <appender name="ALLOW-LISTED-PIPELINE-EXECUTIONS" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <file>allow_listed_pipeline_executions.log</file>
    <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
      <!-- daily rollover -->
      <fileNamePattern>allow_listed_pipeline_executions.log.%d.gz</fileNamePattern>
      <!-- the maximum total size of all the log files -->
      <totalSizeCap>100MB</totalSizeCap>
      <!-- keep maximum 7 days' worth of history -->
      <maxHistory>7</maxHistory>
      <cleanHistoryOnStart>true</cleanHistoryOnStart>
    </rollingPolicy>
    <encoder>
      <pattern>%date %.-3level ${DEFAULT_SERVICE_PATTERN}%n</pattern>
    </encoder>
  </appender>

  <!-- ===================================================== -->
  <!-- Primary Async Appenders                               -->
  <!-- ===================================================== -->

  <property name="async_queue_size" value="${queue.size:-50000}"/>
  <property name="async_max_flush_time" value="${max.flush.time:-0}"/>

  <appender name="ASYNC-SERVICE" class="com.twitter.inject.logback.AsyncAppender">
    <queueSize>${async_queue_size}</queueSize>
    <maxFlushTime>${async_max_flush_time}</maxFlushTime>
    <appender-ref ref="SERVICE"/>
  </appender>

  <appender name="ASYNC-ACCESS" class="com.twitter.inject.logback.AsyncAppender">
    <queueSize>${async_queue_size}</queueSize>
    <maxFlushTime>${async_max_flush_time}</maxFlushTime>
    <appender-ref ref="ACCESS"/>
  </appender>

  <appender name="ASYNC-ALLOW-LISTED-PIPELINE-EXECUTIONS" class="com.twitter.inject.logback.AsyncAppender">
    <queueSize>${async_queue_size}</queueSize>
    <maxFlushTime>${async_max_flush_time}</maxFlushTime>
    <appender-ref ref="ALLOW-LISTED-PIPELINE-EXECUTIONS"/>
  </appender>

  <appender name="ASYNC-LOGLENS" class="com.twitter.inject.logback.AsyncAppender">
    <queueSize>${async_queue_size}</queueSize>
    <maxFlushTime>${async_max_flush_time}</maxFlushTime>
    <appender-ref ref="LOGLENS"/>
  </appender>

  <appender name="ASYNC-LOGLENS-ACCESS" class="com.twitter.inject.logback.AsyncAppender">
    <queueSize>${async_queue_size}</queueSize>
    <maxFlushTime>${async_max_flush_time}</maxFlushTime>
    <appender-ref ref="LOGLENS-ACCESS"/>
  </appender>

  <!-- ===================================================== -->
  <!-- Package Config                                        -->
  <!-- ===================================================== -->

  <!-- Per-Package Config -->
  <logger name="com.twitter" level="INHERITED"/>
  <logger name="com.twitter.wilyns" level="INHERITED"/>
  <logger name="com.twitter.configbus.client.file" level="INHERITED"/>
  <logger name="com.twitter.finagle.mux" level="INHERITED"/>
  <logger name="com.twitter.finagle.serverset2" level="INHERITED"/>
  <logger name="com.twitter.logging.ScribeHandler" level="INHERITED"/>
  <logger name="com.twitter.zookeeper.client.internal" level="INHERITED"/>

  <!-- Root Config -->
  <!-- For all logs except access logs, disable logging below log_level level by default. This can be overridden in the per-package loggers, and dynamically in the admin panel of individual instances. -->
  <root level="${log_level:-INFO}">
    <appender-ref ref="ASYNC-SERVICE"/>
    <appender-ref ref="ASYNC-LOGLENS"/>
  </root>

  <!-- Access Logging -->
  <!-- Access logs are turned off by default -->
  <logger name="com.twitter.finatra.thrift.filters.AccessLoggingFilter" level="OFF" additivity="false">
    <appender-ref ref="ASYNC-ACCESS"/>
    <appender-ref ref="ASYNC-LOGLENS-ACCESS"/>
  </logger>

</configuration>
@ -0,0 +1,13 @@
scala_library(
    compiler_option_sets = ["fatal_warnings"],
    platform = "java8",
    tags = ["bazel-compatible"],
    dependencies = [
        "finagle-internal/slo/src/main/scala/com/twitter/finagle/slo",
        "finatra/inject/inject-thrift-client",
        "representation-scorer/server/src/main/scala/com/twitter/representationscorer/columns",
        "strato/src/main/scala/com/twitter/strato/fed",
        "strato/src/main/scala/com/twitter/strato/fed/server",
        "twitter-server-internal/src/main/scala",
    ],
)
@ -0,0 +1,38 @@
package com.twitter.representationscorer

import com.google.inject.Module
import com.twitter.inject.thrift.modules.ThriftClientIdModule
import com.twitter.representationscorer.columns.ListScoreColumn
import com.twitter.representationscorer.columns.ScoreColumn
import com.twitter.representationscorer.columns.SimClustersRecentEngagementSimilarityColumn
import com.twitter.representationscorer.columns.SimClustersRecentEngagementSimilarityUserTweetEdgeColumn
import com.twitter.representationscorer.modules.CacheModule
import com.twitter.representationscorer.modules.EmbeddingStoreModule
import com.twitter.representationscorer.modules.RMSConfigModule
import com.twitter.representationscorer.modules.TimerModule
import com.twitter.representationscorer.twistlyfeatures.UserSignalServiceRecentEngagementsClientModule
import com.twitter.strato.fed._
import com.twitter.strato.fed.server._

object RepresentationScorerFedServerMain extends RepresentationScorerFedServer

trait RepresentationScorerFedServer extends StratoFedServer {
  override def dest: String = "/s/representation-scorer/representation-scorer"

  // Guice modules wiring up caches, embedding stores, timers and thrift clients.
  override val modules: Seq[Module] =
    Seq(
      CacheModule,
      ThriftClientIdModule,
      UserSignalServiceRecentEngagementsClientModule,
      TimerModule,
      RMSConfigModule,
      EmbeddingStoreModule
    )

  // Strato columns exposed by this federated server.
  override def columns: Seq[Class[_ <: StratoFed.Column]] =
    Seq(
      classOf[ListScoreColumn],
      classOf[ScoreColumn],
      classOf[SimClustersRecentEngagementSimilarityUserTweetEdgeColumn],
      classOf[SimClustersRecentEngagementSimilarityColumn]
    )
}
@ -0,0 +1,16 @@
scala_library(
    compiler_option_sets = ["fatal_warnings"],
    platform = "java8",
    tags = ["bazel-compatible"],
    dependencies = [
        "content-recommender/thrift/src/main/thrift:thrift-scala",
        "finatra/inject/inject-core/src/main/scala",
        "representation-scorer/server/src/main/scala/com/twitter/representationscorer/common",
        "representation-scorer/server/src/main/scala/com/twitter/representationscorer/modules",
        "representation-scorer/server/src/main/scala/com/twitter/representationscorer/scorestore",
        "representation-scorer/server/src/main/scala/com/twitter/representationscorer/twistlyfeatures",
        "representation-scorer/server/src/main/thrift:thrift-scala",
        "strato/src/main/scala/com/twitter/strato/fed",
        "strato/src/main/scala/com/twitter/strato/fed/server",
    ],
)
@ -0,0 +1,13 @@
package com.twitter.representationscorer.columns

import com.twitter.strato.config.{ContactInfo => StratoContactInfo}

object Info {
  val contactInfo: StratoContactInfo = StratoContactInfo(
    description = "Please contact Relevance Platform team for more details",
    contactEmail = "no-reply@twitter.com",
    ldapGroup = "representation-scorer-admins",
    jiraProject = "JIRA",
    links = Seq("http://go.twitter.biz/rsx-runbook")
  )
}
@ -0,0 +1,116 @@
package com.twitter.representationscorer.columns

import com.twitter.representationscorer.thriftscala.ListScoreId
import com.twitter.representationscorer.thriftscala.ListScoreResponse
import com.twitter.representationscorer.scorestore.ScoreStore
import com.twitter.representationscorer.thriftscala.ScoreResult
import com.twitter.simclusters_v2.common.SimClustersEmbeddingId.LongInternalId
import com.twitter.simclusters_v2.common.SimClustersEmbeddingId.LongSimClustersEmbeddingId
import com.twitter.simclusters_v2.thriftscala.Score
import com.twitter.simclusters_v2.thriftscala.ScoreId
import com.twitter.simclusters_v2.thriftscala.ScoreInternalId
import com.twitter.simclusters_v2.thriftscala.SimClustersEmbeddingId
import com.twitter.simclusters_v2.thriftscala.SimClustersEmbeddingPairScoreId
import com.twitter.stitch
import com.twitter.stitch.Stitch
import com.twitter.strato.catalog.OpMetadata
import com.twitter.strato.config.ContactInfo
import com.twitter.strato.config.Policy
import com.twitter.strato.data.Conv
import com.twitter.strato.data.Description.PlainText
import com.twitter.strato.data.Lifecycle
import com.twitter.strato.fed._
import com.twitter.strato.thrift.ScroogeConv
import com.twitter.util.Future
import com.twitter.util.Return
import com.twitter.util.Throw
import javax.inject.Inject

class ListScoreColumn @Inject() (scoreStore: ScoreStore)
    extends StratoFed.Column("recommendations/representation_scorer/listScore")
    with StratoFed.Fetch.Stitch {

  override val policy: Policy = Common.rsxReadPolicy

  override type Key = ListScoreId
  override type View = Unit
  override type Value = ListScoreResponse

  override val keyConv: Conv[Key] = ScroogeConv.fromStruct[ListScoreId]
  override val viewConv: Conv[View] = Conv.ofType
  override val valueConv: Conv[Value] = ScroogeConv.fromStruct[ListScoreResponse]

  override val contactInfo: ContactInfo = Info.contactInfo

  override val metadata: OpMetadata = OpMetadata(
    lifecycle = Some(Lifecycle.Production),
    description = Some(
      PlainText(
        "Scoring for multiple candidate entities against a single target entity"
      ))
  )

  override def fetch(key: Key, view: View): Stitch[Result[Value]] = {

    // Pair the single target embedding id with each candidate id, producing
    // one ScoreId per (target, candidate) pair.
    val target = SimClustersEmbeddingId(
      embeddingType = key.targetEmbeddingType,
      modelVersion = key.modelVersion,
      internalId = key.targetId
    )
    val scoreIds = key.candidateIds.map { candidateId =>
      val candidate = SimClustersEmbeddingId(
        embeddingType = key.candidateEmbeddingType,
        modelVersion = key.modelVersion,
        internalId = candidateId
      )
      ScoreId(
        algorithm = key.algorithm,
        internalId = ScoreInternalId.SimClustersEmbeddingPairScoreId(
          SimClustersEmbeddingPairScoreId(target, candidate)
        )
      )
    }

    Stitch
      .callFuture {
        // Fetch all pair scores in a single multiGet; failed or missing
        // lookups become None rather than failing the whole request.
        val (keys: Iterable[ScoreId], vals: Iterable[Future[Option[Score]]]) =
          scoreStore.uniformScoringStore.multiGet(scoreIds.toSet).unzip
        val results: Future[Iterable[Option[Score]]] = Future.collectToTry(vals.toSeq) map {
          tryOptVals =>
            tryOptVals map {
              case Return(Some(v)) => Some(v)
              case Return(None) => None
              case Throw(_) => None
            }
        }
        // Key the successfully fetched scores by the candidate's Long id.
        val scoreMap: Future[Map[Long, Double]] = results.map { scores =>
          keys
            .zip(scores).collect {
              case (
                    ScoreId(
                      _,
                      ScoreInternalId.SimClustersEmbeddingPairScoreId(
                        SimClustersEmbeddingPairScoreId(
                          _,
                          LongSimClustersEmbeddingId(candidateId)))),
                    Some(score)) =>
                (candidateId, score.score)
            }.toMap
        }
        scoreMap
      }
      .map { (scores: Map[Long, Double]) =>
        // Return results in the same order as the requested candidateIds.
        val orderedScores = key.candidateIds.collect {
          case LongInternalId(id) => ScoreResult(scores.get(id))
          case _ =>
            // This will return None scores for candidates which don't have Long ids, but that's fine:
            // at the moment we're only scoring for Tweets
            ScoreResult(None)
        }
        found(ListScoreResponse(orderedScores))
      }
      .handle {
        case stitch.NotFound => missing
      }
  }
}
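
For reference, here is a hedged sketch of the kind of key this column consumes: one target entity scored against several candidates in a single fetch. The field names follow the Key type above; the enum values are assumptions for illustration.

import com.twitter.representationscorer.thriftscala.ListScoreId
import com.twitter.simclusters_v2.thriftscala._

val listScoreKey = ListScoreId(
  algorithm = ScoringAlgorithm.PairEmbeddingCosineSimilarity,   // assumed value
  modelVersion = ModelVersion.Model20m145k2020,                 // assumed value
  targetEmbeddingType = EmbeddingType.FavBasedUserInterestedIn, // assumed value
  targetId = InternalId.UserId(12L),
  candidateEmbeddingType = EmbeddingType.LogFavBasedTweet,      // assumed value
  candidateIds = Seq(InternalId.TweetId(34L), InternalId.TweetId(56L))
)
// The response preserves request order: the i-th ScoreResult corresponds to candidateIds(i).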
Some files were not shown because too many files have changed in this diff.