A library to conduct ranking experiments with pre-trained transformers.

Setup

#Clone the repo
git clone https://github.com/Guzpenha/transformer_rankers.git
cd transformer_rankers    

#Create a virtual environment
python3 -m venv env; source env/bin/activate

#Install the library and the requirements.
pip install -e .
pip install -r requirements.txt

Examples

Open In Colab: Fine-tune pointwise BERT for Community Question Answering (a minimal pointwise sketch follows the examples below).

Wandb report: Fine-tuning BERT for Conversation Response Ranking.
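
For reference, here is a minimal pointwise fine-tuning sketch that uses the Hugging Face transformers API directly (not the transformer_rankers training loop); the model name, toy data and hyperparameters are placeholders.

#Minimal pointwise ranking sketch with Hugging Face transformers (not the transformer_rankers API).
import torch
from transformers import BertTokenizerFast, BertForSequenceClassification

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

#One (question, candidate answer) pair with a binary relevance label.
queries = ["How do I reset my password?"]
candidates = ["Go to settings and click 'forgot password'."]
labels = torch.tensor([1])  #1 = relevant, 0 = non-relevant

inputs = tokenizer(queries, candidates, truncation=True, padding=True, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

#One training step: the model is trained as a binary classifier over (query, candidate) pairs.
model.train()
loss = model(**inputs, labels=labels).loss
loss.backward()
optimizer.step()

#At inference time, the probability of the "relevant" class is used as the ranking score.
model.eval()
with torch.no_grad():
    scores = torch.softmax(model(**inputs).logits, dim=-1)[:, 1]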

News

29-01-2021: Two papers recently used the transformer-rankers library to conduct experiments: On the Calibration and Uncertainty of Neural Learning to Rank Models for Conversational Search (EACL’21) and Weakly Supervised Label Smoothing (ECIR’21).

15-09-2020: Cross-entropy label smoothing was implemented as a loss function for learning-to-rank BERT models.
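
As an illustration of the technique (not the library's implementation), a label-smoothed cross entropy over binary relevance logits can be written in plain PyTorch as:

#Generic label-smoothed cross entropy over relevance logits (illustrative only).
import torch
import torch.nn.functional as F

def smoothed_cross_entropy(logits, targets, smoothing=0.1):
    #Spread `smoothing` probability mass over the non-gold classes,
    #keeping 1 - smoothing on the gold class.
    num_classes = logits.size(-1)
    log_probs = F.log_softmax(logits, dim=-1)
    true_dist = torch.full_like(log_probs, smoothing / (num_classes - 1))
    true_dist.scatter_(-1, targets.unsqueeze(-1), 1.0 - smoothing)
    return torch.mean(torch.sum(-true_dist * log_probs, dim=-1))

logits = torch.randn(4, 2)            #4 (query, doc) pairs, non-relevant/relevant logits
targets = torch.tensor([1, 0, 1, 0])  #binary relevance labels
loss = smoothed_cross_entropy(logits, targets)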

09-09-2020: Easily download and preprocess data for a task with DataDownloader. Currently 7 datasets for different retrieval tasks are implemented.
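
A rough usage sketch is below; the module path, task name and method names are assumptions based on the description above, so check the documentation for the exact API.

#Assumed usage of DataDownloader; module path, task name and method names may differ.
from transformer_rankers.datasets import downloader  #assumed module path

task = "qqp"             #assumed identifier for one of the supported retrieval tasks
data_folder = "./data/"  #where raw and preprocessed files will be stored

data_downloader = downloader.DataDownloader(task, data_folder)  #assumed constructor
data_downloader.download_and_preprocess()                       #assumed method name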

07-09-2020: Pairwise BERT ranker implemented.
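
The pairwise idea, illustrated with a plain PyTorch margin ranking loss rather than the library's pairwise BERT ranker: for the same query, the relevant candidate should be scored above a sampled negative.

#Illustrative pairwise objective: relevant candidates should outscore sampled negatives.
import torch

pos_scores = torch.tensor([0.8, 0.3], requires_grad=True)  #scores for relevant candidates
neg_scores = torch.tensor([0.5, 0.4], requires_grad=True)  #scores for sampled negatives
target = torch.ones_like(pos_scores)  #+1 means pos_scores should rank above neg_scores

loss = torch.nn.MarginRankingLoss(margin=1.0)(pos_scores, neg_scores, target)
loss.backward()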

10-08-2020: Transformer-rankers was used to generate baselines for the ClariQ challenge.

10-07-2020: Get uncertainty estimates, i.e. variance, for rankers' relevance predictions with MC Dropout at inference time using predict_with_uncertainty.
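
The underlying idea, sketched in plain PyTorch (an illustration of MC Dropout, not the predict_with_uncertainty implementation): keep dropout active at inference time, run several stochastic forward passes, and use their variance as the uncertainty estimate.

#Generic MC Dropout sketch for a Hugging Face style classifier (illustrative only).
import torch

def mc_dropout_predict(model, inputs, num_passes=10):
    model.train()  #keep dropout layers active during inference
    with torch.no_grad():
        scores = torch.stack([
            torch.softmax(model(**inputs).logits, dim=-1)[:, 1]
            for _ in range(num_passes)
        ])
    return scores.mean(dim=0), scores.var(dim=0)  #mean relevance score and its variance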

09-07-2020: Transformer-rankers initial version released with support for 6 ranking datasets and negative sampling techniques (e.g. BM25, sentenceBERT similarity). The library uses Hugging Face pre-trained transformer models for ranking. See the main components at the documentation page.
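
For the sentenceBERT-style negative sampling idea, a generic sketch with the sentence-transformers package (not the library's negative_samplers module; the model name and toy data are placeholders):

#Similarity-based negative sampling sketch: pick the candidate most similar to the
#query that is not the true answer (a "hard" negative). Illustrative only.
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")

query = "How do I reset my password?"
candidates = [
    "Go to settings and click 'forgot password'.",   #true answer (index 0)
    "Passwords must contain at least 8 characters.",
    "You can change your email in the profile page.",
]

query_emb = encoder.encode(query, convert_to_tensor=True)
cand_embs = encoder.encode(candidates, convert_to_tensor=True)
similarities = util.cos_sim(query_emb, cand_embs)[0]

#The hardest negative is the most similar candidate that is not the true answer.
hard_negative = max(
    (i for i in range(len(candidates)) if i != 0),
    key=lambda i: similarities[i].item(),
)
print(candidates[hard_negative])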