😷 The Fill-Mask Association Test (FMAT): Measuring Propositions in Natural Language.
Updated Jun 13, 2024 - R
A high-performance inference system for large language models, designed for production environments.
A simple proof-of-concept (PoC) API server for hate-speech (toxic-comment) detection
A high-throughput and memory-efficient inference and serving engine for LLMs
A framework for the comparative training and evaluation of statistical and deep-learning models for multi-feature categorical sequence modeling, using feature fusion and automated experiment tracking via MLflow and Optuna integration.
This repository is a paper digest of Transformer-related approaches in visual tracking tasks.
Build high-performance AI models with modular building blocks
A comprehensive paper list on Vision Transformers/Attention, including papers, code, and related websites
Code I used for my YouTube videos
Popular large language models implemented from scratch (2024)
OpenMMLab Detection Toolbox and Benchmark
vLLM: A high-throughput and memory-efficient inference and serving engine for LLMs
Deep neural network models implemented from scratch in PyTorch for time series forecasting
Large Language Model Text Generation Inference
Code for ICASSP 2024 paper WhisperSeg: Positive Transfer of the Whisper Speech Transformer to Human and Animal Voice Activity Detection
🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
🔧 A Kotlin coroutine wrapper around Media3's Transformer API.
JetStream is a throughput and memory optimized engine for LLM inference on XLA devices, starting with TPUs (and GPUs in future -- PRs welcome).