📄 Notable* Recent AI/ML arXiv Papers


📄 AdaptToken: Entropy-based Adaptive Token Selection for MLLM Long Video Understanding
🗓️ Published: 3/30/2026
🔗 http://arxiv.org/abs/2603.28696v1
👥 Authors: Haozhe Qi, Kevin Qu, Mahdi Rad, Rui Wang (possible past Tencent (China) affiliation), Alexander Mathis, Marc Pollefeys (possible past Google (United States) affiliation)
Abstract

Long video understanding remains challenging for Multi-modal Large Language Models (MLLMs) due to high memory costs and context-length limits. Prior approaches mitigate this by scoring and selecting frames/tokens within short clips, but they lack a principled mechanism to (i) compare relevance across distant video clips and (ii) stop processing once sufficient evidence has been gathered. We propose AdaptToken, a training-free framework that turns an MLLM's self-uncertainty into a global control ...
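The "self-uncertainty" signal that training-free selection methods like this typically rely on is the Shannon entropy of the model's next-token distribution: low entropy suggests the model is confident it has found relevant evidence. A minimal sketch of that idea (function names and the mean-entropy scoring rule are illustrative assumptions, not AdaptToken's actual mechanism):

```python
import numpy as np

def token_entropy(logits: np.ndarray) -> np.ndarray:
    """Shannon entropy (nats) of the softmax distribution for each row of logits."""
    z = logits - logits.max(axis=-1, keepdims=True)  # numerical stability
    p = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    return -(p * np.log(p + 1e-12)).sum(axis=-1)

def select_confident_clips(clip_logits: list, budget: int) -> list:
    """Keep the `budget` clips whose mean next-token entropy is lowest,
    i.e. where the model is most certain about the evidence it has seen."""
    scores = [token_entropy(l).mean() for l in clip_logits]
    order = np.argsort(scores)  # ascending entropy = most confident first
    return sorted(order[:budget].tolist())
```

Because entropy is comparable across clips, a score like this can serve as the kind of global relevance signal the abstract describes, unlike per-clip attention scores.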

📄 MonitorBench: A Comprehensive Benchmark for Chain-of-Thought Monitorability in Large Language Models
🗓️ Published: 3/30/2026
🔗 http://arxiv.org/abs/2603.28590v1
👥 Authors: Han Wang (possible past Peking University affiliation), Yifan Sun (possible past Baidu (China) affiliation), Brian Ko, Mann Talati, Jiawen Gong, Zimeng Li, Naicheng Yu, Xucheng Yu, Wei Shen (possible past Tsinghua University affiliation), Vedant Jolly, Huan Zhang
Abstract

Large language models (LLMs) can generate chains of thought (CoTs) that are not always causally responsible for their final outputs. When such a mismatch occurs, the CoT no longer faithfully reflects the decision-critical factors driving the model's behavior, leading to the reduced CoT monitorability problem. However, a comprehensive and fully open-source benchmark for studying CoT monitorability remains lacking. To address this gap, we propose MonitorBench, a systematic benchmark for evaluating...

📄 Towards a Medical AI Scientist
🗓️ Published: 3/30/2026
🔗 http://arxiv.org/abs/2603.28589v1
👥 Authors: Hongtao Wu, Boyun Zheng, Dingjie Song, Yu Jiang, Jianfeng Gao (possible past Microsoft (United States) affiliation), Lei Xing (possible past Stanford University affiliation), Lichao Sun, Yixuan Yuan
Abstract

Autonomous systems that generate scientific hypotheses, conduct experiments, and draft manuscripts have recently emerged as a promising paradigm for accelerating discovery. However, existing AI Scientists remain largely domain-agnostic, limiting their applicability to clinical medicine, where research is required to be grounded in medical evidence with specialized data modalities. In this work, we introduce Medical AI Scientist, the first autonomous research framework tailored to clinical autono...

📄 Next-Token Prediction and Regret Minimization
🗓️ Published: 3/30/2026
🔗 http://arxiv.org/abs/2603.28499v1
👥 Authors: Mehryar Mohri (possible past Google (United States) affiliation), Clayton Sanford, Jon Schneider, Kiran Vodrahalli, Yifan Wu (possible past Carnegie Mellon University affiliation)
Abstract

We consider the question of how to employ next-token prediction algorithms in adversarial online decision-making environments. Specifically, if we train a next-token prediction model on a distribution $\mathcal{D}$ over sequences of opponent actions, when is it the case that the induced online decision-making algorithm (by approximately best responding to the model's predictions) has low adversarial regret (i.e., when is $\mathcal{D}$ a \emph{low-regret distribution})? For unbounded context wi...

📄 HISA: Efficient Hierarchical Indexing for Fine-Grained Sparse Attention
🗓️ Published: 3/30/2026
🔗 http://arxiv.org/abs/2603.28458v1
👥 Authors: Yufei Xu, Fanxu Meng, Fan Jiang (possible past Shanghai Jiao Tong University affiliation), Yuxuan Wang (possible past Google (United States) affiliation), Ruijie Zhou, Jiexi Wu, Zhixin Pan, Zhaohui Wang, Xiaojuan Tang, Wenjie Pei (possible past Tencent (China) affiliation), Tongxuan Liu, Di Yin, Xing Sun (possible past Tencent (China) affiliation), Muhan Zhang (possible past Meta (United States) affiliation)
Abstract

Token-level sparse attention mechanisms, exemplified by DeepSeek Sparse Attention (DSA), achieve fine-grained key selection by scoring every historical token for each query using a lightweight indexer, and then computing attention only over the selected subset. While the downstream sparse attention scales efficiently, the indexer still scans the entire prefix for every query, introducing an $O(L^2)$ per-layer bottleneck that becomes prohibitive as context length grows. We propose HISA (Hierarchi...
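The DSA-style baseline the paper starts from can be sketched as: a cheap indexer scores every prefix key for each query, and exact attention then runs only over the top-k selected keys. A toy NumPy version (dimensions and the indexer scores are illustrative; HISA's hierarchical index, which avoids the full prefix scan, is not shown):

```python
import numpy as np

def indexed_sparse_attention(q, K, V, indexer_scores, top_k):
    """q: (d,), K/V: (L, d), indexer_scores: (L,) cheap per-token relevance.
    Select the top_k keys by indexer score, then attend exactly over them."""
    sel = np.argsort(-indexer_scores)[:top_k]      # fine-grained key selection
    logits = K[sel] @ q / np.sqrt(K.shape[1])      # exact attention on subset
    w = np.exp(logits - logits.max())
    w /= w.sum()
    return w @ V[sel]
```

The $O(L^2)$ bottleneck the abstract identifies lives in producing `indexer_scores`: even a cheap score still touches every one of the L prefix tokens for each of the L queries.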

📄 MiroEval: Benchmarking Multimodal Deep Research Agents in Process and Outcome
🗓️ Published: 3/30/2026
🔗 http://arxiv.org/abs/2603.28407v1
👥 Authors: Fangda Ye, Yuxin Hu, Pengxiang Zhu, Yibo Li, Ziqi Jin, Yao Xiao, Yibo Wang, Lei Wang (possible past Baidu (China) affiliation), Zhen Zhang, Lu Wang (possible past University Of Washington affiliation), Yue Deng, Bin Wang, Yifan Zhang, Liangcai Su, Xinyu Wang, He Zhao (possible past Tencent (China) affiliation), Chen Wei, Qiang Ren, Bryan Hooi, An Bo, Shuicheng Yan (possible past National University Of Singapore affiliation), Lidong Bing (possible past Tencent (China) affiliation)
Abstract

Recent progress in deep research systems has been impressive, but evaluation still lags behind real user needs. Existing benchmarks predominantly assess final reports using fixed rubrics, failing to evaluate the underlying research process. Most also offer limited multimodal coverage, rely on synthetic tasks that do not reflect real-world query complexity, and cannot be refreshed as knowledge evolves. To address these gaps, we introduce MiroEval, a benchmark and evaluation framework for deep res...

📄 Skillful Kilometer-Scale Regional Weather Forecasting via Global and Regional Coupling
🗓️ Published: 3/30/2026
🔗 http://arxiv.org/abs/2603.28173v1
👥 Authors: Weiqi Chen, Wenwei Wang, Qilong Yuan, Lefei Shen, Bingqing Peng, Jiawei Chen (possible past Tencent (China) affiliation), Bo Wu (possible past Tencent (China) affiliation), Liang Sun
Abstract

Data-driven weather models have advanced global medium-range forecasting, yet high-resolution regional prediction remains challenging due to unresolved multiscale interactions between large-scale dynamics and small-scale processes such as terrain-induced circulations and coastal effects. This paper presents a global-regional coupling framework for kilometer-scale regional weather forecasting that synergistically couples a pretrained Transformer-based global model with a high-resolution regional ...

📄 CoT2-Meta: Budgeted Metacognitive Control for Test-Time Reasoning
🗓️ Published: 3/30/2026
🔗 http://arxiv.org/abs/2603.28135v1
👥 Authors: Siyuan Ma, Bo Gao, Zikai Xiao, Hailong Wang (possible past Google (United States) affiliation), Xinlei Yu, Rui Qian (possible past Shanghai Jiao Tong University affiliation), Jiayu Qian, Luqi Gong, Yang Liu (possible past Tsinghua University affiliation)
Abstract

Recent test-time reasoning methods improve performance by generating more candidate chains or searching over larger reasoning trees, but they typically lack explicit control over when to expand, what to prune, how to repair, and when to abstain. We introduce CoT2-Meta, a training-free metacognitive reasoning framework that combines object-level chain-of-thought generation with meta-level control over partial reasoning trajectories. The framework integrates four components: strategy-conditioned t...

📄 MolmoPoint: Better Pointing for VLMs with Grounding Tokens
🗓️ Published: 3/30/2026
🔗 http://arxiv.org/abs/2603.28069v1
👥 Authors: Christopher Clark, Yue Yang, Jae Sung Park (possible past University Of California, Berkeley affiliation), Zixian Ma, Jieyu Zhang, Rohun Tripathi, Mohammadreza Salehi, Sangho Lee, Taira Anderson, Winson Han, Ranjay Krishna (possible past University Of Washington affiliation)
Abstract

Grounding has become a fundamental capability of vision-language models (VLMs). Most existing VLMs point by generating coordinates as part of their text output, which requires learning a complicated coordinate system and results in a high token count. Instead, we propose a more intuitive pointing mechanism that directly selects the visual tokens that contain the target concept. Our model generates a special pointing token that cross-attends to the input image or video tokens and selects the appr...
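Selecting a visual token via attention, rather than emitting coordinate text, can be sketched as: the special pointing token's query cross-attends to the image tokens and the highest-weight token is the answer. A toy version (all names are hypothetical; the real model learns these projections end to end):

```python
import numpy as np

def point_to_token(point_query, visual_tokens):
    """point_query: (d,) query vector of the special pointing token;
    visual_tokens: (N, d) image/video token embeddings.
    Return the index of the visual token with the highest cross-attention
    weight -- a direct selection instead of generated coordinates."""
    scores = visual_tokens @ point_query / np.sqrt(len(point_query))
    w = np.exp(scores - scores.max())
    w /= w.sum()  # softmax attention weights over visual tokens
    return int(np.argmax(w))
```

One point then costs a single generated token rather than a multi-token coordinate string, which is the token-count saving the abstract alludes to.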

📄 EvA: An Evidence-First Audio Understanding Paradigm for LALMs
🗓️ Published: 3/29/2026
🔗 http://arxiv.org/abs/2603.27667v1
👥 Authors: Xinyuan Xie, Shunian Chen, Zhiheng Liu, Yuhao Zhang, Zhiqiang Lv (possible past Tencent (China) affiliation), Liyin Liang, Benyou Wang (possible past Tencent (China) affiliation)
Abstract

Large Audio Language Models (LALMs) still struggle in complex acoustic scenes because they often fail to preserve task-relevant acoustic evidence before reasoning begins. We call this failure the evidence bottleneck: state-of-the-art systems show larger deficits in evidence extraction than in downstream reasoning, suggesting that the main limitation lies in upstream perception rather than reasoning policy. To address this problem, we propose EvA (Evidence-First Audio), a dual-path architecture t...

📄 AgentSwing: Adaptive Parallel Context Management Routing for Long-Horizon Web Agents
🗓️ Published: 3/29/2026
🔗 http://arxiv.org/abs/2603.27490v1
👥 Authors: Zhaopeng Feng, Liangcai Su, Zhen Zhang, Xinyu Wang, Xiaotian Zhang, Xiaobin Wang, Runnan Fang, Qi Zhang (possible past Tencent (China) affiliation), Baixuan Li, Shihao Cai, Rui Ye, Hui Chen, Jiang Yong, Joey Tianyi Zhou (possible past Tencent (China) affiliation), Chenxiong Qian, Pengjun Xie, Bryan Hooi, Zuozhu Liu, Jingren Zhou
Abstract

As large language models (LLMs) evolve into autonomous agents for long-horizon information-seeking, managing finite context capacity has become a critical bottleneck. Existing context management methods typically commit to a single fixed strategy throughout the entire trajectory. Such static designs may work well in some states, but they cannot adapt as the usefulness and reliability of the accumulated context evolve during long-horizon search. To formalize this challenge, we introduce a probabi...

📄 PeopleSearchBench: A Multi-Dimensional Benchmark for Evaluating AI-Powered People Search Platforms
🗓️ Published: 3/29/2026
🔗 http://arxiv.org/abs/2603.27476v1
👥 Authors: Wei Wang (possible past University Of Oxford affiliation), Tianyu Shi, Shuai Zhang, Boyang Xia, Zequn Xie, Chenyu Zeng, Qi Zhang (possible past Tencent (China) affiliation), Lynn Ai, Yaqi Yu, Kaiming Zhang, Feiyue Tang
Abstract

AI-powered people search platforms are increasingly used in recruiting, sales prospecting, and professional networking, yet no widely accepted benchmark exists for evaluating their performance. We introduce PeopleSearchBench, an open-source benchmark that compares four people search platforms on 119 real-world queries across four use cases: corporate recruiting, B2B sales prospecting, expert search with deterministic answers, and influencer/KOL discovery. A key contribution is Criteria-Grounded ...

📄 Project Imaging-X: A Survey of 1000+ Open-Access Medical Imaging Datasets for Foundation Model Development
🗓️ Published: 3/29/2026
🔗 http://arxiv.org/abs/2603.27460v1
👥 Authors: Zhongying Deng, Cheng Tang, Ziyan Huang, Jiashi Lin, Ying Chen (possible past Baidu (China) affiliation), Junzhi Ning, Chenglong Ma, Jiyao Liu, Wei Li (possible past Peking University affiliation), Yinghao Zhu, Shujian Gao, Yanyan Huang, Sibo Ju, Yanzhou Su, Pengcheng Chen, Wenhao Tang, Tianbin Li, Haoyu Wang (possible past Tencent (China) affiliation), Yuanfeng Ji, Hui Sun, Shaobo Min, Liang Peng, Feilong Tang, Haochen Xue, Rulin Zhou, Chaoyang Zhang, Wenjie Li, Shaohao Rui, Weijie Ma, Xingyue Zhao, Yibin Wang, Kun Yuan, Zhaohui Lu, Shujun Wang, Jinjie Wei, Lihao Liu, Dingkang Yang, Lin Wang, Yulong Li, Haolin Yang, Yiqing Shen, Lequan Yu, Xiaowei Hu, Yun Gu, Yicheng Wu, Benyou Wang (possible past Tencent (China) affiliation), Minghui Zhang, Angelica I. Aviles-Rivero, Qi Gao, Hongming Shan, Xiaoyu Ren, Fang Yan, Hongyu Zhou, Haodong Duan, Maosong Cao, Shanshan Wang, Bin Fu (possible past Tencent (China) affiliation), Xiaomeng Li, Zhi Hou, Chunfeng Song, Lei Bai, Yuan Cheng, Yuandong Pu, Xiang Li, Wenhai Wang, Hao Chen, Jiaxin Zhuang, Songyang Zhang, Huiguang He, Mengzhang Li, Bohan Zhuang, Zhian Bai, Rongshan Yu, Liansheng Wang, Yukun Zhou, Xiaosong Wang (possible past Nvidia (United States) affiliation), Xin Guo, Guanbin Li, Xiangru Lin, Dakai Jin, Mianxin Liu, Wenlong Zhang, Qi Qin, Conghui He (possible past Tsinghua University affiliation), Yuqiang Li, Ye Luo, Nanqing Dong, Jie Xu, Wenqi Shao, Bo Zhang (possible past Tencent (China) affiliation), Qiujuan Yan, Yihao Liu, Jun Ma, Zhi Lu, Yuewen Cao, Zongwei Zhou (possible past Google (United States) affiliation), Jianming Liang, Shixiang Tang, Qi Duan, Dongzhan Zhou, Chen Jiang, Yuyin Zhou, Yanwu Xu (possible past Baidu (China) affiliation), Jiancheng Yang, Shaoting Zhang (possible past Baidu (China) affiliation), Xiaohong Liu (possible past Shanghai Jiao Tong University affiliation), Siqi Luo, Yi Xin, Chaoyu Liu, Haochen Wen, Xin Chen (possible past Tencent (China) affiliation), Alejandro Lozano, Min Woo Sun, Yuhui Zhang, Yue Yao, Xiaoxiao Sun, Serena Yeung-Levy, Xia Li (possible past Meta (United States) affiliation), Jing Ke, Chunhui Zhang, Zongyuan Ge, Ming Hu, Jin Ye, Zhifeng Li (possible past Tencent (China) affiliation), Yirong Chen, Yu Qiao (possible past Shanghai Artificial Intelligence Laboratory affiliation), Junjun He
Abstract

Foundation models have demonstrated remarkable success across diverse domains and tasks, primarily driven by the availability of large-scale, diverse, and high-quality datasets. However, in the field of medical imaging, the curation and assembly of such medical datasets are highly challenging due to the reliance on clinical expertise and strict ethical and privacy constraints, resulting in a scarcity of large-scale unified medical datasets and hindering the development of powerful medical foundation mo...

📄 Multiple-Prediction-Powered Inference
🗓️ Published: 3/28/2026
🔗 http://arxiv.org/abs/2603.27414v1
👥 Authors: Charlie Cowen-Breen, Alekh Agarwal, Stephen Bates, William W. Cohen (possible past Google (United States) affiliation), Jacob Eisenstein (possible past Meta (United States) affiliation), Amir Globerson (possible past Google (United States) affiliation), Adam Fisch (possible past University Of Washington affiliation)
Abstract

Statistical estimation often involves tradeoffs between expensive, high-quality measurements and a variety of lower-quality proxies. We introduce Multiple-Prediction-Powered Inference (MultiPPI): a general framework for constructing statistically efficient estimates by optimally allocating resources across these diverse data sources. This work provides theoretical guarantees about the minimax optimality, finite-sample performance, and asymptotic normality of the MultiPPI estimator. Through exper...
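MultiPPI generalizes prediction-powered inference to multiple proxy sources; the single-proxy PPI point estimate for a mean, which it builds on, is the proxy mean on the large unlabeled set plus a bias correction ("rectifier") estimated on the small labeled set. A minimal sketch of that baseline (not the MultiPPI estimator itself):

```python
import numpy as np

def ppi_mean(y_labeled, yhat_labeled, yhat_unlabeled):
    """Single-proxy prediction-powered estimate of a population mean.
    y_labeled:      gold labels on the small labeled set
    yhat_labeled:   proxy predictions on the same labeled set
    yhat_unlabeled: proxy predictions on the large unlabeled set"""
    rectifier = np.mean(y_labeled) - np.mean(yhat_labeled)  # proxy bias estimate
    return np.mean(yhat_unlabeled) + rectifier
```

If the proxy is systematically biased, the rectifier cancels the bias; if it is accurate, the estimate inherits the low variance of the large unlabeled sample.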

📄 EpochX: Building the Infrastructure for an Emergent Agent Civilization
🗓️ Published: 3/28/2026
🔗 http://arxiv.org/abs/2603.27304v1
👥 Authors: Huacan Wang, Chaofa Yuan, Xialie Zhuang, Tu Hu, Shuo Zhang (possible past National University Of Defense Technology affiliation), Jun Han, Shi Wei, Daiqiang Li, Jingping Liu, Kunyi Wang, Zihan Yin, Zhenheng Tang, Andy Wang, Henry Peng Zou, Philip S. Yu (possible past Tsinghua University affiliation), Sen Hu (possible past Peking University affiliation), Qizhen Lan, Ronghao Chen
Abstract

General-purpose technologies reshape economies less by improving individual tools than by enabling new ways to organize production and coordination. We believe AI agents are approaching a similar inflection point: as foundation models make broad task execution and tool use increasingly accessible, the binding constraint shifts from raw capability to how work is delegated, verified, and rewarded at scale. We introduce EpochX, a credits-native marketplace infrastructure for human-agent production ...

📄 Rethinking Language Model Scaling under Transferable Hypersphere Optimization
🗓️ Published: 3/30/2026
🔗 http://arxiv.org/abs/2603.28743v1
👥 Authors: Liliang Ren, Yang Liu (possible past Tsinghua University affiliation), Yelong Shen (possible past Tencent (China) affiliation), Weizhu Chen
Abstract

Scaling laws for large language models depend critically on the optimizer and parameterization. Existing hyperparameter transfer laws are mainly developed for first-order optimizers, and they do not structurally prevent training instability at scale. Recent hypersphere optimization methods constrain weight matrices to a fixed-norm hypersphere, offering a promising alternative for more stable scaling. We introduce HyperP (Hypersphere Parameterization), the first framework for transferring optimal...
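The hypersphere constraint these methods build on keeps each weight matrix at a fixed norm, typically by renormalizing after every optimizer step so instability from norm growth cannot occur. A minimal sketch (the plain gradient step and the Frobenius-norm radius are illustrative assumptions, not HyperP's transfer law):

```python
import numpy as np

def hypersphere_step(W, grad, lr, radius):
    """Take a gradient step, then project the weight matrix back onto
    the hypersphere of fixed Frobenius norm `radius`."""
    W = W - lr * grad
    return W * (radius / np.linalg.norm(W))  # retraction to the sphere
```

Because the weight norm is pinned, the effective step size depends only on the learning rate and the update's angle to `W`, which is what makes hyperparameters easier to transfer across scales.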

📄 See it to Place it: Evolving Macro Placements with Vision-Language Models
🗓️ Published: 3/30/2026
🔗 http://arxiv.org/abs/2603.28733v1
👥 Authors: Ikechukwu Uchendu, Swati Goel, Karly Hou, Ebrahim Songhori, Kuang-Huei Lee (possible past Google (United States) affiliation), Joe Wenjie Jiang (possible past Google (United States) affiliation), Vijay Janapa Reddi, Vincent Zhuang
Abstract

We propose using Vision-Language Models (VLMs) for macro placement in chip floorplanning, a complex optimization task that has recently shown promising advancements through machine learning methods. Because human designers rely heavily on spatial reasoning to arrange components on the chip canvas, we hypothesize that VLMs with strong visual reasoning abilities can effectively complement existing learning-based approaches. We introduce VeoPlace (Visual Evolutionary Optimization Placement), a nove...

📄 Taming the Instability: A Robust Second-Order Optimizer for Federated Learning over Non-IID Data
🗓️ Published: 3/30/2026
🔗 http://arxiv.org/abs/2603.28316v1
👥 Authors: Yuanqiao Zhang, Tiantian He, Yuan Gao (possible past Tencent (China) affiliation), Yixin Wang, Yew-Soon Ong, Maoguo Gong, A. K. Qin, Hui Li (possible past Baidu (China) affiliation)
Abstract

In this paper, we present Federated Robust Curvature Optimization (FedRCO), a novel second-order optimization framework designed to improve convergence speed and reduce communication cost in Federated Learning systems under statistical heterogeneity. Existing second-order optimization methods are often computationally expensive and numerically unstable in distributed settings. In contrast, FedRCO addresses these challenges by integrating an efficient approximate curvature optimizer with a provab...

📄 MuonEq: Balancing Before Orthogonalization with Lightweight Equilibration
🗓️ Published: 3/30/2026
🔗 http://arxiv.org/abs/2603.28254v1
👥 Authors: Da Chang, Qiankun Shi, Lvgang Zhang, Yu Li (possible past Tencent (China) affiliation), Ruijie Zhang, Yao Lu (possible past Google (United States) affiliation), Yongxiang Liu, Ganzhao Yuan
Abstract

Orthogonalized-update optimizers such as Muon improve training of matrix-valued parameters, but existing extensions mostly act either after orthogonalization by rescaling updates or before it with heavier whitening-based preconditioners. We introduce MuonEq, a lightweight family of pre-orthogonalization equilibration schemes for Muon in three forms: two-sided row/column normalization (RC), row normalization (R), and column normalization (C). These variants rebalance the momentum matrix before...
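The pre-orthogonalization equilibration can be sketched as: normalize the momentum matrix's rows and/or columns, then orthogonalize the result. Here SVD stands in for Muon's cheaper Newton-Schulz iteration, and the single-pass normalization scheme is an illustrative assumption, not necessarily MuonEq's exact recipe:

```python
import numpy as np

def equilibrated_muon_update(M, mode="RC", eps=1e-8):
    """Rebalance momentum matrix M before orthogonalization.
    mode: 'R' = row normalization, 'C' = column, 'RC' = both (one pass each)."""
    if "R" in mode:
        M = M / (np.linalg.norm(M, axis=1, keepdims=True) + eps)
    if "C" in mode:
        M = M / (np.linalg.norm(M, axis=0, keepdims=True) + eps)
    U, _, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ Vt  # orthogonalized update, as in Muon
```

Equilibration keeps a few dominant rows or columns of the momentum from dictating the singular subspaces that orthogonalization then amplifies uniformly.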

📄 Neural Federated Learning for Livestock Growth Prediction
🗓️ Published: 3/30/2026
🔗 http://arxiv.org/abs/2603.28117v1
👥 Authors: Shoujin Wang, Mingze Ni, Wei Liu (possible past Tsinghua University affiliation), Victor W. Chu, Kenny Sabir, Bryan Zheng, Ayush Kanwal, Roy Jing Yang, Fang Chen (possible past Tencent (China) affiliation)
Abstract

Livestock growth prediction is essential for optimising farm management and improving the efficiency and sustainability of livestock production, yet it remains underexplored due to limited large-scale datasets and privacy concerns surrounding farm-level data. Existing biophysical models rely on fixed formulations, while most machine learning approaches are trained on small, isolated datasets, limiting their robustness and generalisability. To address these challenges, we propose LivestockFL, the...

📄 Scaling Atomistic Protein Binder Design with Generative Pretraining and Test-Time Compute
🗓️ Published: 3/30/2026
🔗 http://arxiv.org/abs/2603.27950v1
👥 Authors: Kieran Didi, Zuobai Zhang, Guoqing Zhou, Danny Reidenbach, Zhonglin Cao, Sooyoung Cha, Tomas Geffner, Christian Dallago (possible past Nvidia (United States) affiliation), Jian Tang, Michael M. Bronstein, Martin Steinegger, Emine Kucukbenli, Arash Vahdat (possible past Nvidia (United States) affiliation), Karsten Kreis (possible past Nvidia (United States) affiliation)
Abstract

Protein interaction modeling is central to protein design, which has been transformed by machine learning with applications in drug discovery and beyond. In this landscape, structure-based de novo binder design is cast as either conditional generative modeling or sequence optimization via structure predictors ("hallucination"). We argue that this is a false dichotomy and propose Proteina-Complexa, a novel fully atomistic binder generation method unifying both paradigms. We extend recent flow-bas...

📄 KAT-Coder-V2 Technical Report
🗓️ Published: 3/29/2026
🔗 http://arxiv.org/abs/2603.27703v1
👥 Authors: Fengxiang Li, Han Zhang (possible past Tsinghua University affiliation), Haoyang Huang, Jinghui Wang, Jinhua Hao, Kun Yuan, Mengtong Li, Minglei Zhang, Pengcheng Xu, Wenhao Zhuang, Yizhen Shao, Zongxian Feng, Can Tang, Chao Wang (possible past Google (United States) affiliation), Chengxiao Tong, Fan Yang (possible past Tencent (China) affiliation), Gang Xiong, Haixuan Gao, Han Gao (possible past Tencent (China) affiliation), Hao Wang (possible past Tsinghua University affiliation), Haochen Liu, Hongliang Sun, Jiabao Li, Jingwen Chang, Jun Du, Junyi Peng, Leizhen Cui, Meimei Jing, Mingqi Wu, Shangpeng Yan, Shaotong Qi, Suzhe Xu, Wenxuan Zhao, Xianda Sun, Xuan Xie, Yanbo Wang, Yao Xia, Yinghan Cui, Yingpeng Chen, Yong Wang (possible past Baidu (China) affiliation), Yuze Shi, Zhiwei Shen, Ziyu Wang (possible past University Of Oxford affiliation), Ming Sun (possible past Baidu (China) affiliation), Lin Ye, Bin Chen
Abstract

We present KAT-Coder-V2, an agentic coding model developed by the KwaiKAT team at Kuaishou. KAT-Coder-V2 adopts a "Specialize-then-Unify" paradigm that decomposes agentic coding into five expert domains - SWE, WebCoding, Terminal, WebSearch, and General - each undergoing independent supervised fine-tuning and reinforcement learning, before being consolidated into a single model via on-policy distillation. We develop KwaiEnv, a modular infrastructure sustaining tens of thousands of concurrent san...

📄 ScoutAttention: Efficient KV Cache Offloading via Layer-Ahead CPU Pre-computation for LLM Inference
🗓️ Published: 3/28/2026
🔗 http://arxiv.org/abs/2603.27138v1
👥 Authors: Qiuyang Zhang, Kai Zhou, Ding Tang, Kai Lu, Cheng Li (possible past Google (United States) affiliation), Zhenyu Yang, Peng Xu (possible past Google (United States) affiliation), Jiguang Wan
Abstract

Large language models encounter critical GPU memory capacity constraints during long-context inference, where KV cache memory consumption severely limits decode batch sizes. While existing research has explored offloading KV cache to DRAM, these approaches either demand frequent GPU-CPU data transfers or impose extensive CPU computation requirements, resulting in poor GPU utilization as the system waits for I/O operations or CPU processing to complete. We propose ScoutAttention, a novel KV cac...

*Notable papers are those with at least two authors from a "big" AI/ML lab.