Job Description
Position Overview:
Help transition cutting-edge multimodal large language models (MLLMs) or Vision-Language-Action (VLA) models from research prototypes to our remote digital-center system and edge devices, enabling low-latency, high-reliability visual understanding to monitor and control field equipment and support real-time decision-making.
Fine-tune, deploy, and optimize Multimodal Large Language Models (MLLMs) or Vision-Language-Action (VLA) models.
Main Responsibilities:
Deploy large language models on the digital-center system to monitor the real-time operational status of on-site equipment and provide instant feedback.
Develop, deploy, and optimize Vision-Language-Action (VLA) or Multimodal Large Language Model (MLLM) applications.
Ready to Apply?
Take the next step in your AI career. Submit your application to TOMRA today.