<p>About the Team</p> <p>The TikTok Model Infrastructure team is the core engine powering the world's most engaged "For You" feed. We focus on the engineering efficiency and architectural evolution of recommendation models at an unprecedented scale. As we lead the industry's shift toward LLM2Rec and Large Recommendation Models (LRM), our mission is to build ultra-high-performance infrastructure that bridges the gap between massive data scale and extreme algorithmic complexity. We tackle the industry's most demanding "frontier" challenges: managing Petabyte-scale distributed embedding states, optimizing thousand-node GPU clusters, and perfecting real-time Sparse/Dense streaming. Our work ensures that models with hundreds of billions of dense parameters (on par with the world's largest LLMs) can operate with millisecond-level latency.</p> <p>We are seeking Software Engineering Interns to join the Model Infra team to redefine the performance boundaries of recommendation systems. In this role, you will focus on the efficiency of the entire model lifecycle. You will work on the convergence of generative AI and recommendation architecture, optimizing everything from the raw throughput of multi-billion-parameter dense blocks to the efficient retrieval of sparse features across massive distributed memory fabrics.</p> <p>As a project intern, you will have the opportunity to engage in impactful short-term projects that provide a glimpse of professional, real-world experience. You will gain practical skills through on-the-job learning in a fast-paced work environment and develop a deeper understanding of your career interests.</p> <p>Applications are reviewed on a rolling basis; we encourage you to apply early.</p> <p>Responsibilities</p> <ul> <li>Engineering Efficiency at Scale: Drive the optimization of training and inference pipelines to maximize hardware utilization (MFU/HFU) for models featuring hundreds of billions of dense parameters. 
</li><li>LLM2Rec Infrastructure: Architect specialized systems to support the integration of LLMs into the recommendation stack, focusing on memory-efficient attention mechanisms and advanced KV cache management for long-sequence user modeling. </li><li>Massive Sparse &amp; Dense Streaming: Build and optimize high-concurrency engines for Petabyte-scale streaming training, handling continuous parameter updates and high-frequency data ingestion without compromising stability. </li><li>Hardware-Aware Co-Design: Work closely with researchers to design next-generation recommendation architectures optimized for modern GPU/NPU interconnects, ensuring high-bandwidth utilization across the cluster. </li><li>Distributed State Management: Innovate on how we store and synchronize massive model states across heterogeneous memory hierarchies (HBM, DDR, and NVMe). </li></ul> <p>Minimum Qualification(s)</p> <ul> <li>Currently pursuing an Undergraduate or Master's degree in Software Development, Computer Science, Computer Engineering, or a related technical discipline. </li><li>Strong programming skills in C++ and Python. </li><li>Solid understanding of Computer Architecture and the GPU software stack (CUDA, Triton, or NCCL). </li><li>Experience with deep learning frameworks (e.g., PyTorch, TensorFlow) and a desire to "look under the hood" of model execution runtimes. </li><li>A strong interest in solving system-level bottlenecks in large-scale distributed environments. </li></ul> <p>Preferred Qualification(s)</p> <ul> <li>Experience with Transformer-based architectures and 3D parallelism (TP/PP/DP). </li><li>Deep understanding of the torch.compile stack, including TorchDynamo (graph acquisition) and TorchInductor (lowering). </li><li>Hands-on experience writing high-performance kernels or optimizing collective communication (e.g., customizing NCCL/UCX). </li><li>Familiarity with RDMA networking, high-performance storage, or specialized Parameter Server architectures. 
</li><li>Success in programming competitions (ACM-ICPC) or contributions to prominent open-source AI infrastructure or high-performance computing projects. </li></ul>