About the Team:
Data AML is ByteDance's machine learning mid-platform. It provides training and inference systems for recommendation, advertising, CV, speech, and NLP workloads across businesses such as Douyin, Jinri Toutiao, and Xigua Video. It supplies machine learning computing power to internal business units, researches general and innovative algorithms for problems arising in these businesses, and offers core machine learning and recommender-system capabilities to external enterprise customers through Volcano Engine. AML also conducts cutting-edge research in fields such as AI for Science and scientific computing.

Responsibilities:
1) Optimize resource efficiency in distributed orchestration and scheduling, increasing the scale of business/models supported per unit of computing power:
a) Use and extend distributed scheduling frameworks in the Kubernetes/Godel ecosystem, select the right framework for each business scenario, and tune scheduling strategies for cluster utilization and balance based on each scenario's characteristics;
b) Integrate and extend autoscaling and automatic parallelization for diverse models and business workloads; use load modeling and analysis to automatically optimize per-model resource requests, improving resource utilization at scale and approaching a global optimum;
c) Own priority-based preemption/eviction across services (see the sketch after this list); own resource borrowing and mixed deployment across clusters and resource types; own scheduling and load adaptation across multiple data centers, regions, and clouds.
2) Build the training system architecture for next-generation ultra-large, ultra-deep recommendation models:
a) Build a flexible, robust distributed training runtime around ultra-large-scale embeddings and ultra-large-scale synchronous GPU training;
b) Design and optimize distributed computing APIs and runtimes for forward-looking research paradigms in recommendation/advertising models (e.g., RL, fine-tuning, distillation);
c) Work with the platform to improve the diagnosability and usability of distributed training.
3) Build the online orchestration architecture for the next-generation recommender system:
a) Build a robust, stable distributed model inference architecture around online training with ultra-large-scale embeddings;
b) Improve the usability of the recommendation/advertising model's online architecture and the MLOps process by integrating the business's research and experimentation workflows.
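As a concrete illustration of the priority-based preemption in 1c, here is a minimal sketch on vanilla Kubernetes using the official Python client. The class name, priority value, and pod spec are illustrative assumptions; Godel's internals are not public and this is not ByteDance's actual configuration.

```python
# Minimal sketch of priority-based preemption on vanilla Kubernetes.
# Names, priority values, and resource requests are illustrative
# assumptions; they do not reflect the Godel scheduler's internals.
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() inside a cluster

# A low-priority class for preemptible batch/training workloads.
batch_low = client.V1PriorityClass(
    metadata=client.V1ObjectMeta(name="batch-low"),  # hypothetical name
    value=1000,  # pods in lower-value classes are preemption candidates
    preemption_policy="PreemptLowerPriority",
    global_default=False,
    description="Preemptible batch jobs; evicted when online services need capacity.",
)
client.SchedulingV1Api().create_priority_class(body=batch_low)

# A pod scheduled under that class: if a higher-priority online service
# cannot fit on the cluster, the scheduler may evict this pod to reclaim
# its requested resources.
pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="offline-trainer"),
    spec=client.V1PodSpec(
        priority_class_name="batch-low",
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="trainer",
                image="python:3.11-slim",
                command=["python", "-c", "import time; time.sleep(3600)"],
                resources=client.V1ResourceRequirements(
                    requests={"cpu": "4", "memory": "8Gi"},
                ),
            )
        ],
    ),
)
client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```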
Minimum Qualifications:
- Bachelor's degree or above in Computer Science or a similar field of study.
- At least 5 years of experience and proficiency in at least one of Go or Python in a Linux environment, with excellent hands-on coding skills.
- Familiarity with open-source distributed scheduling frameworks such as Kubernetes (K8s), YARN (and the Hadoop-ecosystem big data frameworks Flink and MapReduce), Mesos, or Celery, plus substantial hands-on development experience with machine learning systems.
- A solid grasp of distributed systems principles, with experience in the design, development, and maintenance of large-scale distributed systems.
- Excellent analytical skills; able to abstract and decompose business logic sensibly.
- A strong sense of responsibility, good learning ability, communication skills, and self-motivation; able to respond and act quickly.
- Good documentation habits; writes and updates work processes and technical documentation promptly as required.

Preferred Qualifications:
- Familiarity with at least one mainstream machine learning framework (PyTorch/TensorFlow).
- Experience in one of the following areas: AI infrastructure, HW/SW co-design, high-performance computing, or ML hardware architecture (GPUs, accelerators, networking).
- Experience using or designing open-source training/serving orchestration systems such as veRL, vLLM, Ray, or TFX; development experience in at least one of them is preferred (a Ray sketch follows this list).
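For context on the orchestration style the last bullet references, here is a minimal sketch in standard open-source Ray. The shard count and per-task work are placeholder assumptions; this illustrates the framework's task-parallel API, not the team's internal stack.

```python
# Minimal Ray sketch: fan out data-parallel work across a cluster.
# Standard open-source Ray usage only; the function and shard count
# below are placeholder assumptions, not an internal API.
import ray

ray.init()  # connects to an existing cluster, or starts a local one

@ray.remote(num_cpus=1)
def score_shard(shard_id: int) -> float:
    """Placeholder per-shard work, e.g. evaluating one data shard."""
    return shard_id * 0.5  # stand-in for a real model evaluation

# Launch tasks in parallel and gather results; Ray schedules them
# across whatever CPUs/GPUs the cluster exposes.
futures = [score_shard.remote(i) for i in range(8)]
print(ray.get(futures))

ray.shutdown()
```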