Principal Enterprise AI Engineer
Date: Jan 13, 2025
Location: Irvine, CA, US
Company: Skyworks
If you are looking for a challenging and exciting career in the world of technology, then look no further. Skyworks is an innovator of high-performance analog semiconductors whose solutions are powering the wireless networking revolution. Through our broad technology expertise and one of the most extensive product portfolios in the industry, we are Connecting Everyone and Everything, All the Time.
At Skyworks, you will find a fast-paced environment with a strong focus on global collaboration, minimal layers of management, and the freedom to make meaningful contributions in a setting that encourages creative thinking. We value open communication, mutual trust, and respect. We are excited about the opportunity to work with you and glad you want to be part of a team of talented individuals who together are changing the way the world communicates.
Requisition ID: 74855
Position Summary
The Enterprise AI Engineer will be responsible for designing and implementing solutions that meet the needs of various business areas across Skyworks’ enterprise in and around the machine learning space. The incumbent will work under senior architects in the Enterprise Architecture group and with different departments to determine how best to implement new technologies and improve existing ones, with a focus on machine learning operations and cloud platforms. Projects will be on modern platforms spanning ML/AI and data governance.
Detailed Description
Responsibilities will include, but not be limited to:
- Stakeholder Collaboration: Partner with business stakeholders, data scientists, software engineers, and cross-functional teams to gather requirements and align machine learning and GenAI projects with strategic business objectives. Facilitate clear communication to ensure that developed models and solutions address the specific needs and expectations of all parties. Provide technical guidance to non-technical stakeholders to help them understand the potential and constraints of AI solutions.
- System Integration and Interoperability: Develop solutions that seamlessly integrate with existing enterprise systems, databases, and APIs. Collaborate with internal and external partners to ensure smooth data flow and system interoperability, providing consistent and accurate inputs for machine learning and GenAI models. Implement APIs and microservices to make AI functionalities accessible across various platforms and applications.
- Model Development and Optimization: Design, develop, and fine-tune traditional machine learning models and large language models (LLMs) such as GPT and BERT, along with other GenAI frameworks. Leverage transfer learning and domain adaptation techniques to tailor pre-trained models to specific business use cases, ensuring optimal performance and relevance.
- Advanced NLP and GenAI Techniques: Apply state-of-the-art NLP techniques including text preprocessing, tokenization, named entity recognition (NER), sentiment analysis, text classification, and language generation. Fine-tune pre-trained models for specialized NLP tasks and applications to meet business needs.
- Traditional Machine Learning Techniques: Implement and optimize traditional machine learning techniques such as supervised and unsupervised learning, regression, classification, clustering, and ensemble methods. Conduct thorough model evaluation and hyperparameter tuning to achieve high performance and accuracy.
- Risk Management and Mitigation: Identify and address potential risks and challenges related to machine learning and GenAI, including data privacy, security vulnerabilities, and ethical considerations. Develop and implement strategies to mitigate these risks, ensuring compliance, data integrity, and model robustness. Regularly conduct audits and apply bias detection and mitigation techniques to ensure fair and unbiased model outcomes.
- Continuous Improvement: Continuously assess and enhance machine learning and GenAI models to improve accuracy, efficiency, and scalability. Stay current with the latest advancements in machine learning, NLP, and GenAI, and recommend new tools, frameworks, or methodologies to enhance organizational capabilities. Engage in research and development (R&D) activities to explore innovative AI techniques and their potential business applications.
- Data Pipeline Development: Design and maintain data pipelines for sourcing and processing datasets required for training traditional ML, NLP, and GenAI models. Ensure data quality and consistency through rigorous data cleaning, transformation, and augmentation processes. Optimize data flows to meet model training requirements efficiently.
- Model Deployment: Deploy traditional machine learning, NLP, and GenAI models into production environments using platforms such as Azure Machine Learning Studio, Azure Kubernetes Service (AKS), and on-prem Kubernetes. Implement scalable model serving solutions to handle high-volume inference requests effectively.
- Inference Pipeline Design: Architect and optimize inference pipelines, including data storage, data movement, compute instances, and networking, to support both traditional ML and GenAI models. Ensure low latency and high throughput to meet the demands of real-time applications.
- Research and Innovation: Stay abreast of the latest research and advancements in machine learning, NLP, and GenAI. Experiment with cutting-edge models and techniques, contributing to open-source projects and academic publications. Foster a culture of innovation within the team by organizing hackathons, workshops, and knowledge-sharing sessions.
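To make the NLP responsibilities above concrete, the sketch below walks through tokenization, a naive entity heuristic, and lexicon-based sentiment scoring in plain Python. This is purely illustrative: production work of the kind described here would use libraries such as Hugging Face Transformers or spaCy, and the tokenizer, lexicon, and capitalization-based entity rule are simplifying assumptions, not part of any actual Skyworks stack.

```python
import re

# Tiny illustrative sentiment lexicon (an assumption for this sketch,
# not a real linguistic resource).
SENTIMENT_LEXICON = {"great": 1, "excellent": 1, "good": 1,
                     "poor": -1, "bad": -1, "slow": -1}

def tokenize(text: str) -> list[str]:
    """Lowercase word tokenizer: a stand-in for a real subword tokenizer."""
    return re.findall(r"[a-z0-9']+", text.lower())

def naive_entities(text: str) -> list[str]:
    """Heuristic NER: treat capitalized words that do not start the
    sentence as candidate named entities."""
    entities = []
    for i, word in enumerate(text.split()):
        cleaned = word.strip(".,!?")
        if i > 0 and cleaned[:1].isupper():
            entities.append(cleaned)
    return entities

def sentiment(tokens: list[str]) -> str:
    """Sum lexicon scores over tokens and map the total to a label."""
    score = sum(SENTIMENT_LEXICON.get(t, 0) for t in tokens)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

if __name__ == "__main__":
    doc = "The new Skyworks amplifier shows excellent yield"
    toks = tokenize(doc)
    print(toks)                 # lowercase word tokens
    print(naive_entities(doc))  # ["Skyworks"]
    print(sentiment(toks))      # "positive" (via "excellent")
```

Real pipelines would replace each stage with a learned component (subword tokenizers, transformer-based NER, fine-tuned classifiers), but the stage boundaries are the same.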
Requirements
- BS degree and 5+ years of experience.
- Proficient in Python, with knowledge of C# and Java, and familiarity with scripting languages and tools such as Bash and PowerShell.
- Strong knowledge of machine learning concepts, frameworks, and technologies, such as TensorFlow, PyTorch, Scikit-learn, SciPy, NumPy, Pandas, Hugging Face Transformers, and the OpenAI API.
- Proficiency in advanced NLP techniques including text preprocessing, tokenization, named entity recognition (NER), sentiment analysis, text classification, and language generation.
- Experience fine-tuning large language models (LLMs) such as GPT and BERT, along with other GenAI frameworks.
- Experience with traditional machine learning techniques such as supervised and unsupervised learning, regression, classification, clustering, and ensemble methods.
- Experience with MLOps tools and practices, such as CI/CD, Docker, Kubernetes, and MLflow.
- Experience with SQL databases such as MSSQL and PostgreSQL, as well as NoSQL databases.
- Proficiency in designing and deploying machine learning models in cloud environments (e.g., Azure/Azure ML, AWS, GCP, AKS, and Kubernetes).
- Demonstrated expertise in architecting scalable and secure machine learning infrastructure, including data pipelines, storage systems, and model deployment frameworks.
- Excellent communication and collaboration skills, with the ability to effectively engage with stakeholders at various levels of the organization.
- Ability to manage multiple activities simultaneously.
- Ability to use a wide degree of creativity and latitude to think differently, challenge conventional wisdom, and drive new best practices.
- Ability to work effectively with international teams.
- Commitment to promoting ethical AI practices and ensuring transparency, accountability, and fairness in AI model development and deployment.
- Strong problem-solving skills and the ability to adapt to rapidly changing technological landscapes.
- Participation in research and development (R&D) activities to explore innovative AI techniques and their potential applications.
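As a toy illustration of the supervised-learning and model-evaluation skills listed above, the sketch below trains and evaluates a nearest-centroid classifier in plain Python. Real work at this level would use Scikit-learn, TensorFlow, or PyTorch with proper train/test splits and hyperparameter tuning; the 2-D synthetic data and the classifier choice here are assumptions made for the example.

```python
from collections import defaultdict

def fit_centroids(X, y):
    """Compute per-class feature means (the nearest-centroid 'model')."""
    sums, counts = defaultdict(lambda: None), defaultdict(int)
    for features, label in zip(X, y):
        if sums[label] is None:
            sums[label] = list(features)
        else:
            sums[label] = [s + f for s, f in zip(sums[label], features)]
        counts[label] += 1
    return {label: [s / counts[label] for s in sums[label]] for label in sums}

def predict(centroids, features):
    """Assign the class whose centroid is nearest in squared Euclidean distance."""
    def dist2(centroid):
        return sum((a - b) ** 2 for a, b in zip(centroid, features))
    return min(centroids, key=lambda label: dist2(centroids[label]))

def accuracy(centroids, X, y):
    """Fraction of examples whose predicted label matches the true label."""
    correct = sum(predict(centroids, f) == label for f, label in zip(X, y))
    return correct / len(y)

if __name__ == "__main__":
    # Synthetic 2-D data: class "a" clustered near (0, 0), class "b" near (5, 5).
    X_train = [(0, 0), (1, 0), (0, 1), (5, 5), (6, 5), (5, 6)]
    y_train = ["a", "a", "a", "b", "b", "b"]
    model = fit_centroids(X_train, y_train)
    print(predict(model, (0.5, 0.5)))          # "a"
    print(accuracy(model, X_train, y_train))   # 1.0 on this toy data
```

The same fit/predict/evaluate loop generalizes directly to the frameworks named in the requirements; only the model class and the data scale change.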
The typical base pay range for this role across the U.S. is currently USD $89,100 - $172,100 per year. Starting base pay will depend on relevant experience and skills, training and education, business needs, market demands, the ultimate job duties and requirements, and work location. Skyworks has different base pay ranges for different work locations in the U.S. Benefits include access to healthcare benefits (including a premium-free medical plan option), a 401(k) plan and company match, an employee stock purchase plan, paid time off (including vacation, sick/wellness, parental leave), among others. Employees are eligible to participate in an incentive plan, and certain roles are also eligible for additional awards, including recognition and stock. These incentives and awards are based on individual and/or company performance.
Skyworks is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, age, sex, sexual orientation, gender identity, national origin, disability, protected veteran status, or any other characteristic protected by law. Skyworks strives to create an accessible workplace; if you need an accommodation due to a disability, please contact us at accommodations@skyworksinc.com.