Data Scientist
Description
- As a Machine Learning Engineer on our core Software AIML team, you will be at the forefront of designing and developing cutting-edge GenAI applications.
- You will work closely with business stakeholders and data engineers and communicate AI recommendations to senior management.
Key Responsibilities:
- Engage deeply with business teams to identify opportunities and translate their needs into innovative and practical AIML solutions
- Design, build, and deploy state-of-the-art AIML models to solve complex business problems
- Understand key performance levers and metrics to surface operational issues in AIML solutions and drive improvement
- Foster close collaboration with engineers and infrastructure partners to implement robust and scalable solutions
- Communicate the results and insights effectively to partners and senior leaders, providing clear and actionable recommendations
- Stay current with the latest trends, technologies, and best practices in AI, GenAI, and data engineering
- Regularly research and present new ideas to improve the team's technical capabilities
Minimum Qualifications:
- 10+ years of industry experience with a BS, or 5+ years with an MS/PhD, in Computer Science, Statistics, Applied Math, or a related field
Preferred Qualifications:
- Expertise in AIML modeling, Python, AIML infrastructure, model deployment, and MLOps
- Experience working in Supply Chain, Operations, or a related field
- End-to-end GenAI application experience
- Knowledge of RAG, Prompt Engineering, LoRA, and other GenAI techniques (see the sketch after this list)
- Ability to operate independently and lead without authority
- Superb communication and interpersonal skills
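For illustration only, below is a minimal sketch of the retrieval-augmented generation (RAG) and prompt-engineering pattern named above; the toy corpus, the term-overlap retriever, and the generate() stub are placeholder assumptions, not a specific production design.

# Minimal RAG sketch: retrieve relevant context, build a grounded prompt, generate.
# The corpus, scoring function, and generate() stub are illustrative placeholders.
from collections import Counter

CORPUS = [
    "Our returns policy allows refunds within 30 days of purchase.",
    "Orders ship within 2 business days from the nearest warehouse.",
    "Premium members receive free expedited shipping on all orders.",
]

def score(query: str, doc: str) -> int:
    # Crude term-overlap relevance score, standing in for a vector-store lookup.
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    return sum((q & d).values())

def retrieve(query: str, k: int = 2) -> list[str]:
    # Return the k most relevant documents for the query.
    return sorted(CORPUS, key=lambda doc: score(query, doc), reverse=True)[:k]

def build_prompt(query: str, context: list[str]) -> str:
    # Prompt-engineering step: ground the model in the retrieved context.
    joined = "\n".join(f"- {c}" for c in context)
    return f"Answer using only the context below.\nContext:\n{joined}\n\nQuestion: {query}\nAnswer:"

def generate(prompt: str) -> str:
    # Placeholder for an LLM call (hosted or open-source model).
    return f"[LLM response to a prompt of {len(prompt)} characters]"

if __name__ == "__main__":
    question = "How long do I have to return an order?"
    print(generate(build_prompt(question, retrieve(question))))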
AI SW Engineer Skills/Experience:
- Experience in AI system development across the foundation model lifecycle: training, retrieval-augmented systems, and model evaluation.
- Proficient in large language model (LLM) architectures, creating targeted instruction datasets, and implementing quantization and inference optimizations.
- Capable of covering problem definition, data acquisition, and full-cycle deployment, using tools such as MLflow and cloud platforms such as AWS, Google Cloud, and Azure.
- Experienced with proprietary models such as GPT-4 and Claude and with open-source models such as Llama and Mistral; able to deliver high-performing, scalable, and cost-efficient AI solutions.
- Skilled with NLP libraries and models such as NLTK, spaCy, BERT, GPT-3, and Hugging Face Transformers to build sophisticated natural language processing solutions.
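As a brief illustration of the NLP tooling listed above, the sketch below pairs spaCy named-entity recognition with a default Hugging Face Transformers sentiment pipeline; the example text is an assumption, and it presumes the en_core_web_sm model and a default sentiment checkpoint are available.

# NLP sketch combining spaCy (NER) and Hugging Face Transformers (sentiment).
# Assumes spaCy's en_core_web_sm model is installed; the Transformers pipeline
# downloads its default sentiment checkpoint on first use.
import spacy
from transformers import pipeline

text = "Apple is exploring new GenAI features for the next iPhone release."

# Named-entity recognition with spaCy.
nlp = spacy.load("en_core_web_sm")
entities = [(ent.text, ent.label_) for ent in nlp(text).ents]

# Sentiment analysis with the default Transformers pipeline.
sentiment = pipeline("sentiment-analysis")(text)[0]

print(entities)   # e.g. [('Apple', 'ORG'), ('iPhone', 'PRODUCT')]
print(sentiment)  # e.g. {'label': 'POSITIVE', 'score': ...}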
TECHNICAL SKILLS:
Methodologies: Spiral, Agile, Waterfall, Lean
Programming Languages: C++, Python, Shell, SQL, R
IDE Tools: PyCharm, RStudio, Visual Studio Code, Jupyter Notebook, Google Colab, Navicat
ML Frameworks: Transformers, Scikit-Learn, Keras, TensorFlow, PyTorch, ONNX, NLTK, OpenAI, LangChain, LlamaIndex, Kore.ai
DL Architectures: LLM, ANN, CNN, R-CNN, RNN, GRU, LSTM, Transformers, Attention Mechanism, Tokenizers, BERT, T5, Sentence Transformers, Foundation models
Packages: Pandas, NumPy, Spark, Matplotlib, SciPy
Cloud Technologies: AWS (EC2, S3, RDS, ECS, Lambda), Azure (VM, Functions, ACR, AKS), DigitalOcean, GCP, Paperspace
Databases: MySQL, PostgreSQL, MongoDB, ChromaDB, Pinecone, SQLite
Web Frameworks and Web Servers/Deployment:
FastAPI, Flask, Nginx, Gunicorn, Uvicorn, Docker, Kubernetes (see the serving sketch after this skills list)
Miscellaneous Tools and Technologies:
JIRA, AutoGluon, GitHub Actions, GitHub Issues, Datadog, MS Power BI, Postman, Locust, Streamlit, CUDA, cuDNN, TensorRT
Version Control: Git, GitHub, DVC (Data Version Control)
Operating Systems: Windows, Linux, Mac
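As referenced in the Web Frameworks and Web Servers/Deployment entry above, below is a minimal FastAPI model-serving sketch; the endpoint name, request/response schema, and placeholder scoring logic are illustrative assumptions rather than a specific deployment.

# Minimal FastAPI inference service sketch (run with: uvicorn app:app).
# The scoring logic is a placeholder; a real service would call a loaded model.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="Demo inference service")

class PredictRequest(BaseModel):
    text: str

class PredictResponse(BaseModel):
    label: str
    score: float

@app.post("/predict", response_model=PredictResponse)
def predict(req: PredictRequest) -> PredictResponse:
    # Placeholder scoring: a real handler would run the deployed model here.
    score = min(len(req.text) / 100.0, 1.0)
    return PredictResponse(label="positive" if score > 0.5 else "negative", score=score)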