NexoGlobal Inc
Data Scientist
Auburn Hills, MI
Mid-Senior level
Role: Data Scientist
Location: Auburn Hills, MI (Remote 50%)
W2 only
Job Description
Purpose of Role/Organizational Unit:
The Data Scientist will be a member of the Client's Enterprise Data and Advanced Analytics team, working with cross-functional teams to leverage data and advanced analytical techniques and deliver data-driven solutions across the organization. This work entails problem solving in a variety of domains, including regression and classification problems, natural language processing tasks, business process optimization, and research and analysis of new technologies and business methodologies.
Functions/Responsibilities Performed
The following functions are the responsibility of the Data Scientist. This individual may be asked to perform these functions independently, per agreement with the Enterprise Data Science Manager.
- Collaborate with business stakeholders to understand their challenges and requirements
- Translate business problems into analytical frameworks and identify opportunities to address complex problems
- Identify valuable data sources and use analytical and visualization tools to explore data, identify trends, and discover patterns
- Apply statistical analysis and data mining techniques to analyze large and complex datasets
- Enhance data collection processes with preprocessing steps; transform raw data and cleanse structured and unstructured data
- Automate data collection processes using pipelines
- Develop and implement models, algorithms, and statistical methodologies to derive actionable insights and make accurate predictions
- Build interactive dashboards and data visualization applications to present model results to stakeholders
- Document best practices and quality standards to be adhered to during development of data science solutions
- Conduct reviews and provide feedback on data science applications
- Monitor and manage production solutions. Optimize and fine-tune models for performance, accuracy, and scalability
- Collaborate with cross-functional teams to deploy models into production environments
- Be a proactive team player and an effective communicator, with the ability to work independently
- Be eager to learn new data domains
Skills/Qualifications Required
- 8+ years of experience in data science and analytics domains
- Understanding of machine learning and operations research
- Proficiency in query languages (SQL, Hive) and scripting languages
- Knowledge of R; familiarity with Scala and Java is an asset
- Strong expertise in Python programming
- Experience with business intelligence tools (e.g., Tableau, Looker) and data frameworks (e.g., Hadoop, Google Cloud Platform, SAP)
- Experience in Kubernetes, GCP, AWS, and cloud-native technologies generally
- Design and development of Google Cloud Platform solutions
- Experience building data ingestion processes using Google Cloud ETL tools (Data Fusion, Dataflow, Dataproc, Airflow)
- Expertise in data management: running queries against databases including BigQuery, Hadoop, Oracle, and SQL Server
- Experience using the Google BigQuery client and Vertex AI
- Proven experience building and automating end-to-end machine learning pipelines
- Working knowledge of containers (e.g., Docker or Podman)
- Comfortable with Git: making commits, creating and merging branches, and navigating GitHub and GitLab
- Strong command-line skills in Linux environments
- A high tolerance for OpenShift, Confluence, Jira, and other enterprise tools
- Analytical mind and business acumen
- Proficiency in data wrangling
- Strong math skills (e.g., statistics, algebra)
- Problem-solving aptitude
- Excellent communication and presentation skills
- Bachelor's degree in Computer Science, Engineering, or Data Analytics