Support delivery of one or more data science use cases, leading on data discovery and model building activities
Conceptualize and quickly build POC on new product ideas - should be willing to work as an individual contributor
Open to learning and implementing newer tools/products
Experiment with and identify the best methods, techniques, and algorithms for analytical problems
Operationalize – Work closely with the engineering, infrastructure, service management and business teams to operationalize use cases
Essential Skills
4-7 years of hands-on experience with data analysis tools: SQL, R, Python
3+ years’ experience in business analytics, forecasting, or business planning, with an emphasis on analytical modeling, quantitative reasoning and metrics reporting
Experience working with large data sets to extract business insights or build predictive models
Proficiency in one or more statistical tools/languages – Python, Scala, R, SPSS or SAS – and related packages such as pandas, SciPy/scikit-learn, NumPy, etc.
Good data intuition and analysis skills; SQL and PL/SQL knowledge is a must
Manage and transform a variety of datasets: cleanse, join, and aggregate them
Hands-on experience running various methods such as regression, random forest, k-NN, k-means, boosted trees, SVM, neural networks, text mining, NLP, statistical modelling, data mining, exploratory data analysis, and statistics (hypothesis testing, descriptive statistics)
Industry experience in building and operationalizing various machine and deep learning models in finance or other domains would be an advantage
Deep domain knowledge (BFSI, Manufacturing, Auto, Airlines, Supply Chain, Retail & CPG)
Demonstrated ability to work under time constraints while delivering incremental value.
Education
Minimum a Master's in Statistics, or a PhD in domains linked to applied statistics, applied physics, Artificial Intelligence, Computer Vision, etc.
Desirable Skills (not must)
Ability to work with a wide range of stakeholders and convert abstract ideas into actionable requirements
Experience using cloud and ML services: Redshift, S3, Spark, Azure ML, TensorFlow
Hands-on experience with and understanding of the Big Data ecosystem, with technologies like Spark/PySpark, MapReduce, Kafka, Hive, etc., along with Unix scripting. Knowledge of NoSQL technologies would be an added advantage.
Out of the box thinking to solve real world problems
Should be able to juggle multiple priorities efficiently
Work collaboratively with teams across different geographic locations and contribute to Machine Learning capability.
Good presentation skills – Should be able to present findings and insights in a manner understandable to business audiences.
Email ID: email@example.com