Bengaluru, Pune, Hyderabad, Delhi
Job Description :
Minimum qualifications :
- Build a big data system based on product requirements.
- Should have strong knowledge of the Hadoop ecosystem, including Spark, HDFS, MapReduce, Storm, and Hive.
- This will involve data extraction, data modelling, transformation, and integration with other data streaming solutions.
- Deep knowledge of SQL; additional exposure to NoSQL databases such as MongoDB would be an added advantage.
- Experience migrating data from varied legacy systems to a big data architecture.
- Planning system & storage requirements for the big data platform.
- Should have worked on Apache Spark SQL and Spark Streaming.
- Must have experience programming in Python and integrating it with Apache Spark.
- Candidates need to be proficient in Apache Spark (PySpark), with at least one year of work on live projects.
- A Bachelor's degree (B.E/B.Tech/BCA) is a must.
- Strong working knowledge of SQL and NoSQL data stores such as MongoDB.
- Strong programming knowledge in Python is preferred.
Contact Details :
Email Id : email@example.com