Big Data requirement - Java, REST API and Hadoop
Hire IT People, LLC
Job Seekers, Please send resumes to resumes@hireitpeople.com
Job Description / Job Responsibilities:
* Build distributed, scalable, and reliable data pipelines that ingest and process data at scale and in real time.
* Collaborate with other teams to design, develop, and deploy data tools that support both operations and product use cases.
* Perform offline analysis of large data sets using components from the Hadoop ecosystem.
* Evaluate and advise on technical aspects of open work requests in the product backlog with the project lead.
* Own product features from the development phase through to production deployment.
* Evaluate big data technologies and prototype solutions to improve our data processing architecture.

Candidate Profile:
* BS in Computer Science or a related area
* Around 6 years of software development experience
* Minimum 2 years of experience on a Big Data platform
* Must have active, current experience with Scala, Java, Python, Oracle, HBase, and Hive
* Flair for data, schemas, and data models, and for bringing efficiency to the big data life cycle
* Understanding of automated QA needs related to Big Data
* Understanding of various visualization platforms
* Experience with cloud providers such as AWS preferable
* Proficiency with agile or lean development practices
* Strong object-oriented design and analysis skills
* Excellent written and verbal communication skills
* APIGEE
Qualifications
Top skill sets / technologies in the ideal candidate:
* Programming languages -- Java (must), Python, Scala
* Database/Search -- Oracle, complex SQL queries, stored procedures, performance tuning concepts, SOLR, AWS RDS, AWS Aurora
* Batch processing -- Hadoop MapReduce, Cascading/Scalding, Apache Spark, AWS EMR
* Stream processing -- Spark Streaming, Kafka, Apache Storm, Flink
* NoSQL -- HBase, Cassandra, MongoDB