Responsible for one or more specific modules within a large software system
What will your job look like?
You will design, develop, modify, debug and/or maintain software code according to functional, non-functional and technical design specifications.
You will follow software engineering standards, the applicable software development methodology, and release processes to ensure code is maintainable, scalable, and supportable, and you will demo software products to stakeholders
You will investigate issues by reviewing and debugging code, provide fixes and workarounds, and review changes for operability to maintain existing software solutions.
You will work within a team, collaborate, and add value by participating in peer code reviews, providing comments and suggestions, and working with cross-functional teams to achieve goals.
You will assume technical accountability for your specific work products within an application and provide technical support during solution design for new requirements.
You will be encouraged to actively pursue innovation, continuous improvement, and efficiency in all assigned tasks.
All you need is...
Bachelor's degree in Science/IT/Computing or equivalent
Clear understanding of Hadoop architecture
3+ years of hands-on experience with the Hadoop ecosystem (HBase, HDFS, Pig, Hive, MapReduce, Kafka, Storm, Spark)
3+ years of hands-on experience with Unix and advanced Unix shell scripting
Strong SQL knowledge
Experience with source control tools like Perforce and build tools like Maven
Hands-on experience with file transfer mechanisms (NDM, SFTP, etc.)
Knowledge of job schedulers
Working experience with data-lake environments
Experience handling XML, JSON, structured, fixed-width, and unstructured files
Willingness to learn data warehousing and ETL technologies like Informatica, DataStage, Teradata, etc.
Why you will love this job?
You will be challenged to design and develop new software applications
You will have the opportunity to work in a growing organization, with expanding opportunities for personal development
You will have the opportunity to work with the industry's most advanced technologies