- Are passionate about iConnectiva's core values – teamwork, quality, and customer satisfaction
- Are qualified/capable in their respective field and bring an impeccable work ethic
- Are resilient and agile enough to pick up the new skills that the ever-changing technology industry requires
- Possess an uncompromising ethical and moral compass
- Value collaborating with quality professionals on innovative, high-quality work over working for large legacy organizations on dull, repetitive client projects that might seem like 'safer bets'
- Are structured but not bureaucratic in their approach to client delivery
- Possess personnel management skills, particularly for iConnectiva Technical Project Manager and Architect roles
Big Data Architect and Developer
- Self-starter, with a keen interest in technology and highly motivated towards success
- Excellent oral and written communication, presentation, and analytical skills
- Ability to lead initiatives and people toward common goals.
- Minimum educational qualification: Bachelor's degree in Computer Science or Engineering; higher degrees preferred
- Experience with a range of big data architectures and frameworks, including Cloudera, Hadoop, Pig, Hive, Avro, Impala, and Flume.
- Knowledge and understanding of real-time stream processing
- NoSQL data stores, data modeling, and data management using Cassandra.
- Hands-on knowledge of MapReduce, Pig, Perl, HQL, SQL.
- Hands-on knowledge of Cassandra, Astyanax, Hector, CQL 3.
- Knowledge of predictive analytics tools and languages – R, Weka, Mahout, Jubatus.
- Proficient in Java, Java EE, JDK 1.8 (multi-core programming with Lambdas & Mixins) and RDBMS/SQL technologies.
- Provide guidance and advice on Big Data technologies to the design and development team, customers, and Pre-Sales.
- Translate business requirements into high-level designs (HLD) and technical designs, provide estimates, and guide the team through implementation.
Big Data Administrator
- Can communicate with software developers, system engineers, network engineers, and operations support engineers to run a reliable data processing pipeline on a Hadoop cluster.
- Should be able to troubleshoot various types of network, system, and MapReduce application-related issues.
- Be a strong advocate within the team for promoting and establishing good practices with Hadoop and other big data processing frameworks.
- Must be a team player and excited to work with big data and new technologies.
- Hadoop AND Cassandra cluster administration – must have hands-on experience
- Experience with CDH 4 or later on Linux preferred.
- Experience with Apache Hadoop and its complete eco-system.
- Familiarity with Hadoop MapReduce for assisting developers with debugging and job tuning as needed.
- Scripting (Bash, Python) AND programming (Java).
- Experience with Pig, Hive, Impala, HBase and other related technologies within Hadoop and big data ecosystem.
- Experience setting up and administering real-time data processing systems (Storm)
- Experience setting up and administering database servers (SQL/NoSQL)
- Experience setting up automated build systems (Maven, Jenkins, Hudson)
- Big Data Certification preferred
- Must be proficient in Core Java and Java Enterprise Edition (J2EE or JEE), SQL
- Knowledge of Java 8 and JEE 7 is an added advantage
- Familiarity with Scala, Perl/Python/Ruby, and web service technologies (SOAP, REST, CORBA, etc.)
- Exposure to Hadoop, Cassandra, Spark, Storm preferred
- Exposure to R, Weka, Mahout preferred