
Big Data Engineer

Years of Experience: 5+
  • Our Insight and Data team helps our clients make better business decisions by transforming an ocean of data into streams of insight.
  • The successful candidate will serve as an expert Big Data Engineer, integrating with multiple data sources and systems, developing data engineering workflows, and working with rapidly evolving technology-based solutions and teams. You will work closely with data architects, data scientists, data analysts and business stakeholders to achieve strategic business goals.
  • To succeed in this new position, you will:
  • Work closely with data architects, data analysts and data science teams to design technical solutions that solve specific business problems.
  • Combine and apply data modelling across different data sources and data types.
  • Integrate and ingest data from multiple source systems into the big data environment.
  • Create end-to-end data transformations and workflows that serve business needs, along with proper logging and recovery procedures for these workflows.
  • Internal Stakeholders
  • Commercial Organization Account teams / Team Managers, Technical Sales Support, Marketing, Finance and Accounting, Professional Services, After Sales Delivery, Customer Services Organization.
  • External Stakeholders
  • Customer senior-level contacts (C-level, Directors, VPs, Senior VPs, etc.), vendor commercial relationship management, third-party suppliers, industry associations and consulting firms within the sector.
  • We are looking for a Big Data Engineer with a successful track record of building scalable data solutions in a fast-paced environment. You will need 4+ years of experience in developing big data solutions, including experience with the following:
  • Solid experience ingesting data from multiple data sources through APIs into HDFS.
  • Solid experience creating data pipelines using Apache NiFi.
  • Solid experience handling different data types (structured, unstructured, semi-structured) and different file formats (text, JSON, Avro, Parquet, ORC, etc.).
  • Solid experience creating transformations using Spark, Hive on Tez and other components of the Hadoop ecosystem.
  • Proven experience querying Hive, Impala and NoSQL databases.
  • Experience with data modelling (star and snowflake schemas); able to apply the Kimball and Inmon methodologies.
  • Experience tuning big data transformations using Apache Spark and HiveQL.
  • Solid experience in Python, Java, Scala and SQL (including Spark SQL).
  • Solid experience creating multithreaded Python and Java applications to ingest data.
  • Solid experience with UNIX Bash commands and scripting.
  • Experience with databases such as PostgreSQL, Greenplum, MySQL and Oracle.
  • Experience in creating batch, real-time and near-real-time data pipelines.
  • Experience with streaming data technologies (Kafka, Spark Streaming, Apache Flink, etc.).
  • Align the organization's big data solutions with client initiatives as requested.
  • Work with domain experts to put together a delivery plan and stay on track.
  • Utilize Big Data technologies to design, develop, and evolve scalable and fault-tolerant distributed components.