Teradata Big Data Engineer in Prague, Czech Republic

Data Engineers define the architecture and design of data ingestion and ETL/ELT processing to meet functional and non-functional requirements and objectives. Data Engineers are generally expected to be “T-shaped” professionals who have a broad understanding of the data acquisition and integration space and are able to weigh the pros and cons of different architectures and approaches.

A Data Engineer works on implementing complex big data projects with a focus on ingesting, parsing, integrating, and managing large sets of structured and unstructured data. Data Engineers embrace the challenge of dealing daily with petabytes of structured and unstructured data and the associated metadata. They are involved in the design of big data solutions, leveraging their experience with Teradata and Hadoop-based technologies such as Hive, Cassandra, and MapReduce. Strong communication skills are required to work in a team environment and with the business sponsors of programs.

Key Areas of Responsibility:

  • Clarify the client’s business problem for data analysis

  • Translate the business requirements into technical requirements

  • Design and develop code, scripts, and data pipelines that leverage structured and unstructured data

  • Data modelling, database design and the design of non-relational data structures

  • Data ingestion pipelines and ETL processing, including low-latency data acquisition and stream processing

  • Oversee the design, evaluation, and selection of major databases and metadata structures

  • Develop project deliverable documentation

  • Size and estimate new projects, and support new business development

Job Qualifications:

  • Proven expertise in production software development

  • Experience with large data sets consisting of structured and unstructured data and associated metadata and distributed computing with Teradata, Hadoop, Hive, HBase, Pig, etc.

  • Proficiency in SQL and NoSQL, and in relational database design and methods

  • Hadoop ecosystem technologies (e.g., Hive)

  • Message bus and broker technologies, real-time data pipeline and streaming technologies (e.g., Kafka, Kylo, Listener)

  • High-performance data processing platforms (e.g., Spark)

  • Scripting languages and related libraries (e.g., Python/pandas)

  • Linux / Unix experience

  • Experience with Avro, Thrift

  • JMS and message broker technologies (e.g., ActiveMQ, RabbitMQ, JBoss)

  • Experience with structured and unstructured data, and metadata

  • Ability to work with the appropriate project management methodology (Agile or Waterfall) based on customer and project requirements

  • Excellent verbal and written communication skills

  • Knowledge of Architecture Principles

We offer:

  • 25 days of holiday a year

  • Private medical care

  • Meal vouchers of 110 CZK/day (Teradata contributes 83 CZK/day)

  • Company contribution to the pension fund (up to 3% of monthly gross salary)

  • Life insurance

  • Company assistance in case of sickness (25% of your gross base salary) in addition to local statutory benefits

  • Employee referral program (USD 4,000)

  • Sports and activities membership program

  • Employee stock purchase program

  • Contribution of USD 300 toward a mobile phone of your choice (every 2 years)