
Senior Data Engineer with PySpark

Bright Coders' Factory — our name speaks for itself: our software powers global companies. We deliver state-of-the-art technology to our customers, and our continued growth has been recognized with the Forbes Diamond and Great Place to Work awards.

We write code to make people's lives easier. At BCF, you will find your place and see that your work matters. Our portfolio spans projects from more than 15 industries, so depending on your preferences and career stage, we're sure to find the right one for you.

About the position

We are thrilled to announce a fantastic job opportunity for a Senior Data Engineer with a strong background in Python, PySpark, and AWS. Our company is a dynamic and rapidly growing organization in the field of data-driven solutions, and we are seeking an experienced professional like you to join our exceptional team.

Requirements

  • Strong proficiency in Python programming and hands-on experience with PySpark for large-scale data processing and analytics
  • In-depth knowledge of AWS cloud services, including but not limited to S3, Glue, Redshift, Athena, EMR, and Lambda
  • Proficiency in SQL and experience with relational and NoSQL databases
  • Solid understanding of data modeling, schema design, and data warehousing concepts
  • Experience with version control systems (e.g., Git) and CI/CD and workflow orchestration tools (e.g., Jenkins, Airflow)
  • Familiarity with data visualization tools such as Tableau, Power BI, or similar platforms
  • Excellent problem-solving and analytical skills with the ability to troubleshoot and resolve complex data-related issues
  • Strong communication skills and the ability to work effectively in a collaborative team environment

Your responsibilities

  • Design, develop, and maintain scalable data pipelines and ETL workflows using Python, PySpark, and AWS services
  • Collaborate with cross-functional teams to understand data requirements and translate them into efficient data engineering solutions
  • Perform data cleansing, transformation, and aggregation to ensure data integrity and accuracy
  • Optimize and fine-tune data pipelines to improve performance and scalability
  • Implement and maintain data governance practices, including data quality monitoring, metadata management, and security measures
  • Stay up to date with emerging technologies and industry trends in data engineering and cloud computing
Experience

5 - 7 years
8+ years

Location

Hybrid (Wroclaw, Opole, Warsaw, Poznan)