
Data Engineer (NYC) in New York, NY at HUNTER Technical Resources

Job Snapshot

  • Employee Type: Full-Time
  • Location: New York, NY
  • Experience: At least 4 years
  • Date Posted: 5/8/2019
  • Job ID: 4374420

Job Description


Data Engineer, Data Cloud, Consumer Data

The Data Engineer for Data Cloud will be responsible for partnering with Revenue Analytics to onboard and optimize datasets that will be used in the Looker digital analytics platform. To be successful in the role you’ll need to be intellectually curious, detail-oriented, open to new ideas, and possess a strong aptitude for quantitative methods. The role requires a strong statistical background, familiarity with modern data warehouses, experience with SQL-like tools, and knowledge of scripting languages (e.g., Python). Experience with predictive modeling or machine learning is a plus.

Responsibilities:
  • Develop a thorough understanding of the business, ad tech, and data.
  • Create solutions to transform data from various sources and load it into platforms such as Hadoop to create a data lake.
  • Create and maintain transformations that summarize/aggregate data and load the results so users can consume them through various BI/analytics tools.
  • Develop and maintain standards for administration and operation, including scheduling, running, monitoring, logging, error handling, recovery from failures, and validation of outputs.
  • Contribute to the project planning process by estimating tasks and deliverables.
  • Work closely with Revenue Analytics team members to understand user requirements.
  • Be at the cutting edge of using consumer data in the media industry to improve the audience experience.
  • Assist with complex analytical tasks.

Requirements:
  • 4+ years of solid experience as a database developer in an OLAP environment.
  • BS in computer science, math, physics, or equivalent education/training/experience.
  • Experience with ETL and data pipelines.
  • Experience performing QC and cleaning dirty data.
  • Experience with “big data” platforms such as Hadoop or NoSQL databases, or cloud-based tools such as Amazon Redshift.
  • Experience and comfort with Unix/Linux operating systems.
  • Expert knowledge of SQL; knowledge of HiveQL, Spark SQL, or PostgreSQL a plus.
  • Expert in Python and shell scripting.
  • Ability to work independently and take on projects.
  • Experience with media, web analytics, and consumer data systems a plus.
  • Prior technology or media company experience preferred.