Job Details

Job Code: 060-210
Company: Foreign-affiliated non-life insurance company
Job Title: Data Engineer
Job Description: Data engineers on the data lake team carry out a wide variety of business intelligence tasks in a largely AWS-based cloud computing environment.
Typical tasks include:

◆Building high-quality, sustainable data pipelines and ETL processes to extract data from a variety of APIs and ingest it into cloud-based services (a minimal sketch of one such step follows this list).
◆Efficiently developing complex SQL queries to aggregate and transform data for the analytics team and general users.
◆Maintaining accurate and error-free databases and data lake structures.
◆Conducting quality assessment and integrity checks on both new and existing queries and processes.
◆Monitoring existing solutions and working proactively to resolve errors quickly and identify potential problems before they occur.
◆Using data visualization tools such as Power BI, SSRS, Tableau, and Looker to develop high-quality dashboards and reports.
◆Consulting with a variety of stakeholders to gather new project requirements and transform these into well-defined tasks and targets.
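
For flavor, here is a minimal, hypothetical sketch of the first task above: pulling records from a JSON API and landing them in S3 as Parquet. The endpoint URL, bucket path, and schema are illustrative assumptions, not this employer's actual stack.

import requests
import pandas as pd  # writing Parquet needs pyarrow; s3:// paths also need s3fs

API_URL = "https://api.example.com/v1/orders"  # hypothetical endpoint
S3_PATH = "s3://example-datalake/raw/orders/orders.parquet"  # hypothetical target

def extract_and_load() -> None:
    # Pull one page of records; a production pipeline would paginate and retry.
    resp = requests.get(API_URL, timeout=30)
    resp.raise_for_status()

    # Flatten the JSON records into a table and write columnar Parquet to S3.
    df = pd.json_normalize(resp.json())
    df.to_parquet(S3_PATH, index=False)

if __name__ == "__main__":
    extract_and_load()

In practice, each such step would be scheduled and monitored by an orchestrator such as Airflow, one of the ETL systems named in the requirements below.
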
Requirements
(Required)
Education: Bachelor's degree (four-year university) or above
The right candidate will be an innovative and adaptable data expert with a strong desire to succeed.
You'll have demonstrated experience working in a high-performing business intelligence or data warehouse environment, excellent communication skills, and a passion for problem-solving and learning new technologies.
You'll be exposed to a large variety of tasks, tools, and programming languages, so the desire and ability to constantly learn new skills are essential.
We're looking for people who are passionate about data, with an emphasis on quality programming and building the best possible solution.

◆3-5 years of practical experience in data/analytics, with at least 1 year in an engineering/BI role.
◆At least 1 year of practical experience working on data pipelines or analytics projects with languages such as Python, Scala, or Node.js.
◆At least 2 years of practical experience working on data pipelines or analytics projects with SQL/NoSQL databases (ideally in a Hadoop-based environment).
◆Strong knowledge of and practical experience with at least four of the following AWS services: S3, EMR, ECS/EC2, Lambda, Glue, Athena, Kinesis/Spark Streaming, Step Functions, CloudWatch, DynamoDB.
◆Strong experience with data processing and ETL systems such as Oozie, Airflow, Azkaban, Luigi, and SSIS.
◆Experience developing solutions inside a Hadoop stack using tools such as Hive, Spark, Storm, Kafka, Ambari, and Hue.
◆Ability to work with large volumes of both raw and processed data in a variety of formats, including JSON, ORC, Parquet, and CSV.
◆Ability to work in a Linux/Unix environment (predominantly via EMR, the AWS CLI, and the Hadoop file system).
◆Experience with DevOps solutions such as Jenkins, GitHub, Ansible, Docker, and Kubernetes.
◆At minimum, an undergraduate qualification in a technical discipline such as Computer Science, Data Science, Analytics, Machine Learning, or Statistics. Postgraduate qualifications preferred.

Requirements
(Preferred, Other)

【Additional Points】
◆Demonstrated experience and expertise in setting up and maintaining cloud data solutions and AWS infrastructure will be highly regarded.
◆Strong knowledge of cloud-based data security, encryption, and protection methods will also be highly regarded.

【Language Skills】
◆Business Level English
◆Business Level Japanese
Location: Tokyo
Annual Salary: Up to approximately ¥9 million (negotiable)