Job Information

Takeda Pharmaceuticals Platform Engineer - Data Solutions in Tokyo, Japan

By clicking the “Apply” button, I understand that my employment application process with Takeda will commence and that the information I provide in my application will be processed in line with Takeda’s Privacy Notice and Terms of Use. I further attest that all information I submit in my employment application is true to the best of my knowledge.

Job Description

Please note that this position requires a business-level command of Japanese, not only in speaking but also in business reading and writing.

About Takeda

“Better Health for People, Brighter Future for the World” is the purpose of our company. We aim to create a diverse and inclusive organization where people can thrive, grow, and realize their own potential while advancing that purpose. We continue to innovate and drive changes that will transform the lives of patients. We’re looking for like-minded professionals to join us.

Takeda is a global, values-based, R&D-driven biopharmaceutical leader. We are guided by the values of Takeda-ism, which have been passed down since the company’s founding. Takeda-ism incorporates Integrity, Fairness, Honesty, and Perseverance, with Integrity at the core. These values are brought to life through actions based on Patient-Trust-Reputation-Business, in that order.

The Opportunity

As a Data Platforms Engineering leader, you will have direct business impact and will work in close alignment with the vision of the Head of Data Platforms and Architecture. The role is a key enabler of Takeda’s strategy to become a data-driven enterprise. By connecting with Business Units and Business Functions across Takeda’s global business and with their data teams, you will strategically architect data, processes, and technology to achieve faster time to market for life-saving products, ultimately helping Takeda make better decisions that improve the quality and efficiency of care for patients. You will develop data-driven solutions using current and next-generation technologies to meet evolving business needs. You will quickly identify opportunities, recommend possible technical solutions, and develop application systems that comply with standard system development methodologies and concepts for design, programming, backup, and recovery, delivering solutions with superior performance, reliability, and integrity.

As part of our transformational journey on Data & AI in Operations, we are taking steps to advance to a Data Mesh architecture. The current Datalake exists to give all Operations units rapid access to critical data and analytic tools, accelerating their work on life-saving medicines. The vision of Enterprise Data Services (EDS) is also to accelerate Operations’ data strategy of making our data Findable, Accessible, Interoperable, and Reusable. This is being achieved through the creation of a distributed data architecture and the management of our data and data products, which will sit at the centerpiece of this strategy and of the future evolution of Data Science.

Responsibilities

  • Create best practices and thought-leadership content to be used by the federated delivery teams building data solutions and data products on Enterprise Data platforms that cater to batch, streaming, and real-time data.

  • Influence stakeholders at all levels through complex engagement models across the wider cloud ecosystem, including but not limited to AWS foundations for infrastructure and data technologies, Databricks, Informatica, Kafka, Managed File Transfer, and third-party applications, ensuring they are excited by the Enterprise Data Services vision and solution strategy.

  • Be a ‘champion’ for both customers and colleagues by operating as an expert engineer and trusted advisor for significant data analytics architecture and design, and for the adoption and scaling of the Datalake platform.

  • Provide a roadmap for modernizing legacy capabilities inherent to the current platform. Support all data platform initiatives – Data Lake Strategy, Data Engineering and Platform development, Data Governance, Security Models, and Master Data Management.

  • Establish a collaborative engineering culture based on trust, innovation, and a mindset of continuous improvement. Apply industry best practices and agile methodologies to deliver solutions and gain efficiency through automation in continuous integration and continuous delivery (CI/CD). Manage efforts to solve engineering challenges and coordinate with project consultants and delivery/engagement managers.

  • Act as a leading technical contributor who can consistently take a poorly defined business or technical problem, refine it into a well-defined data problem/specification, and execute on it at a high level. Maintain a strong focus on metrics, both for the impact of your work and for its engineering and operations.

  • Understand the Data Platforms investments, create data tools for the consumption of services, and uncover opportunities for cost optimization, helping the team build and optimize our platforms into an innovative unit within the company.

Skills and Qualifications

  • Bachelor’s degree or higher in Computer Science/Information Technology, or relevant work experience.

Must have:

  • Data Engineering Experience:

  • Business-level English and Japanese

  • 8+ years of relevant work experience in data platforms, solutions, and delivery methodologies (Java, Python, Spark, Hadoop, Kafka, SQL, NoSQL, Postgres, and/or other modern programming languages, plus tools such as JIRA, Git, Jenkins, Bitbucket, and Confluence).

  • Familiarity with the core technology stack, including Databricks Lakehouse (Delta Lake) or equivalents such as BigQuery/Snowflake, plus SQL/Python/Spark, AWS, and Prefect/Airflow.

  • Deep Specialty Expertise in at least one of the following areas:

  • Experience scaling big data workloads that are performant and cost-effective.

  • Experience with development tools for CI/CD (e.g., Jenkins), unit and integration testing, automation and orchestration, REST APIs, BI tools, and SQL interfaces.

  • Experience designing data solutions on cloud infrastructure and services, such as AWS, Azure, or GCP using best practices in cloud security and networking.

  • Software Engineering Experience:

  • 5+ years’ experience in a customer-facing technical role with expertise in at least one of the following:

  • Software Engineer/Data Engineer: data ingestion; streaming technologies such as Spark Streaming and Kafka; performance tuning, troubleshooting, and debugging of Spark or other big data solutions.

Nice to have:

  • Experience with ETL/orchestration tools (e.g., Informatica, Airflow)

  • Industry experience working with public cloud environments (AWS, GCP, or Azure), with a deep understanding of failover, high availability, and high scalability.

  • Data ingestion using one or more modern ETL compute and orchestration frameworks (e.g., Apache Airflow, Luigi, Spark, Apache NiFi, Flink, and Apache Beam).

  • 3+ years of experience with SQL or NoSQL databases: PostgreSQL, SQL Server, Oracle, MySQL, Redis, MongoDB, Elasticsearch, Hive, HBase, Teradata, Cassandra, Amazon Redshift, Snowflake.

  • Advanced working knowledge of SQL and experience with relational databases, including SQL query authoring and working familiarity with a variety of databases.

  • Experience building and optimizing 'big data' data pipelines, architectures, and data sets.

  • Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.

  • Working knowledge of message queuing, pub/sub stream processing, and highly scalable 'big data' data stores.

  • Outstanding communication and relationship skills, with the ability to engage a broad range of partners and lead by influence.

Locations

Tokyo, Japan

Worker Type

Employee

Worker Sub-Type

Regular

Time Type

Full time
