J.P. Morgan Chase is a global institution that prides itself on the power of scale – offering first-class financial products and services to its clients (and its clients’ clients) across the spectrum of consumer, commercial and institutional needs. For the Corporate & Investment Bank (CIB) specifically, clients include hedge funds, governments, institutional investors and corporations around the world – each made up of individuals who interact with our employees in large volumes by phone, email and digital chat every day.
- Design and architect the next-generation data pipelines, data lake and data warehouse for the Client Intelligence team. Build and communicate a technical vision to the team and stakeholders
- Build large-scale batch, ETL and real-time data pipelines using cloud and on-premises data technologies such as Redshift, Athena, dbt, Python, Apache Airflow and Apache Kafka
- Design best practices for data processing, data modeling and warehouse development throughout our team and group
- Develop the vision and map strategy to provide proactive solutions and enable stakeholders to extract insights and value from data
- Understand end-to-end data interactions and dependencies across complex data pipelines and transformations, and how they impact business decisions
- Coach and mentor team members as applicable
- Expertise in data warehouse / data lake architectures such as Redshift, Snowflake, BigQuery, Impala, Presto and Athena
- Experience with workflow orchestration tools such as Apache Airflow
- Knowledge of data transformation and collection tools such as dbt or Fivetran
- Hands-on experience with stream processing platforms such as Kafka, Kinesis, Flink, Beam, Dataflow
- Advanced knowledge of columnar data and serialization formats such as JSON, XML, Arrow, Parquet, Protobuf, Thrift and Avro
- Strong experience with container technologies such as Docker and Kubernetes
- Experience writing infrastructure as code with Terraform
- Experience with CI/CD systems e.g. Jenkins and automation / DevOps best practices
- Advanced knowledge of the AWS ecosystem, including S3, Glue, Redshift, Athena, Kinesis, MSK, IAM, Batch, ECS and EKS
- BS/BA degree or equivalent experience in computer science or engineering
- Significant hands-on experience in building a data warehouse / data lake and data pipelines
- Expert-level skills in SQL, data integration, data modeling and data architecture
- Expert-level skills in Python, its standard library and its package ecosystem – pytest, tox, pandas, Requests, Pylint, Boto3, Jinja…
- Leadership and ability to influence the team’s direction
- Mentoring: help your junior teammates achieve their goals and grow
- Curiosity, creativity, resourcefulness and a collaborative spirit
- Clear and effective verbal and written communication skills
- Demonstrated ability to work on multi-disciplinary teams with diverse backgrounds
- Interest in problems related to the financial services domain (specific past experience in the domain is not required)

JPMorgan Chase & Co., one of the oldest financial institutions, offers innovative financial solutions to millions of consumers, small businesses and many of the world’s most prominent corporate, institutional and government clients under the J.P. Morgan and Chase brands. Our history spans over 200 years and today we are a leader in investment banking, consumer and small business banking, commercial banking, financial transaction processing and asset management.
We recognize that our people are our strength and the diverse talents they bring to our global workforce are directly linked to our success. We are an equal opportunity employer and place a high value on diversity and inclusion at our company. We do not discriminate on the basis of any protected attribute, including race, religion, color, national origin, gender, sexual orientation, gender identity, gender expression, age, marital or veteran status, pregnancy or disability, or any other basis protected under applicable law. In accordance with applicable law, we make reasonable accommodations for applicants’ and employees’ religious practices and beliefs, as well as any mental health or physical disability needs.
Equal Opportunity Employer/Disability/Veterans
Edureka’s Data Science using Python programming certification course enables you to learn data science concepts from scratch. This Python course will also help you master important Python programming concepts such as data operations, file operations and object-oriented programming, as well as Python libraries such as pandas, NumPy and Matplotlib, which are essential for Data Science. Edureka’s Python Certification Training course is also a gateway towards your Data Science career.
This course extends your existing Python skills to provide a stronger foundation in data visualization in Python. You’ll get broader coverage of the Matplotlib library and an overview of seaborn, a package for statistical graphics. Topics covered include customizing graphics, plotting two-dimensional arrays (such as pseudocolor plots, contour plots and images), statistical graphics (such as visualizing distributions and regressions), and working with time series and image data.
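As a rough sketch of the kind of two-dimensional array plotting described above (the data here is invented for illustration, not taken from the course):

```python
import numpy as np
import matplotlib

matplotlib.use("Agg")  # non-interactive backend so this runs headless
import matplotlib.pyplot as plt

# Build a 2-D array: a simple Gaussian bump on a grid
x = np.linspace(-2, 2, 101)
y = np.linspace(-2, 2, 101)
X, Y = np.meshgrid(x, y)
Z = np.exp(-(X**2 + Y**2))

# Pseudocolor plot with contour lines overlaid
fig, ax = plt.subplots()
pc = ax.pcolormesh(X, Y, Z, shading="auto", cmap="viridis")
ax.contour(X, Y, Z, colors="white", linewidths=0.5)
fig.colorbar(pc, ax=ax, label="intensity")
ax.set_title("Pseudocolor plot with contour overlay")
fig.savefig("gaussian.png")
```

The same grid could be passed to `ax.imshow` for an image-style view; the choice mainly affects axis handling and interpolation.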
pandas is one of the world’s most popular Python libraries, used for everything from data manipulation to data analysis. In this course, you’ll learn how to manipulate DataFrames as you extract, filter and transform real-world datasets for analysis. Using pandas you’ll explore all the core data science concepts. Using real-world data, including Walmart sales figures and global temperature time series, you’ll learn how to import and clean data, calculate statistics, and create visualizations – using pandas to add to the power of Python!
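A minimal sketch of the extract–filter–transform pattern mentioned above, using a tiny made-up sales table (the actual Walmart dataset is not reproduced here):

```python
import pandas as pd

# Made-up sales data standing in for the real-world dataset
sales = pd.DataFrame({
    "store": ["A", "A", "B", "B"],
    "week": [1, 2, 1, 2],
    "weekly_sales": [24000.0, 21000.0, 18000.0, 22500.0],
})

# Filter rows with a boolean mask
busy = sales[sales["weekly_sales"] > 20000]

# Aggregate a statistic per group (split-apply-combine)
totals = sales.groupby("store")["weekly_sales"].sum()

# Transform: derive a new column from an existing one
sales["sales_thousands"] = sales["weekly_sales"] / 1000
```

Here `busy` keeps three of the four rows and `totals` holds one sum per store; the same three operations scale unchanged to millions of rows.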
A vital component of data science involves acquiring raw data and getting it into a form ready for analysis. It is commonly said that data scientists spend 80% of their time cleaning and manipulating data, and only 20% of their time actually analyzing it. This course will equip you with all the skills you need to clean your data in Python, from learning how to diagnose problems in your data, to dealing with missing values and outliers. At the end of the course, you’ll apply all of the techniques you’ve learned to a case study to clean a real-world Gapminder dataset.
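The diagnose-then-fix workflow described above might look like the following sketch; the column values are invented, and the column names are only loosely modeled on Gapminder-style data:

```python
import numpy as np
import pandas as pd

# Made-up messy data: one missing value and one obvious data-entry error
df = pd.DataFrame({
    "country": ["Chile", "Ghana", "Nepal", "Chile"],
    "life_exp": [79.1, np.nan, 70.8, 790.0],
})

# Diagnose: count missing values per column
missing = df.isna().sum()

# Deal with missing values: fill with the column median
df["life_exp"] = df["life_exp"].fillna(df["life_exp"].median())

# Deal with outliers: clip values outside a plausible range
df["life_exp"] = df["life_exp"].clip(upper=120)
```

Median imputation and clipping are just two of many options; dropping rows (`dropna`) or flagging outliers for manual review may be more appropriate depending on the analysis.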
MicroPython Projects: A do-it-yourself guide for embedded developers to build a range of applications using Python
Explore MicroPython through a series of hands-on projects and learn to design and build your own embedded systems using the MicroPython Pyboard, ESP32, the STM32 IoT Discovery kit, and the OpenMV camera module.
Delve into the MicroPython kernel and learn to make modifications that will enhance your embedded applications
Design and implement drivers to interact with a variety of sensors and devices
Build low-cost projects such as DIY automation and object detection with machine learning
Python: 2 Books in 1: The Crash Course for Beginners to Learn Python Programming, Data Science and Machine Learning + Practical Exercises Included. (Artificial Intelligence, NumPy, Pandas)
Python programming is one of the most popular programming languages today, which means you made the right choice in picking this book to learn the basics.
Python is a simple language to pick up; however, without something to guide you through the fundamental concepts of programming, you can easily learn everything the wrong way and someday anger all of your programmer friends.
Learning Python Programming: This Book Includes 3 Manuscripts. Learn Python Programming, Python Machine Learning & Python Machine Learning For Beginners
Book 1: Learn Python Programming
The purpose of this book is to guide you step by step through the most important concepts behind programming with Python.
Book 2: Python Machine Learning
The purpose of this book is to guide you step by step through the entire process of working with various machine learning algorithms.
Book 3: Python Machine Learning for Beginners
The purpose of this book is to guide you step by step through the entire process of working with various machine learning algorithms by using the power of Python combined with a number of tools and libraries.
Does your business have large volumes of data that nobody knows how to use? Do you collect data from various sources to perform analysis? Have you always wondered what you should do with incorrect data sets? If you answered yes, then you have come to the right place.
Businesses often collect information from different devices and sources, so it is important to understand, interpret, and analyze that data. Businesses can use this data to make sound decisions that improve processes and efficiency. To do this, they must hire professionals who can work with large volumes of data. If you are a budding data analyst or want to brush up on your skills, this book is for you.