JPMorgan Chase & Co. (NYSE: JPM) is a leading global financial services firm with assets of $2.6 trillion and operations in more than 60 countries. The firm is a leader in investment banking, financial services for consumers, small business and commercial banking, financial transaction processing, asset management, and private equity.
Global Technology Infrastructure (GTI) is the technology infrastructure organization for the firm, delivering a wide range of products and services and partnering with all lines of business to provide high-quality service delivery, exceptional project execution, and financially disciplined processes in the most cost-effective manner. The objective of GTI is to balance business alignment with the centralized delivery of core products and services. GTI is designed to address the unique infrastructure needs of specific lines of business while leveraging economies of scale across the firm.
The Core Foundation Services (CFS) team is responsible for providing end-to-end support for critical technologies used across the company. This includes Configuration and Orchestration, Identity Management, Name Services, Enterprise Monitoring Solutions, and the automation tools used to manage these technologies.
The BIFrost platform team within Core Services is seeking an infrastructure developer who will help implement a centralized, firm-wide data-mining solution for unstructured and semi-structured data.
The candidate will develop Hadoop and messaging-bus solutions leveraging microservices and APIs across on-premises and off-premises environments. The candidate will also have platform onboarding responsibilities and will be required to integrate unified data services, frameworks, and user-defined functions already in the Hadoop ecosystem.
The candidate should independently manage design, development, testing, deployment, monitoring, and continuous performance improvement of BIFrost technologies (Kafka, HDFS, NiFi, Spark, etc.). The candidate should be someone who can flourish in a fast-paced product development and support environment and go the extra mile to take the team and its deliverables to the next level.
The Senior Software Developer/Platform Reliability Engineer will work as part of an Agile scrum team analyzing, planning, designing, developing, testing, debugging, optimizing, improving, documenting, and deploying complex, scalable, highly available software applications on distributed platforms.
Providing support to development and client teams in DEV/UAT/PROD environments, following ITSM standards and automation practices.
Excellent programming and algorithmic skills, with the ability to pick up new programming languages and deliver application code all the way to the production environment.
Strong knowledge of the design and architecture fundamentals of data-bus, dataflow, and data-reservoir systems for high-throughput, low-latency, enterprise-grade environments.
Strong working experience with distributed computing and concurrent processing is a must.
Proven experience delivering enterprise-grade solutions using Big Data platforms (Cloudera) and related technologies, tools, patterns, and frameworks (Spark, Scala/Java).
Excellent understanding of Agile, CI/CD, and SDLC processes and automated tools, spanning requirements and issue management, defect tracking, source control, build automation, test automation, and release management.
Ability to collaborate and partner with high-performing diverse teams and individuals throughout the firm to accomplish common goals by developing meaningful relationships.
8+ years of experience developing complex Java applications and microservices/APIs.
Excellent communication skills and the ability to work as a team player across multiple development tracks.
8+ years of experience in Core Java, with an excellent grasp of networking, threading, I/O, core Java APIs, and OOP concepts and their implementation.
Experience building Big Data applications using Scala 2.x and the Akka framework is a plus.
4+ years of solid experience working with distributed architectures such as Lambda and Kappa.
Solid distributed-systems fundamentals: the CAP theorem, filesystems and compression, BASE/ACID data stores, resource managers (YARN), computational frameworks (streaming, batch, interactive, real-time), coordination services, schedulers, data-integration frameworks (messaging, workflow, metadata, serialization), and data-analysis tools and operational frameworks (monitoring, benchmarking, etc.).
4+ years of experience working with Big Data technologies such as Hadoop, Spark, Kafka, Hive, HBase, Sqoop, and other NoSQL solutions.
Experience developing data pipelines, metadata management, and data transformation using Spark, Kafka, Hadoop, and NiFi.
Good project experience with HDFS, Hive, Spark, YARN, and MapReduce.
Good project experience with full-text search technologies such as Elasticsearch, and with building reporting and analytics platforms.
Good experience working in Linux environments and in an onsite/offshore model, including performance engineering and tuning for the technologies listed above.
Good understanding of security frameworks and protocols such as Kerberos, SSL/TLS, and SASL.
Ability to quickly learn and work with new, cutting-edge technologies.
Please note that J.P. Morgan will not accept unsolicited approaches or speculative CVs, nor will J.P. Morgan be responsible for any related fees, from Third Party Firms who are not preferred suppliers.
The firm invites all interested and qualified candidates to apply for employment opportunities.
If you are a US or Canadian applicant with a disability who is unable to use our online tools to search and apply for jobs, please contact us by calling (US and Canada Only) 1-866-777-4690. Please indicate the specifics of the assistance needed.