From strategy and proof of concept to implementation and optimization, our solutions deliver the insights you need.

Data is the new digital battlefield for corporate differentiation. Taking a proactive approach to data management and analysis allows you to uncover hidden insights and build organization-wide success.

Apache Hadoop and Spark as an ETL solution!

hTrunk’s rapid application development platform enables organizations to leverage the open-source Apache Hadoop ecosystem for distributed storage and Apache Spark for high-performance data processing, reducing operational costs.

Unique Consulting for Unique Opportunities

Apex’s professional services are built to help organizations address the unique challenges that come with meeting their data processing needs. As an hTrunk consulting partner, our experts can provide insight and assistance in every part of the design and implementation of “Data Lake” and ETL solutions in Apache Hadoop using the hTrunk platform.

Use Hadoop to manage your ETL with hTrunk.

hTrunk Platform Features

hTrunk is a suite of components built to allow sophisticated Hadoop application development without having to maintain complicated backend code. hTrunk lets enterprises tackle the challenges of the Hadoop ecosystem and big data application development cost-effectively and efficiently.


An intuitive collection of data processing, ingestion and egress components that mask the complexity of Hadoop data application development.


A unified graphical interface to author, execute, build and migrate projects. Once defined, a project can be run either via the graphical interface or through the scheduler.


Combines multiple sequential jobs into one logical unit of work. It integrates with YARN, enabling users to orchestrate and pipeline processing workflow jobs to run on the cluster.
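hTrunk’s workflow engine itself is proprietary, but the core idea of chaining sequential jobs into one unit of work can be sketched in a few lines of plain Python; the job names below are hypothetical, not part of any hTrunk API:

```python
# Toy illustration only: each "job" consumes the previous job's output,
# so the whole chain succeeds or fails as one logical unit of work.

def ingest(_):
    # stand-in for pulling raw records from a source system
    return [" Alice ", "BOB", " carol"]

def clean(records):
    # normalize whitespace and case
    return [r.strip().lower() for r in records]

def publish(records):
    # stand-in for writing to the data lake; here we just sort the output
    return sorted(records)

def run_pipeline(jobs, data=None):
    """Run jobs in order, piping each job's output into the next."""
    for job in jobs:
        data = job(data)
    return data

result = run_pipeline([ingest, clean, publish])
```

In a real deployment the orchestrator would submit each stage to YARN and track its status rather than calling local functions.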


A user-friendly graphical interface to manage projects and repositories, user accounts, and hTrunk server configuration.

Apex’s Services

Our Hadoop consultants solve enterprises’ data management challenges – whether using Hadoop as a data hub, data warehouse, staging environment or analytic sandbox. Our consultants specialize in delivering scalable Hadoop-based solutions by applying their expertise across the Hadoop ecosystem, with tools such as HBase, Pig, Flume, Hive, Sqoop, Oozie and ZooKeeper.

We take a pragmatic approach to building solutions: our consultants enable success at every step, from strategy and proof of concept to solution design and delivery. Drawing on proven experience, industry-specific consulting expertise and best-practices advice, our consultants define the key starting elements of a Hadoop application development project.

Do you need a feature or integration not yet present in your software or the Hadoop ecosystem? Our consultants specialize in developing it for you, according to your custom specifications and integration guidelines. Our professional background in data ingestion and integration, business logic implementation, testing, quality assurance and post-install support allows us to tackle a wide variety of cutting-edge Hadoop problems.

Rapid Application Development with no Special Skills

  • Rapid application development — An intuitive interface with pre-built components to process data.
  • Best-of-both-worlds combination — Architected from the ground up for Apache Hadoop and Apache Spark; natively takes advantage of Hadoop’s scalability and Spark’s execution power.
  • No special skills — The user-friendly interface requires little or no knowledge of Hadoop and its ecosystem.
  • Solutions of any size — Single- to multi-node deployments, configured to go from pilot to production with no changes to jobs.
  • Short implementation time — Building Hadoop applications takes minimal effort; you can get going in minutes.
  • Built-in scheduler — Schedule and monitor jobs in real time.

  • Deploy instantly — Move projects between DEV, UAT and production instantly.
  • Coordinate with your team — Create environments and shared projects to coordinate development efficiently.
  • Local execution — Create and unit test jobs locally on a smaller data set.

Unlock your Big Data opportunities. Find out how hTrunk can help.

Get in Touch