Coding Portfolio

Hi, I am Ananya, a software engineer based in the US with over five years of experience. Please reach out to learn more about my work and how we can help your business.

Below are recent projects I have worked on as a software engineering consultant. They range from application and infrastructure development to DevOps and security.

Please follow the links in each project to learn more.

Project: SCIDAS, March 2019-present

  • Created Kubernetes deployment files for Galaxy on GCP
  • Extended the existing Helm charts for external application deployment on the organization-maintained GCP stack
  • Modified Nextflow pipelines to run on Kubernetes on GCP
  • Set up an NFS server for storing analysis data, and created data- and user-specific persistent volumes and persistent volume claims
  • Successfully launched and optimized several Nextflow workflows for data analysis
  • Project Link:
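The NFS-backed storage set-up above pairs each PersistentVolume with a matching PersistentVolumeClaim. As a minimal sketch, here is how such a pair can be built as plain Python dicts; the server address, export path, and size are placeholders, not the actual SCIDAS values:

```python
def nfs_pv_manifests(name, nfs_server, nfs_path, size_gi=10):
    """Build an NFS-backed PersistentVolume + PersistentVolumeClaim pair.

    nfs_server and nfs_path are placeholders; in the real set-up they
    would point at the analysis-data NFS server.
    """
    pv = {
        "apiVersion": "v1",
        "kind": "PersistentVolume",
        "metadata": {"name": f"{name}-pv"},
        "spec": {
            "capacity": {"storage": f"{size_gi}Gi"},
            "accessModes": ["ReadWriteMany"],  # NFS allows shared mounts
            "nfs": {"server": nfs_server, "path": nfs_path},
        },
    }
    pvc = {
        "apiVersion": "v1",
        "kind": "PersistentVolumeClaim",
        "metadata": {"name": f"{name}-pvc"},
        "spec": {
            "accessModes": ["ReadWriteMany"],
            "resources": {"requests": {"storage": f"{size_gi}Gi"}},
            "volumeName": f"{name}-pv",  # bind the claim to this specific PV
        },
    }
    return pv, pvc

pv, pvc = nfs_pv_manifests("galaxy-user1", "10.0.0.5", "/exports/analysis")
```

Binding the claim to a named volume (rather than relying on dynamic provisioning) is what makes the volumes data- and user-specific.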

Project: Gen3 Data Commons, February 2019-April 2019

  • Created Dockerised JupyterHub notebooks for multi-user profiles, bundling machine learning frameworks (Keras, TensorFlow, PyTorch) and machine learning Python code
  • Created a pipeline integrating the machine learning software and server with Gen3’s infrastructure
  • Vetted containers using Anchore Engine for security
  • Replicated in-house Kubernetes set-up for Gen3 deployment
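Those notebook images start from a shared base and layer the ML frameworks on top. As an illustrative sketch (the base image and package list are assumptions, not the ones used in the Gen3 deployment), a Dockerfile for such an image can be rendered like this:

```python
def ml_notebook_dockerfile(frameworks=("tensorflow", "keras", "torch")):
    """Render a Dockerfile for a single-user Jupyter notebook image.

    The base image and framework list are illustrative placeholders,
    not the ones used in the actual Gen3 deployment.
    """
    lines = [
        "FROM jupyter/base-notebook:latest",  # community Jupyter base image
        "USER root",
        "RUN apt-get update && apt-get install -y --no-install-recommends git",
        "USER $NB_UID",  # drop back to the unprivileged notebook user
        f"RUN pip install --no-cache-dir {' '.join(frameworks)}",
    ]
    return "\n".join(lines) + "\n"

dockerfile = ml_notebook_dockerfile()
```

The resulting image would then be scanned (e.g. with Anchore Engine) before being admitted to the cluster.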

Project: HeLx SciDAS, March 2019-present

  • Instrumental in creating cloud-agnostic infrastructure for data analysis and command-line bioinformatics tools
  • Undertook Dockerisation and container vetting using Anchore Engine
  • Managed the early-user database and implemented authentication and authorization steps
  • Created the code and infrastructure pipeline in Python from CommonsShare (the in-house software platform) to Kubernetes on GCP by extending the Kubernetes API through the Python Kubernetes client. This allowed us to launch Dockerised application files (deployment and service YAML) for Kubernetes directly from CommonsShare onto servers on GCP
  • Project Link:
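The launch step above comes down to building a Deployment and Service for each Dockerised app and submitting them to the cluster. A minimal sketch of the manifest-building half, with all names and images as illustrative placeholders (in the real pipeline the objects would be submitted via the official Kubernetes Python client, e.g. `AppsV1Api().create_namespaced_deployment`):

```python
def app_manifests(app, image, port, namespace="default"):
    """Build Deployment and Service manifests for a Dockerised app.

    All names, images, and ports here are illustrative placeholders;
    submission to the cluster is left to the Kubernetes Python client.
    """
    labels = {"app": app}
    deployment = {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": app, "namespace": namespace},
        "spec": {
            "replicas": 1,
            "selector": {"matchLabels": labels},
            "template": {
                "metadata": {"labels": labels},
                "spec": {"containers": [
                    {"name": app, "image": image,
                     "ports": [{"containerPort": port}]}
                ]},
            },
        },
    }
    service = {
        "apiVersion": "v1",
        "kind": "Service",
        "metadata": {"name": app, "namespace": namespace},
        "spec": {"selector": labels,
                 "ports": [{"port": 80, "targetPort": port}]},
    }
    return deployment, service

dep, svc = app_manifests("blast", "example/blast:1.0", 8080)
```

Keeping the selector labels identical between the Deployment template and the Service is what routes traffic to the launched pods.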

Project: DataSTAGE, October 2018-February 2019

  • Created an infrastructure to support active scientific research for command line bioinformaticians and COPD gene researchers during the alpha and beta user integration
  • The infrastructure was created on AWS using EC2 instances (CPU and GPU based) and the data was hosted on a secure NFS server.
  • Onboarded users had access to the COPD gene data and the Machine Learning tools (written in Python and served by Jupyter and RStudio)
  • Created the Docker container for the machine learning code, provisioned users, and set up security
  • Set up auditd on user machines to track use of the secure COPD gene data (per policy) and shipped the logs via Filebeat to an ELK server hosted on EC2. Configured each ELK server (10 users per server) to take snapshots via cron jobs and transfer the processed data to S3 buckets, from which it could be restored into Kibana for field-specific searches

Project Link:
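The audit trail above rests on auditd records, which carry a record type, a timestamp/serial header, and (for file watches) a policy key. As a small sketch, here is how such a line can be parsed before shipping; the line format follows standard auditd output, but the `copd-data` watch key is a placeholder, not the real policy key:

```python
import re

AUDIT_RE = re.compile(
    r'type=(?P<type>\S+) msg=audit\((?P<ts>[\d.]+):(?P<serial>\d+)\):'
)

def parse_audit_line(line):
    """Extract the record type, timestamp, serial, and watch key from an
    auditd log line.

    Records tagged with a watch key (key="...") on the protected data
    directory are the ones Filebeat would ship to the ELK server; the
    key name used below is a placeholder, not the real policy key.
    """
    m = AUDIT_RE.match(line)
    if not m:
        return None
    rec = m.groupdict()
    rec["ts"] = float(rec["ts"])  # epoch seconds with millisecond precision
    key = re.search(r'key="([^"]+)"', line)
    rec["key"] = key.group(1) if key else None
    return rec

line = ('type=SYSCALL msg=audit(1364481363.243:24287): '
        'arch=c000003e syscall=2 key="copd-data"')
rec = parse_audit_line(line)
```

In the pipeline described above, this kind of field extraction happens in Logstash/Elasticsearch rather than in custom code, which is what makes the restored snapshots searchable by field in Kibana.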

Project: DCPPC, June 2018-October 2018

  • As part of Team Helium (Renaissance Computing Institute (RENCI) at UNC-CH and Duke University) on the NIH Data Commons Pilot Phase Consortium, my task was the creation and implementation of a ChIP-seq workflow via a Workflow Execution Service (WES), a REST service for running CWL workflows
  • The resulting data files were registered using the in-house data registration service and transferred to the researcher via DOS-API
  • The analytical workflow (version-controlled with Git) was written in Common Workflow Language with JavaScript expressions and run on Toil, a workflow engine. Toil was engineered to run on Mesos using our in-house middleware ‘Pivot’. Several programs written in C++ were Dockerised and ported into the workflow as part of the execution process
  • Project Link:
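Submitting a CWL workflow through WES amounts to a run request whose form fields follow the GA4GH Workflow Execution Service schema. A minimal sketch of assembling such a request; the workflow URL, input file, and endpoint are placeholders, not the actual DCPPC ones:

```python
import json

def wes_run_request(workflow_url, params):
    """Assemble the form fields for a GA4GH WES run-workflow request.

    The field names follow the GA4GH WES schema; the workflow URL and
    parameters passed in are illustrative placeholders.
    """
    return {
        "workflow_url": workflow_url,
        "workflow_type": "CWL",
        "workflow_type_version": "v1.0",
        "workflow_params": json.dumps(params),  # inputs sent as a JSON string
    }

req = wes_run_request(
    "https://example.org/chipseq.cwl",  # placeholder CWL location
    {"reads": {"class": "File", "path": "sample.fastq.gz"}},
)
# The request would then be POSTed to the service's runs endpoint,
# e.g. {wes_base}/ga4gh/wes/v1/runs, where wes_base is hypothetical.
```

The service returns a run ID that can be polled for status, after which the resulting data files were registered and handed to the researcher via the DOS API.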

Please reach out for a 15-45 minute discussion to learn how our services can help your business!

References and testimonials available upon request.

Book a free consultation today.
