- I am a software engineer who likes building and working with highly scalable systems. I currently work with the analytics and serviceability teams at Nutanix.
- I was a Master's student in Computer Science at Stanford; my concentration was databases, web and data mining/machine learning, and systems.
- I have experience working in research labs, in product development, and at web-scale / large-scale distributed-systems companies such as LinkedIn and Nutanix.
- Before coming to Stanford I worked at IBM's India Software Lab, as part of the DB2 product organization. I was lucky to work with some brilliant engineers there, and I am a co-inventor with them on a few patents.
- I have a good working knowledge of databases, systems, and distributed systems, and also some experience in data mining, web programming, and parallel programming.
Sr. Member of Technical Staff @ Nutanix
Software engineer developing scalable, distributed analytics infrastructure, as well as time-series analysis and machine-learning algorithms.
Design and implement:
================================================================
Predictive analytics + distributed systems for capacity planning and system optimization
http://www.nutanix.com/products/features/operational-insights/
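Capacity planning of this kind usually reduces to forecasting a resource-usage time series and estimating the runway until capacity is exhausted. A minimal sketch of that idea (purely illustrative: the function names and the simple-exponential-smoothing model are my own choices here, not Nutanix's actual method):

```python
def ses_forecast(series, alpha=0.5):
    """One-step-ahead forecast via simple exponential smoothing."""
    level = series[0]
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

def days_until_full(daily_usage, capacity):
    """Naive capacity runway: extrapolate the average daily growth linearly."""
    diffs = [b - a for a, b in zip(daily_usage, daily_usage[1:])]
    growth = sum(diffs) / len(diffs)
    if growth <= 0:
        return float("inf")  # usage flat or shrinking: no exhaustion in sight
    return (capacity - daily_usage[-1]) / growth
```

Real systems layer trend/seasonality models and distributed collection on top, but the core loop is this: smooth the history, project forward, alert before the projection crosses capacity.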
================================================================
Build a distributed, resilient health-check engine:
Cluster Health / NCC (Nutanix cluster checker)
http://nutanix.blogspot.com/2014/07/40-feature-cluster-health-framework.html
http://www.nutanix.com/blog/2014/04/16/nos-4-0-cluster-health-slices-dices/
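An engine like this runs many independent checks across a cluster and has to stay up even when individual checks fail or hang. A minimal sketch of that pattern (illustrative only, not the actual NCC code; all names are hypothetical):

```python
import concurrent.futures
from concurrent.futures import ThreadPoolExecutor

def run_checks(checks, timeout=5.0):
    """Run each health check in its own worker thread so that one failing
    or slow check cannot take the others down with it."""
    results = {}
    with ThreadPoolExecutor(max_workers=8) as pool:
        futures = {name: pool.submit(fn) for name, fn in checks.items()}
        for name, fut in futures.items():
            try:
                results[name] = "PASS" if fut.result(timeout=timeout) else "FAIL"
            except concurrent.futures.TimeoutError:
                results[name] = "TIMEOUT"  # check is hung; report, don't block
            except Exception:
                results[name] = "ERROR"    # the check itself crashed
    return results
```

The key design point is isolation: every check result is reported independently, so a crashed or timed-out check degrades one line of the report rather than the whole run.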
================================================================
Provide rolling upgrades for the cluster checker.
1-click upgrade for NCC:
http://aakashjacob.blogspot.in/2015/01/ncc-one-click-upgrade.html
================================================================
Mentored an intern in developing a failure-injection framework and automating tests.
From August 2013 to Present (2 years 3 months), San Francisco Bay Area

Software Engineer @ LinkedIn
Engineer on Project Voldemort, working in a critical infrastructure team.
Some major things I worked on:
- MapReduce jobs with Hadoop and Avro for the read-only stores:
https://github.com/abh1nay/voldemort/tree/new-build-and-push
- Faust: a Hadoop -> Kafka -> Voldemort pipeline
Development, maintenance, and operations for Faust
Highly available streaming for Faust
- JNA layer for mmapping and mlocking indexes in memory
From July 2012 to August 2013 (1 year 2 months), San Francisco Bay Area

Research Assistant @ Stanford University
I was part of Professor Monica Lam's MobiSocial group.
Publications:
- Effective Browsing and Serendipitous Discovery with an Experience-Infused Browser
Sudheendra Hangal, Abhinay Nagpal, and Monica S. Lam
To appear in Proceedings of the 2012 International Conference on Intelligent User Interfaces (IUI)
- Friends, Romans, Countrymen: Lend me your URLs.
Using Social Chatter to Personalize Web Search.
Abhinay Nagpal, Sudheendra Hangal, Rifat Reza Joyee, and Monica S. Lam
To appear in Proceedings of the ACM Conference on Computer Supported Cooperative Work (CSCW)
From January 2011 to June 2012 (1 year 6 months)

Graduate Student, Master's in Computer Science @ Stanford University
Specialization: Databases
Courses: Database Systems, Parallel/Distributed Databases, Operating Systems, Database System Implementation, Machine Learning, Data Mining, Parallel Programming, Programming Languages, Object-Oriented Modeling and Simulation, and a research project in social and mobile networks.
I pursued a research study in Professor Monica Lam's group in the areas of decentralized social networks and data mining.
You can see some of the work I was involved in at:
http://mobisocial.stanford.edu/musemonkey/RNN
http://mobisocial.stanford.edu/musegroups
http://mobisocial.stanford.edu/socialflows
From September 2010 to June 2012 (1 year 10 months)

Intern @
Worked with a very early-stage, stealth-mode cloud-storage security startup. Duties involved designing and prototyping the alpha releases.
Technologies: Java, Android, HTML5
From December 2011 to March 2012 (4 months), San Francisco Bay Area

Engineering Intern @
Worked as an intern in the data services group.
Worked on two projects.
The first involved data ETL using Aster Data, Java, Hadoop, Pig, Avro, and Hive; wrote a couple of UDFs for the transformations.
The second was a proof-of-concept dashboard for capturing and visualizing key metrics.
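A services layer for a dashboard like this commonly memoizes query results so repeated requests don't re-run expensive queries. A minimal sketch of a thread-safe cached store (hypothetical names, not the actual implementation, which was in Java/Jersey):

```python
import threading

class QueryCache:
    """Thread-safe memoizing store: each query is computed once and reused."""

    def __init__(self):
        self._lock = threading.Lock()
        self._data = {}

    def get(self, query, compute):
        with self._lock:
            if query in self._data:
                return self._data[query]
        value = compute(query)  # run the expensive query outside the lock
        with self._lock:
            # setdefault keeps the first value if another thread raced us here
            return self._data.setdefault(query, value)
```

Computing outside the lock keeps slow queries from serializing all readers, at the cost of occasionally computing a racing query twice; `setdefault` makes the race harmless.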
The visualization layer used pseudo-3D transforms to generate 3D cubes, with Protovis and jQuery SVG for the charts and tag clouds. The services layer was implemented with Java and Jersey. Implemented a thread-safe cached store holding queries and their results to boost performance.
From June 2011 to September 2011 (4 months)

Research Assistant @ Stanford University
- Implementing the front end / uncertainty module for the Stanford Geostatistical Modeling Software (SGeMS):
http://sgems.sourceforge.net/
- Spatial uncertainty for stochastic processes is modeled using kernel techniques to aid scientific computing and oil-reservoir modeling.
- Models generated by stochastic spatial simulation are mapped from metric space onto feature space via multidimensional scaling, then clustered for model selection.
From September 2010 to December 2010 (4 months)

Software Engineer, DB2 Advanced Support @ IBM
This role involves a DBA skill set and, to some extent, a system-administration skill set (a good understanding of DB2 on Linux, UNIX, and Windows). It includes interacting directly with customers to solve many different types of problems:
• Working in highly technical environments, operating systems, and information warehouses
• Evaluating diagnostic information to design, develop, and test computer-based systems; developing data and streamlining processes to optimize architecture and evaluate performance and reliability
• Providing remote technical support to IBM's global customers in the most complex and demanding environments, focusing on defect/stability aspects of the product as customers deploy their database solutions
• Dealing with a wide range of customers, including IT personnel and executives, IBM business partners, GBS resources, and Lab Services team members
From December 2007 to August 2010 (2 years 9 months)

Research Intern @ Tata Research Design and Development Center (TRDDC)
Was offered a much-coveted year-long internship. Worked with senior scientists on a project to automatically generate test cases and test data and to manage heterogeneous data sources.
This was implemented using Java and the Eclipse framework.
From June 2006 to April 2007 (11 months)
Masters, Computer Science @ Stanford University, From 2010 to 2012
BE, Computer Science @ Vishwakarma Institute of Technology, From 2003 to 2007
B.C.A

Abhinay Nagpal is skilled in: Java, Algorithms, Hadoop, Distributed Systems, C, Software Engineering, Unix, SQL, Databases, Python, DB2, Linux, C++, Machine Learning, jQuery, Database Administration, Data Mining, HTML, Business Intelligence, MySQL, Java Enterprise Edition, Innovation, REST, Shell Scripting, JSP, Cloud Computing, Android, MapReduce, Parallel Computing, Mobile Applications, JavaScript, Eclipse, CSS, HTML5, PHP, Data Visualization, Software Design, Programming, Invention, Intellectual Property, Computer Science, Patents, Web Development, Perl, Parallel Programming, Parallel Algorithms, R
Websites:
http://www.ibm.com