
Devashree Tripathy
Postdoctoral Fellow at Harvard University


Harvard John A. Paulson School of Engineering and Applied Sciences
Harvard University
Boston, MA 02134

Email: dtrip003 [at] ucr [dot] edu

Google Scholar dblp LinkedIn Facebook CV



About Me

I am a Postdoctoral Fellow in Computer Science at Harvard University, working with Dr. David Brooks in the Harvard Architecture, Circuits, and Compilers Group. I received my PhD in Computer Science from the University of California, Riverside, where I was advised by Distinguished Prof. Laxmi N. Bhuyan and Prof. Daniel Wong. My interests lie in computer architecture, GPGPU architecture design, high-performance computing, and fault-tolerant systems. I have worked on several projects on data-dependent applications and low-power design of execution units for GPGPUs, achieving notable improvements in performance, power, and area.



News

August 25, 2021
Successfully defended my PhD dissertation titled "Improving Performance and Energy Efficiency of GPUs through Locality Analysis". Officially a Doctor now :-). I shall be joining Harvard University as a Postdoctoral Fellow in Computer Architecture/Systems and VLSI this Fall.
July 15, 2021
Two of our papers accepted to appear in NAS 2021.
February 22, 2021
Passed PhD Dissertation Proposal Defense Exam.
February 16, 2021
Our paper on GPU Data-Locality and Thread-Block Scheduling accepted to TACO 2021.
June 15, 2020
Selected for GHC 2020 student scholarship.
May 22, 2020
Two of our papers accepted to ISLPED 2020.
June 2, 2019
Won student travel grant to attend ISCA 2019 and HPDC 2019.
May 2, 2019
Our paper on GPU Undervolting and Reliability accepted to ICS 2019.
Oct 5, 2018
Our book on BCI System Design is online now!
May 10, 2018
Our book on BCI System Design approved for publication in the SpringerBriefs in Computational Intelligence series.
Apr 14, 2018
Won student travel grant to attend ISCA 2018.
Feb 16, 2018
I will be joining Samsung Austin R&D Center (Austin, TX) as a GPU Modeling Intern for Summer 2018.
Feb 5, 2018
Won student travel grant to attend Grad Cohort for Women 2018.
Sept 14, 2017
Won student travel grant to attend MICRO 2017.
Sept 9, 2017
Won student travel grant to attend Third Career Workshop for Women and Minorities in Computer Architecture.
July 2, 2017
Our paper on Data Dependency Support in GPU accepted to MICRO 2017.
August 8, 2016
Won student travel grant to attend NAS 2016.
March 15, 2016
Passed Oral Qualifying Exam. PhD Candidate now!


Teaching

CS005
Introduction to Computer Programming (Fall 2019)

CS203
Advanced Computer Architecture (Winter 2018, Winter 2019)

CS213
Multiprocessor Architecture and Programming (Spring 2018, Winter 2020)


Publications

Google Scholar; DBLP

Conference

C7

LocalityGuru: A PTX Analyzer for Extracting Thread Block-level Locality in GPGPUs NAS '21

Devashree Tripathy, Amirali Abdolrashidi, Quan Fan, Daniel Wong and Manoranjan Satpathy
15th IEEE International Conference on Networking, Architecture, and Storage (NAS 2021).

Exploiting data locality in GPGPUs is critical for efficiently using the smaller data caches and handling the memory bottleneck problem. This paper proposes a thread block-centric locality analysis, which identifies the locality among the thread blocks (TBs) in terms of a number of common data references. In LocalityGuru, we seek to employ a detailed just-in-time (JIT) compilation analysis of the static memory accesses in the source code and derive the mapping between the threads and data indices at kernel-launch-time. Our locality analysis technique can be employed at multiple granularities such as threads, warps, and thread blocks in a GPU Kernel. This information can be leveraged to help make smarter decisions for locality-aware data-partition, memory page data placement, cache management, and scheduling in single-GPU and multi-GPU systems. The results of the LocalityGuru PTX analyzer are then validated by comparing with the Locality graph obtained through profiling. Since the entire analysis is carried out by the compiler before the kernel launch time, it does not introduce any timing overhead to the kernel execution time.
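A minimal sketch of the idea behind this analysis, assuming the static analysis has already resolved each thread block's data indices at kernel-launch time (the toy access pattern, names, and halo size below are illustrative assumptions, not the LocalityGuru implementation):

# Toy thread-block-level locality extraction: count common data references between TBs.
from itertools import combinations

def tb_data_indices(tb_id, tb_size=64):
    # Hypothetical access pattern: each TB reads its own tile plus an 8-element halo
    # that overlaps the next TB's tile.
    base = tb_id * tb_size
    return set(range(base, base + tb_size + 8))

def locality_graph(num_tbs):
    footprints = {tb: tb_data_indices(tb) for tb in range(num_tbs)}
    edges = {}
    for a, b in combinations(range(num_tbs), 2):
        shared = len(footprints[a] & footprints[b])  # number of common data references
        if shared:
            edges[(a, b)] = shared
    return edges

print(locality_graph(4))  # {(0, 1): 8, (1, 2): 8, (2, 3): 8}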
To Appear after the Conference
C6

ICAP: Designing Inrush Current Aware Power Gating Switch for GPGPU NAS '21

Hadi Zamani, Devashree Tripathy, Ali Jahanshahi, and Daniel Wong
15th IEEE International Conference on Networking, Architecture, and Storage (NAS 2021).

The leakage energy of GPGPUs can be reduced by power gating the idle logic or undervolting the storage structures; however, the performance and reliability of the system degrade due to the large wake-up time and inrush current at the time of activation. In this paper, we thoroughly analyze the realistic Break-Even Time (BET) and inrush current for various components in the GPGPU architecture, considering the recent design of multi-modal Power Gating Switches (PGS). Then, we introduce a new PGS which addresses the drawbacks of current PGS designs. Our redesigned PGS is carefully tailored to minimize the inrush current and BET. GPGPU-Sim simulation results for various applications show that, by incorporating the proposed PGS, we can save up to 82%, 38%, and 60% of leakage energy for register files, integer units, and floating-point units, respectively.
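As a rough illustration of the trade-off analyzed here (standard power-gating reasoning, with symbols chosen for this sketch rather than taken from the paper), gating a unit only saves energy when its idle interval exceeds the break-even time:

\[ t_{\text{idle}} > \text{BET} \approx \frac{E_{\text{switch}} + E_{\text{inrush}}}{P_{\text{leak,saved}}} \]

where E_switch is the energy spent driving the power-gating switch, E_inrush the energy cost of the wake-up current surge, and P_leak,saved the leakage power eliminated while the unit is gated; reducing the inrush term is precisely what shortens the BET.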
To Appear after the Conference
C5

Slumber: Static-Power Management for GPGPU Register Files ISLPED '20

Devashree Tripathy, Hadi Zamani, Debiprasanna Sahoo, Laxmi Narayan Bhuyan, and Manoranjan Satpathy
ACM/IEEE International Symposium on Low Power Electronics and Design (ISLPED), 2020.

The leakage power dissipation has become one of the major concerns with technology scaling. The GPGPU register file has grown in size over the last decade in order to support the parallel execution of thousands of threads. Given that each thread has its own dedicated set of physical registers, these registers remain idle when the corresponding threads go for long-latency operations. Existing research shows that the leakage energy consumption of the register file can be reduced by undervolting the idle registers to a data-retentive low-leakage voltage (Drowsy Voltage) to ensure that the data is not lost while not in use. In this paper, we develop a realistic model for determining the wake-up time of registers from various undervolting and power-gating modes. Next, we propose a hybrid energy-saving technique where a combination of power gating and undervolting can be used to save optimum energy depending on the idle period of the registers, with a negligible performance penalty. Our simulations show that the hybrid energy-saving technique results in 94% leakage energy savings in register files on average when compared with the conventional clock-gating technique, and 9% higher leakage energy savings compared to the state-of-the-art technique.
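A toy sketch of the hybrid policy, assuming per-mode leakage and wake-up costs are known (the numbers and the 10x amortization rule below are illustrative assumptions, not the Slumber model):

# Pick the lowest-leakage idle state whose wake-up cost is amortized by the idle period.
MODES = [
    # (name, relative leakage while idle, wake-up penalty in cycles) -- assumed values
    ("clock-gate", 1.00, 0),
    ("drowsy",     0.30, 2),    # data-retentive low-voltage (undervolted) state
    ("power-gate", 0.05, 20),   # data lost, longest wake-up
]

def pick_mode(predicted_idle_cycles):
    best = MODES[0]
    for name, leak, wakeup in MODES:
        # enter a deeper state only if the idle period dwarfs its wake-up cost
        if predicted_idle_cycles >= 10 * wakeup and leak < best[1]:
            best = (name, leak, wakeup)
    return best[0]

for idle in (5, 40, 500):
    print(idle, "->", pick_mode(idle))   # clock-gate, drowsy, power-gate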
@inproceedings{tripathy2020slumber, title={Slumber: static-power management for gpgpu register files}, author={Tripathy, Devashree and Zamani, Hadi and Sahoo, Debiprasanna and Bhuyan, Laxmi N and Satpathy, Manoranjan}, booktitle={Proceedings of the ACM/IEEE International Symposium on Low Power Electronics and Design}, pages={109--114}, year={2020} }
C4

SAOU: Safe Adaptive Overclocking and Undervolting for Energy-Efficient GPU Computing ISLPED '20

Hadi Zamani, Devashree Tripathy, Laxmi Narayan Bhuyan, and Zizhong Chen
ACM/IEEE International Symposium on Low Power Electronics and Design (ISLPED), 2020.

The current trend of ever-increasing performance in scientific applications comes with tremendous growth in energy consumption. In this paper, we present a framework for GPU applications, which reduces energy consumption in GPUs through Safe Overclocking and Undervolting (SAOU) without sacrificing performance. The idea is to increase the frequency beyond the safe frequency f_safeMax and undervolt below V_safeMin to get maximum energy saving. Since such overclocking and undervolting may give rise to faults, we employ an enhanced checkpoint-recovery technique to cover the possible errors. Empirically, we explore different errors and derive a fault model that can set the undervolting and overclocking level for maximum energy saving. We target the cuBLAS Matrix Multiplication (cuBLAS-MM) kernel for error correction using the checkpoint and recovery (CR) technique as an example of scientific applications. In the case of cuBLAS, SAOU achieves up to 22% energy reduction through undervolting and overclocking without sacrificing the performance.
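A minimal checkpoint-and-recovery skeleton in the spirit of this technique (purely illustrative: the real framework adjusts GPU voltage and frequency and guards cuBLAS kernels, none of which is modeled here):

# Checkpoint before the risky region, verify the result, and retry on a detected fault.
import copy
import random

def run_kernel_unreliably(data, fault_rate):
    # Stand-in for a kernel run beyond the safe V/f point: occasionally corrupts output.
    result = [x * 2 for x in data]
    if random.random() < fault_rate:
        result[0] += 1  # injected silent error
    return result

def result_is_sane(inputs, outputs):
    # Stand-in for the paper's error check; here we simply recompute cheaply.
    return outputs == [x * 2 for x in inputs]

def guarded_run(data, fault_rate=0.3, max_retries=5):
    checkpoint = copy.deepcopy(data)             # checkpoint before overclocking/undervolting
    for _ in range(max_retries):
        out = run_kernel_unreliably(checkpoint, fault_rate)
        if result_is_sane(checkpoint, out):      # accept only verified results,
            return out                           # otherwise roll back to the checkpoint and retry
    raise RuntimeError("repeated faults: revert to safe voltage/frequency and re-run")

print(guarded_run([1, 2, 3]))                    # [2, 4, 6] once a clean run succeeds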
@inproceedings{zamani2020saou, title={SAOU: safe adaptive overclocking and undervolting for energy-efficient GPU computing}, author={Zamani, Hadi and Tripathy, Devashree and Bhuyan, Laxmi and Chen, Zizhong}, booktitle={Proceedings of the ACM/IEEE International Symposium on Low Power Electronics and Design}, pages={205--210}, year={2020} }
C3

GreenMM: Energy-Efficient GPU Matrix Multiplication Through Undervolting ICS '19

Hadi Zamani, Yuanlai Liu, Devashree Tripathy, Laxmi Narayan Bhuyan, and Zizhong Chen
International Conference on Supercomputing (ICS), 2019. (acceptance rate: 23.3%)

The current trend of ever-increasing performance in scientific applications comes with tremendous growth in energy consumption. In this paper, we present the GreenMM framework for matrix multiplication, which reduces energy consumption in GPUs through undervolting without sacrificing the performance. The idea in this paper is to undervolt the GPU beyond the minimum operating voltage (Vmin) to save maximum energy while keeping the frequency constant. Since such undervolting may give rise to faults, we design an Algorithm-Based Fault Tolerance (ABFT) algorithm to detect and correct those errors. We target cuBLAS Matrix Multiplication (cuBLAS-MM), as a key kernel used in many scientific applications. Empirically, we explore different errors and derive a fault model as a function of undervolting levels and matrix sizes. Then, using the model, we configure the proposed FT-cuBLAS-MM algorithm. We show that energy consumption is reduced by up to 19.8%. GreenMM also improves the GFLOPS/Watt by 9% with negligible performance overhead.
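For readers unfamiliar with ABFT, a small sketch of checksum-style ABFT for matrix multiplication, the kind of check this approach relies on (illustrative NumPy code, not the GreenMM implementation):

import numpy as np

def abft_matmul(A, B):
    # Append a column-checksum row to A and a row-checksum column to B.
    Ac = np.vstack([A, A.sum(axis=0)])
    Br = np.hstack([B, B.sum(axis=1, keepdims=True)])
    Cf = Ac @ Br                       # the full product carries checksums of C = A @ B
    C = Cf[:-1, :-1]
    row_ok = np.allclose(Cf[:-1, -1], C.sum(axis=1))   # row checksums of C
    col_ok = np.allclose(Cf[-1, :-1], C.sum(axis=0))   # column checksums of C
    return C, row_ok and col_ok        # a mismatch flags a fault to correct or recompute

A, B = np.random.rand(4, 3), np.random.rand(3, 5)
C, ok = abft_matmul(A, B)
print(ok, np.allclose(C, A @ B))       # True True in the fault-free case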
@inproceedings{DBLP:conf/ics/ZamaniLTBC19, author = {Hadi Zamani and Yuanlai Liu and Devashree Tripathy and Laxmi N. Bhuyan and Zizhong Chen}, title = {GreenMM: energy efficient {GPU} matrix multiplication through undervolting}, booktitle = {Proceedings of the {ACM} International Conference on Supercomputing, {ICS} 2019, Phoenix, AZ, USA, June 26-28, 2019}, pages = {308--318}, year = {2019}, crossref = {DBLP:conf/ics/2019}, url = {https://doi.org/10.1145/3330345.3330373}, doi = {10.1145/3330345.3330373}, timestamp = {Wed, 19 Jun 2019 08:40:19 +0200}, biburl = {https://dblp.org/rec/bib/conf/ics/ZamaniLTBC19}, bibsource = {dblp computer science bibliography, https://dblp.org} }
C2

WIREFRAME: Supporting Data-dependent Parallelism through Dependency Graph Execution in GPUs. MICRO '17

AmirAli Abdolrashidi, Devashree Tripathy, Mehmet Esat Belviranli, Laxmi Narayan Bhuyan, and Daniel Wong
The 50th International Symposium on Microarchitecture (MICRO), 2017. (acceptance rate: 18.6%)

GPUs lack fundamental support for data-dependent parallelism and synchronization. While CUDA Dynamic Parallelism signals progress in this direction, many limitations and challenges still remain. This paper introduces WIREFRAME, a hardware-software solution that enables generalized support for data-dependent parallelism and synchronization. Wireframe enables applications to naturally express execution dependencies across different thread blocks through a dependency graph abstraction at run-time, which is sent to the GPU hardware at kernel launch. At run-time, the hardware enforces the dependencies specified in the dependency graph through a dependency-aware thread block scheduler. Overall, Wireframe is able to improve total execution time up to 65.20% with an average of 45.07%.
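A toy host-side model of what a dependency-aware thread block scheduler does with such a graph (conceptual sketch only; in Wireframe the dependency graph is consumed by GPU hardware at kernel launch, not by host code):

# Issue thread blocks in dependency order: a TB becomes ready once all its producers finish.
from collections import deque

def schedule_tbs(num_tbs, deps):
    """deps: list of (producer_tb, consumer_tb) edges."""
    indeg = [0] * num_tbs
    succs = [[] for _ in range(num_tbs)]
    for src, dst in deps:
        succs[src].append(dst)
        indeg[dst] += 1
    ready = deque(tb for tb in range(num_tbs) if indeg[tb] == 0)
    issue_order = []
    while ready:
        tb = ready.popleft()            # the scheduler issues any ready TB
        issue_order.append(tb)
        for nxt in succs[tb]:           # retiring a TB may release its consumers
            indeg[nxt] -= 1
            if indeg[nxt] == 0:
                ready.append(nxt)
    return issue_order

# Diamond dependency: TB0 -> {TB1, TB2} -> TB3
print(schedule_tbs(4, [(0, 1), (0, 2), (1, 3), (2, 3)]))   # [0, 1, 2, 3]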
@inproceedings{abdolrashidi2017wireframe, title={Wireframe: supporting data-dependent parallelism through dependency graph execution in GPUs}, author={Abdolrashidi, Amir Ali and Tripathy, Devashree and Belviranli, Mehmet Esat and Bhuyan, Laxmi Narayan and Wong, Daniel}, booktitle={Proceedings of the 50th Annual IEEE/ACM International Symposium on Microarchitecture}, pages={600--611}, year={2017}, organization={ACM} }
C1

Design and Implementation of Brain Computer Interface Based Robot Motion Control Springer'14

Devashree Tripathy, and Jagdish Lal Raheja
Proceedings of the 3rd International Conference on Frontiers of Intelligent Computing: Theory and Applications (FICTA), 2014.

In this paper, a Brain Computer Interactive (BCI) robot motion control system for patients’ assistance is designed and implemented. The proposed system acquires data from the patient’s brain through a group of sensors using Emotiv Epoc neuroheadset. The acquired signal is processed. From the processed data the BCI system determines the patient’s requirements and accordingly issues commands (output signals). The processed data is translated into action using the robot as per the patient’s requirement. A Graphics user interface (GUI) is developed by us for the purpose of controlling the motion of the Robot. Our proposed system is quite helpful for persons with severe disabilities and is designed to help persons suffering from spinal cord injuries/ paralytic attacks. It is also helpful to all those who can’t move physically and find difficulties in expressing their needs verbally.
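The overall shape of the acquire-process-decide-act loop described above, with a hypothetical preprocessing step and classifier standing in for the real EEG pipeline (illustrative only; the actual system reads the Emotiv EPOC headset and drives a robot):

def preprocess(window):
    # Toy smoothing stand-in for the signal-processing stage: 2-sample moving average.
    return [(a + b) / 2 for a, b in zip(window, window[1:])]

def classify(features):
    # Hypothetical rule standing in for intent detection: positive mean -> move forward.
    return "FORWARD" if sum(features) / len(features) > 0 else "STOP"

def bci_loop(eeg_windows, send_to_robot):
    for window in eeg_windows:                    # acquire
        command = classify(preprocess(window))    # process and decide
        send_to_robot(command)                    # act

bci_loop([[0.2, 0.4, 0.1], [-0.3, -0.2, 0.1]], print)   # FORWARD, then STOP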
@Inbook{Tripathy2015, author="Tripathy, Devashree and Raheja, Jagdish Lal", editor="Satapathy, Suresh Chandra and Biswal, Bhabendra Narayan and Udgata, Siba K. and Mandal, J. K.", title="Design and Implementation of Brain Computer Interface Based Robot Motion Control", bookTitle="Proceedings of the 3rd International Conference on Frontiers of Intelligent Computing: Theory and Applications (FICTA) 2014: Volume 2", year="2015", publisher="Springer International Publishing", address="Cham", pages="289--296", abstract="In this paper, a Brain Computer Interactive (BCI) robot motion control system for patients' assistance is designed and implemented. The proposed system acquires data from the patient's brain through a group of sensors using Emotiv Epoc neuroheadset. The acquired signal is processed. From the processed data the BCI system determines the patient's requirements and accordingly issues commands (output signals). The processed data is translated into action using the robot as per the patient's requirement. A Graphics user interface (GUI) is developed by us for the purpose of controlling the motion of the Robot. Our proposed system is quite helpful for persons with severe disabilities and is designed to help persons suffering from spinal cord injuries/ paralytic attacks. It is also helpful to all those who can't move physically and find difficulties in expressing their needs verbally.", isbn="978-3-319-12012-6", doi="10.1007/978-3-319-12012-6_32", url="https://doi.org/10.1007/978-3-319-12012-6_32" }

Journal

J6

PAVER: Locality Graph-based Thread Block Scheduling for GPUs ACM TACO'21

Devashree Tripathy, Amirali Abdolrashidi, Laxmi Bhuyan, Liang Zhou and Daniel Wong
ACM Transactions on Architecture and Code Optimization. (Impact Factor: 1.309 (2019), SCImago Journal Rank (SJR): 0.263)

The massive parallelism present in GPUs comes at the cost of reduced L1 and L2 cache sizes per thread, leading to serious cache contention problems such as thrashing. Hence, the data access locality of an application should be considered during thread scheduling to improve execution time and energy consumption. Recent works have tried to use the locality behavior of regular and structured applications in thread scheduling, but the difficult case of irregular and unstructured parallel applications remains to be explored. We present PAVER, a priority-aware vertex scheduler, which takes a graph-theoretic approach towards thread scheduling. We analyze the cache locality behavior among thread blocks (TBs) through a just-in-time (JIT) compilation, and represent the problem using a graph representing the TBs and the locality among them. This graph will then be partitioned to TB groups that display maximum data sharing, which are then assigned to the same SM by the locality-aware TB scheduler. Through exhaustive simulation in Fermi, Pascal and Volta architectures using a number of scheduling techniques, we show that our graph theoretic-guided TB scheduler reduces L2 accesses by 43.3%, 48.5%, 40.21% and increases the average performance benefit by 30%, 50.4%, 40.2% for the benchmarks with high inter-TB locality.
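A greedy stand-in for the partitioning step, assuming the locality graph is given as shared-reference counts between TBs (the real scheduler uses a proper graph partitioner; this sketch only conveys the goal of co-locating TBs that share data on the same SM):

def group_tbs(num_tbs, edges, group_size):
    """edges: {(tb_a, tb_b): shared_refs} with tb_a < tb_b; returns one TB group per SM."""
    remaining = set(range(num_tbs))
    groups = []
    while remaining:
        group = [remaining.pop()]
        while len(group) < group_size and remaining:
            # pull in the remaining TB that shares the most data with the current group
            best = max(remaining,
                       key=lambda t: sum(edges.get((min(t, g), max(t, g)), 0) for g in group))
            remaining.discard(best)
            group.append(best)
        groups.append(group)
    return groups

edges = {(0, 1): 8, (1, 2): 1, (2, 3): 8}
print(group_tbs(4, edges, group_size=2))   # e.g. [[0, 1], [2, 3]]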


J5

An improved load-balancing mechanism based on deadline failure recovery on GridSim EC, Springer'16

Deepak Kumar Patel, Devashree Tripathy, and C.R. Tripathy
Engineering with Computers April 2016, SPRINGER, Volume 32, Issue 2, pp 173–188 (Impact Factor: 7.963 (2020)).

Grid computing has emerged a new field, distinguished from conventional distributed computing. It focuses on large-scale resource sharing, innovative applications and in some cases, high performance orientation. The Grid serves as a comprehensive and complete system for organizations by which the maximum utilization of resources is achieved. The load balancing is a process which involves the resource management and an effective load distribution among the resources. Therefore, it is considered to be very important in Grid systems. For a Grid, a dynamic, distributed load balancing scheme provides deadline control for tasks. Due to the condition of deadline failure, developing, deploying, and executing long running applications over the grid remains a challenge. So, deadline failure recovery is an essential factor for Grid computing. In this paper, we propose a dynamic distributed load-balancing technique called “Enhanced GridSim with Load balancing based on Deadline Failure Recovery” (EGDFR) for computational Grids with heterogeneous resources. The proposed algorithm EGDFR is an improved version of the existing EGDC in which we perform load balancing by providing a scheduling system which includes the mechanism of recovery from deadline failure of the Gridlets. Extensive simulation experiments are conducted to quantify the performance of the proposed load-balancing strategy on the GridSim platform. Experiments have shown that the proposed system can considerably improve Grid performance in terms of total execution time, percentage gain in execution time, average response time, resubmitted time and throughput. The proposed load-balancing technique gives 7 % better performance than EGDC in case of constant number of resources, whereas in case of constant number of Gridlets, it gives 11 % better performance than EGDC.
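A simplified sketch of the deadline-aware dispatch idea with illustrative job and resource parameters (not the GridSim/EGDFR implementation): a gridlet whose estimated completion time would miss its deadline is flagged for recovery, i.e., resubmission to another resource rather than being dropped.

def assign(gridlets, resources):
    """gridlets: [(id, length, deadline)]; resources: {name: speed}. Least-loaded-first dispatch."""
    finish_time = {r: 0.0 for r in resources}
    placement, deadline_failures = {}, []
    for gid, length, deadline in gridlets:
        r = min(resources, key=lambda name: finish_time[name] + length / resources[name])
        eta = finish_time[r] + length / resources[r]
        if eta <= deadline:
            finish_time[r] = eta
            placement[gid] = r
        else:
            deadline_failures.append(gid)   # recovery path: resubmit in a later round
    return placement, deadline_failures

jobs = [("g1", 100, 30), ("g2", 50, 30), ("g3", 200, 30)]
print(assign(jobs, {"fast": 10.0, "slow": 2.0}))   # ({'g1': 'fast', 'g2': 'fast'}, ['g3'])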

@Article{Patel2016, author="Patel, Deepak Kumar and Tripathy, Devashree and Tripathy, Chitaranjan", title="An improved load-balancing mechanism based on deadline failure recovery on GridSim", journal="Engineering with Computers", year="2016", month="Apr", day="01", volume="32", number="2", pages="173--188", abstract="Grid computing has emerged a new field, distinguished from conventional distributed computing. It focuses on large-scale resource sharing, innovative applications and in some cases, high performance orientation. The Grid serves as a comprehensive and complete system for organizations by which the maximum utilization of resources is achieved. The load balancing is a process which involves the resource management and an effective load distribution among the resources. Therefore, it is considered to be very important in Grid systems. For a Grid, a dynamic, distributed load balancing scheme provides deadline control for tasks. Due to the condition of deadline failure, developing, deploying, and executing long running applications over the grid remains a challenge. So, deadline failure recovery is an essential factor for Grid computing. In this paper, we propose a dynamic distributed load-balancing technique called ``Enhanced GridSim with Load balancing based on Deadline Failure Recovery'' (EGDFR) for computational Grids with heterogeneous resources. The proposed algorithm EGDFR is an improved version of the existing EGDC in which we perform load balancing by providing a scheduling system which includes the mechanism of recovery from deadline failure of the Gridlets. Extensive simulation experiments are conducted to quantify the performance of the proposed load-balancing strategy on the GridSim platform. Experiments have shown that the proposed system can considerably improve Grid performance in terms of total execution time, percentage gain in execution time, average response time, resubmitted time and throughput. The proposed load-balancing technique gives 7 {\%} better performance than EGDC in case of constant number of resources, whereas in case of constant number of Gridlets, it gives 11 {\%} better performance than EGDC.", issn="1435-5663", doi="10.1007/s00366-015-0409-y", url="https://doi.org/10.1007/s00366-015-0409-y" }

J4

Survey of load balancing techniques for Grid JNCS, Elsevier'16

Deepak Kumar Patel, Devashree Tripathy, and C.R. Tripathy
Journal of Network and Computer Applications, ELSEVIER Volume 65, April 2016, Pages 103-119 (Impact Factor: 6.281 (2020)).

In recent days, due to the rapid technological advancements, the Grid computing has become an important area of research. Grid computing has emerged a new field, distinguished from conventional distributed computing. It focuses on large-scale resource sharing, innovative applications and in some cases, high-performance orientation. A Grid is a network of computational resources that may potentially span many continents. The Grid serves as a comprehensive and complete system for organizations by which the maximum utilization of resources is achieved. The load balancing is a process which involves the resource management and an effective load distribution among the resources. Therefore, it is considered to be very important in Grid systems. The proposed work presents an extensive survey of the existing load balancing techniques proposed so far. These techniques are applicable for various systems depending upon the needs of the computational Grid, the type of environment, resources, virtual organizations and job profile it is supposed to work with. Each of these models has its own merits and demerits which forms the subject matter of this survey. A detailed classification of various load balancing techniques based on different parameters has also been included in the survey.

@article{PATEL2016103, title = "Survey of load balancing techniques for Grid", journal = "Journal of Network and Computer Applications", volume = "65", number = "", pages = "103 - 119", year = "2016", note = "", issn = "1084-8045", doi = "http://dx.doi.org/10.1016/j.jnca.2016.02.012", url = "http://www.sciencedirect.com/science/article/pii/S1084804516000953", author = "Deepak Kumar Patel and Devashree Tripathy and C.R. Tripathy", keywords = "Grid computing", keywords = "Distributed systems", keywords = "Load balancing" }

J3

Fault Tolerance in Interconnection Network-a Survey Fault IC

Laxminath Tripathy, Devashree Tripathy, and C.R. Tripathy
Research Journal of Applied Sciences, Engineering and Technology Volume 11 Issue 2, Sept 2015, Pages 193-214 (Impact Factor: 0.22(2016)).

Interconnection networks are used to provide communication between processors and memory modules in a parallel computing environment. In the past years, various interconnection networks have been proposed by many researchers. An interconnection network may suffer from mainly two types of faults: link faults and/or switch fault. Many fault tolerant techniques have also been proposed in the literature. This study makes an extensive survey of various methods of fault tolerance for interconnection networks those are used in large scale parallel processing.

@article{AL:20407467-201509-201512090023-201512090023-198-214, title ={Fault Tolerance in Interconnection Network-a Survey}, author ={Laxminath Tripathy and Devashree Tripathy and C.R. Tripathy}, keywords ={DFA property; dynamic fault-tolerance; fault model; interconnection network; MIN; static fault tolerance}, journal ={Research Journal of Applied Sciences, Engineering and Technology}, volume ={11}, number ={2}, year ={2015}, month ={Sep}, abstract ={Interconnection networks are used to provide communication between processors and memory modules in a parallel computing environment. In the past years, various interconnection networks have been proposed by many researchers. An interconnection network may suffer from mainly two types of faults: link faults and/or switch fault. Many fault tolerant techniques have also been proposed in the literature. This study makes an extensive survey of various methods of fault tolerance for interconnection networks those are used in large scale parallel processing.}, pages ={198-214}, language ={英文}, ISSN ={2040-7467}, publisher ={Maxwell Science Publishing}, }

J2

The Crossed cube-Mesh: A New Fault-Tolerant Interconnection Network Topology for Parallel Systems Crossed Cube mesh

Bhaskar Jyoti Das, Devashree Tripathy, and Bibek Mishra
International Journal of Emerging Technologies in Computational and Applied Sciences (IJETCAS) Volume 14, 2014, Pages 211-219.

Recently, the Cube based networks have emerged as attractive interconnection structures in parallel computing systems. In this paper we propose a new interconnection network called crossed cube-mesh which is a product graph of crossed cube and mesh topology. The various topological properties of the new network are derived. The embedding properties, fault-tolerance, node disjoint paths, routing, cost and other performance aspects of the new network are discussed in detail. Based on the comparison, the proposed topology is proved to be an attractive alternative to the existing Hyper-mesh topology.
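Since the new network is defined as a product graph, here is a generic sketch of the Cartesian graph product, one common reading of that construction (the crossed cube's own adjacency rule is more involved and omitted; the tiny example multiplies two 2-node paths into a 4-cycle):

# Cartesian product: move along one factor at a time while the other coordinate stays fixed.
from itertools import product

def cartesian_product(nodes1, edges1, nodes2, edges2):
    nodes = list(product(nodes1, nodes2))
    edges = set()
    for (u, v) in nodes:
        for (a, b) in edges1:
            if u in (a, b):
                other = b if u == a else a
                edges.add(frozenset({(u, v), (other, v)}))
        for (a, b) in edges2:
            if v in (a, b):
                other = b if v == a else a
                edges.add(frozenset({(u, v), (u, other)}))
    return nodes, edges

n, e = cartesian_product([0, 1], [(0, 1)], ["a", "b"], [("a", "b")])
print(len(n), len(e))   # 4 nodes, 4 edges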

@inproceedings{Das2014TheCC, title={The Crossed cube-Mesh: A New Fault-Tolerant Interconnection Network Topology for Parallel Systems}, author={Bhaskar Jyoti Das and Devashree Tripathy and Bibek Mishra}, year={2014} }

J1

Star-Mobius Cube: A New Interconnection Topology for Large Scale Parallel Processing Star-Mobius Cube

Debasmita Pattanayak, Devashree Tripathy, and C.R. Tripathy
International Journal of Emerging Technologies in Computational and Applied Sciences (IJETCAS) Volume 14, 2014, Pages 62-68.

The interconnection topology plays a vital role in parallel computing systems. In this paper a new interconnection network topology named as Star-mobius cube (SMQ) is introduced. The various topological and performance parameters such as diameter, cost, average distance, and message traffic density are discussed. The embedding and broadcasting aspects of the new network are also presented. Based on the performance analysis, the proposed topology SMQ is proved to be a better alternative to its contemporary networks.

@MISC{Pattanayak_star-mobiuscube:, author = {Debasmita Pattanayak and Devashree Tripathy and C. R. Tripathy}, title = {Star-Mobius Cube: A New Interconnection Topology for Large Scale Parallel Processing}, year = {} }

Book

B1

Real-Time BCI System Design to Control Arduino Based Speed Controllable Robot Using EEG Springer '18

Swagata Das, Devashree Tripathy, Jagdish Lal Raheja
SpringerBriefs in Computational Intelligence, 2018.

@book{das2018real, title={Real-Time BCI System Design to Control Arduino Based Speed Controllable Robot Using EEG}, author={Das, Swagata and Tripathy, Devashree and Raheja, Jagdish Lal}, year={2018}, publisher={Springer} }

Academic Professional Service


Awards/Honors

2020
Grace Hopper Celebration 2020 Scholar.
2019
Student Travel Grant for ISCA 2019 and HPDC 2019.
2018
Student Travel Grant for ISCA 2018.
2018
Student Travel Grant for 2018 CRA-W Grad Cohort.
2017
Student Travel Grant for MICRO 2017.
2017
Student Travel Grant for Third Career Workshop for Women and Minorities in Computer Architecture.
2016
Student Travel Grant for NAS 2016.
2015
Dean’s Distinguished Fellowship, Bourns College of Engineering, University of California, Riverside.
2012-2014
Quick-Hire Fellowship, Government of India.
2012
Ranked first among all undergraduate students of VSSUT Burla.
2009
Golden Jubilee Meritorious Girls Scholarship, VSSUT Burla. (Awarded for being ranked first among 300+ students among all disciplines of college of engineering in the freshman year.)
