IT Track

Dr. Sanjay K Madria
Missouri University of Science and Technology, USA

Prof. Thomas Pasquier
University of British Columbia, Canada

Dr. P Radha Krishna
National Institute of Technology Warangal, India

Prof. Mukesh K. Mohania
Indraprastha Institute of Information Technology, India

DC Track

Dr. Neeraj Mittal
The University of Texas, Dallas, USA

Prof. Maurice Herlihy
Brown University, USA

Dr. Rajkumar Kettimuthu
Argonne National Laboratory, USA

Dr. John Augustine
IIT Madras, India

Speaker Profiles

IT Track

Dr. Sanjay K Madria

Missouri University of Science and Technology, USA

Topic – Machine Learning for Emotion Prediction, Ideology Detection and Polarization Analysis using COVID-19 Tweets

Sanjay K Madria is a Curators' Distinguished Professor in the Department of Computer Science at the Missouri University of Science and Technology (formerly the University of Missouri-Rolla, USA). He has published over 290 journal and conference papers in the areas of mobile and sensor computing, big data and cloud computing, data analytics, and cybersecurity.

He has won five IEEE best paper awards at conferences such as IEEE MDM and IEEE SRDS. He is a co-author of a book on the Secure Sensor Cloud (written with two of his PhD graduates), published by Morgan & Claypool in December 2018. He has graduated 20 PhD and 33 MS thesis students, with 9 PhD students currently under supervision. NSF, NIST, ARL, ARO, AFRL, DOE, Boeing, CDC-NIOSH, ORNL, Honeywell, and others have funded his research projects totaling over $18M. He has been awarded a JSPS (Japan Society for the Promotion of Science) invitational visiting scientist fellowship and an ASEE (American Society for Engineering Education) fellowship. In 2012 and 2019, he was awarded an NRC Fellowship by the US National Academies. He is an ACM Distinguished Scientist, has served as an ACM and IEEE Distinguished Speaker, and is an IEEE Senior Member as well as an IEEE Golden Core Awardee.

The adversarial impact of the COVID-19 pandemic has created a global health crisis. This unprecedented crisis forced people into lockdown and changed almost every aspect of daily life. The pandemic has affected everyone physically, mentally, and economically, and it is therefore paramount to analyze and understand the emotional responses that affect mental health during the crisis.

Negative emotional responses at fine-grained labels such as anger and fear during the crisis may also lead to irreversible socio-economic damage. In this talk, I will discuss a neural network model, trained on manually labeled data, that automatically detects emotions at fine-grained labels in COVID-19 tweets. I will also present a manually labeled tweet dataset of COVID-19 emotional responses along with regular tweet data. We have designed a custom Q&A roBERTa model to extract the phrases in a tweet that are primarily responsible for the corresponding emotions; none of the existing datasets and work currently provide the selected words or phrases denoting the reason for an emotion. Further, we propose a deep learning model leveraging the pre-trained BERT-base to detect political ideology from tweets for political polarization analysis. The experimental results show a considerable improvement in the accuracy of ideology detection when we use emotion as a feature.
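As a toy illustration of the phrase-extraction idea only (a lexicon-based stand-in, not the speaker's Q&A roBERTa model; the lexicon, function name, and scoring are invented for this sketch), one can score candidate spans of a tweet against an emotion lexicon and return the span most responsible for the detected emotion:

```python
# Toy phrase extraction: find the span of a tweet most "responsible" for an
# emotion by scoring candidate spans against a small (hypothetical) lexicon.
# This is a lexicon-based stand-in for a Q&A-style neural extractor.

ANGER_LEXICON = {"furious", "outraged", "lies", "angry"}

def extract_phrase(tweet, lexicon, max_len=4):
    """Return the contiguous span (up to max_len words) with the highest
    density of lexicon words; ties keep the earliest, shortest span."""
    words = tweet.lower().split()
    best_span, best_score = "", 0.0
    for i in range(len(words)):
        for j in range(i + 1, min(i + 1 + max_len, len(words) + 1)):
            span = words[i:j]
            hits = sum(w.strip(".,!?") in lexicon for w in span)
            score = hits / len(span)  # density favors tight spans
            if score > best_score:
                best_span, best_score = " ".join(span), score
    return best_span

print(extract_phrase("I am absolutely furious about these lies", ANGER_LEXICON))
# prints "furious"
```

A trained span extractor replaces the lexicon score with a learned start/end scoring over contextual token embeddings, but the search over candidate spans has the same shape.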

Prof. Thomas Pasquier

University of British Columbia, Canada

Prof. Thomas Pasquier is a tenure-track Assistant Professor in the Department of Computer Science at the University of British Columbia. He is part of the Systopia Lab, working on systems research in a broad sense. His research focuses on building more observable and transparent systems, on topics such as digital provenance, auditing, accountability, intrusion detection, and systems optimization. He holds a PhD from the University of Cambridge (Jesus College) and is an Honorary Senior Lecturer in the Department of Engineering Mathematics at the University of Bristol. His research interests are digital provenance, operating systems, distributed systems, data protection and privacy, and intrusion detection.

Dr. P Radha Krishna

National Institute of Technology Warangal, India

Topic – Attention-based Representational Learning for Social Network Analysis

Dr. Radha Krishna has been in the profession of research, development, and technology adoption for about thirty years. He is currently a Professor in the Department of Computer Science and Engineering, National Institute of Technology (NIT) Warangal. His research interests include data mining, big data, machine learning, databases, and workflow systems. Prior to joining NIT, he served as Principal Research Scientist at Infosys Labs, Infosys Limited, Hyderabad, where he was associated with research projects leading to futuristic intelligent systems and analytical solutions. He also served as adjunct faculty at NIT Warangal and IIIT-Hyderabad. Dr. Krishna also served as a faculty member at the Institute for Development and Research in Banking Technology (IDRBT, a research arm of the Reserve Bank of India and an associate institute of the University of Hyderabad), and as a scientist at the National Informatics Centre (Govt. of India), Bhopal. He was also a member of the IT Advisory Committee of the Insurance Regulatory and Development Authority (IRDA), India, and has been a member of the research advisory committees of several academic institutes. Krishna holds two PhDs, the first from Osmania University and the second from IIIT-Hyderabad; he has been granted 20 US patents, has authored/co-authored six books, and has over a hundred publications in refereed journals and conferences.

Social networks carry complex high-level structural and semantic information in the form of nodes and edges. Network representations help improve analytical tasks such as community detection, link prediction, and information propagation. Low-dimensional feature representations generate features automatically and reduce the manual effort of feature extraction. In this talk, heterogeneous network representation learning models for influence propagation are discussed, and some thoughts on open research questions will be presented. The focus will be on aggregating various types of semantic information according to their importance and weight, to avoid semantic confusion, and on employing attention-based mechanisms through meta-path learning.
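The attention-based aggregation idea can be sketched in miniature (this is a generic illustration, not the speaker's model; the embeddings and attention scores below are invented numbers): a node's per-meta-path embeddings are combined into one representation by a softmax-weighted sum, so more important meta-paths contribute more.

```python
import math

# Toy attention-based aggregation: combine a node's embeddings computed under
# different meta-paths into one representation, weighting each meta-path by an
# importance score (fixed, invented numbers here; learned in a real model).

def softmax(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def aggregate(embeddings, scores):
    """Weighted sum of same-length embedding vectors; weights = softmax(scores)."""
    weights = softmax(scores)
    dim = len(embeddings[0])
    return [sum(w * emb[i] for w, emb in zip(weights, embeddings))
            for i in range(dim)]

# Embeddings of one node under three hypothetical meta-paths.
meta_path_embeddings = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
attention_scores = [2.0, 0.5, 0.5]   # the first meta-path matters most

node_repr = aggregate(meta_path_embeddings, attention_scores)
print(node_repr)  # dominated by the first meta-path's embedding
```

Because the weights come from a softmax, an unimportant meta-path is down-weighted rather than discarded, which is how such models avoid mixing semantics from unrelated relation types ("semantic confusion").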

Prof. Mukesh K. Mohania

Indraprastha Institute of Information Technology, India

Topic – AI for Personalized Education

Online courses and learning systems have gained tremendous popularity over the last few years. While their ease of access and availability make them a very useful medium for knowledge sharing and learning, they do not keep the learners and their learning abilities in mind. The "one size fits all" approach to learning content and question papers does not work in a large virtual classroom of diverse students with different skill profiles, learning styles, aptitudes, and capabilities. In a traditional classroom, teachers who interact closely with students can evaluate the pace and depth of the curriculum being taught and can suggest learning content to students who are unable to cope with the general classroom teaching. Such suggestions and guidance are absent in current online learning systems. In this talk, we address how AI can help in (1) making content smarter through learning content analytics and automatic content tagging, (2) generating diverse but semantically related questions for evaluating a student's knowledge, (3) assisting in the evaluation of short answers, and (4) understanding a student's learning style and capacity through learning data analytics, thus enabling adaptive and personalized education on a Big Data platform.

Mukesh Mohania is a Professor (CSE) and Dean (IRD) at IIIT Delhi. He has 20+ years of experience in IT architecture and innovation and has held senior technical and business leadership roles at IBM Research in India and Australia. His innovations centre on information (structured and unstructured data) integration, master data management, AI for entity analytics, blockchain data management, and developing complex systems and applications in these areas. Over the course of his career, he has led a succession of successful projects that produced technology and products in use across the industry today, as well as influential and frequently cited technical work and patents. He holds 60+ granted patents, has published 100+ technical papers at international conferences, and has participated widely in industry forums. For these accomplishments, IBM recognized him as an "IBM Distinguished Engineer", "Master Inventor", "Member of the IBM Academy of Technology", and "Best of IBM". He has received several IBM corporate and research-level awards, such as "Excellence in People Management", "Outstanding Innovation Award", "Technical Accomplishment Award", "Leadership By Doing", and many more. He is the founding project director of the DST-sponsored Technology Innovation Hub (TIH) on 'Cognitive Computing and Social Sensing' at IIIT Delhi, which received Rs 100 Cr for 2021-2025. He has held several visible positions, such as ACM Distinguished Scientist (2011-), VLDB Conference Organizing Chair (2016), DASFAA General Co-chair (2022), ER PC Co-chair (2022), ACM India Vice-President (2015-17), ACM Distinguished Service Award Committee Chair (2017-2018), and Adjunct Professor/Industrial R&D board member at various top universities in India and Australia, and has received the IEEE Meritorious Service Award and the ACM Outstanding Service Award.

DC Track

Dr. Neeraj Mittal

The University of Texas, Dallas, USA

Topic – Harnessing Concurrency in Multicore Systems

Until two decades ago, general-purpose processor manufacturers were able to achieve regular improvements in CPU performance through traditional approaches such as increasing the clock speed of the CPU, lengthening the instruction pipeline, or increasing the size of the cache and/or the number of cache levels. These steady improvements in CPU performance, and to a lesser extent in memory and disk performance, enabled the building of ever-faster mainstream computer systems. As a result, most classes of software applications enjoyed regular (and free) performance gains for several decades without even releasing new versions or doing anything special. Many of the traditional approaches for boosting CPU performance have now hit a "brick wall", a term often used to describe the inherent physical limitations faced by hardware designers in boosting CPU performance further. The transistor count, i.e., the number of transistors in an integrated circuit chip, continues to increase in accordance with Moore's Law. To make use of this large number of additional transistors on a chip, and because traditional approaches now offer only limited gains, major general-purpose processor manufacturers (e.g., Intel and AMD) have turned to hyper-threading and multi-core architectures to improve hardware performance. A consequence of this trend is that the free ride that software programs have enjoyed for around four decades is finally over: most current software applications will not benefit from the enormous parallel processing power offered by a modern computing device unless they are rewritten to distribute their tasks across several cores. Even a program written for a multi-core system may fail to scale well with the number of cores if poorly designed and coded.

Even though concurrency has been around for many decades, writing a concurrent program that runs correctly on a multi-core system is still known to be very hard, let alone writing one that scales well with the number of cores. Not surprisingly, concurrent programming is largely the preserve of elite programmers, often with a doctoral degree in concurrent computing or a related area. In this talk, I will present current research on designing high-performance concurrent programs suitable for multi-core systems. I will also discuss current research on using a new memory technology, persistent memory (Pmem), which combines the low latency of main memory with the persistence of the hard disk, to design fault-tolerant concurrent programs.
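The classic correctness pitfall behind this difficulty is the lost-update race: an unsynchronized read-modify-write on shared data can interleave across threads and silently drop updates. The sketch below (a generic textbook example, not from the talk) shows the lock-based fix:

```python
import threading

# Shared-counter example: "counter += 1" is a read-modify-write, so without
# synchronization, concurrent increments can interleave and lose updates.
# Holding a lock around the increment makes it atomic.

N_THREADS, N_INCREMENTS = 4, 25_000
counter = 0
lock = threading.Lock()

def safe_increment():
    global counter
    for _ in range(N_INCREMENTS):
        with lock:          # critical section: read, add, write as one unit
            counter += 1

threads = [threading.Thread(target=safe_increment) for _ in range(N_THREADS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # always N_THREADS * N_INCREMENTS == 100000
```

Note that a single coarse lock like this serializes every update, so the program is correct but does not scale with the number of cores; designs that do scale (per-thread counters, lock-free atomics, and the like) are exactly the kind of research the talk surveys.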

Neeraj Mittal received his B.Tech. degree in computer science and engineering from the Indian Institute of Technology, Delhi in 1995 and the M.S. and Ph.D. degrees in computer science from the University of Texas at Austin in 1997 and 2002, respectively.  He is currently a Professor and Associate Department Head of Undergraduate Education in the Department of Computer Science at the University of Texas at Dallas and a co-director of the Advanced Networking and Dependable System Laboratory (ANDES).  His research interests include multi-core computing, distributed computing, fault tolerant computing, and distributed algorithms for wireless networking. Several of his conference publications have been invited to special issues for publication in top journals.

Prof. Maurice Herlihy

Brown University, USA

Topic – Correctness Conditions for Cross-Chain Deals

Modern distributed data management systems face a new challenge: how can autonomous, mutually-distrusting parties cooperate safely and effectively? Addressing this challenge brings up questions familiar from classical distributed systems: how to combine multiple steps into a single atomic action, how to recover from failures, and how to synchronize concurrent access to data. Nevertheless, each of these issues requires rethinking when participants are autonomous and potentially adversarial.

We propose the notion of a *cross-chain deal*, a new way to structure complex distributed computations that manage assets in an adversarial setting. Deals are inspired by classical atomic transactions, but are necessarily different, in important ways, to accommodate the decentralized and untrusting nature of the exchange.
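For a concrete flavor of atomic exchange between distrusting parties (a generic hashed-timelock illustration, not the deal protocol proposed in the talk; the class and method names are invented), escrowed funds can be made claimable only by revealing a hash preimage before a deadline, with a refund afterwards:

```python
import hashlib

# Toy hashed-timelock escrow: a claim succeeds only with the correct hash
# preimage before the deadline; after the deadline the funder can refund.
# Generic illustration of all-or-nothing exchange between distrusting
# parties; not the cross-chain deal protocol from the talk.

class HashedTimelock:
    def __init__(self, hashlock, deadline, amount):
        self.hashlock = hashlock      # sha256 digest of a secret preimage
        self.deadline = deadline      # abstract time units
        self.amount = amount
        self.claimed = False

    def claim(self, preimage, now):
        """Counterparty claims the escrow by revealing the preimage in time."""
        if (not self.claimed and now < self.deadline
                and hashlib.sha256(preimage).digest() == self.hashlock):
            self.claimed = True
            return True
        return False

    def refund(self, now):
        """Funder recovers the escrowed amount once the deadline has passed."""
        return not self.claimed and now >= self.deadline

secret = b"my-secret"
lock = HashedTimelock(hashlib.sha256(secret).digest(), deadline=100, amount=10)
assert lock.claim(b"wrong", now=50) is False   # wrong preimage rejected
assert lock.claim(secret, now=50) is True      # correct preimage claims
assert lock.refund(now=150) is False           # already claimed, no refund
```

Revealing the preimage on one chain lets the counterparty claim on the other, which is how two such escrows compose into a swap; the talk's contribution is precisely what "correct" should mean for richer, multi-party versions of such deals.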

(Joint work with Barbara Liskov and Liuba Shrira)

Maurice Herlihy has an A.B. in Mathematics from Harvard University and a Ph.D. in Computer Science from M.I.T. He has served on the faculty of Carnegie Mellon University and on the staff of DEC Cambridge Research Lab. He is the recipient of the 2003 Dijkstra Prize in Distributed Computing, the 2004 Gödel Prize in theoretical computer science, the 2008 ISCA Influential Paper Award, the 2012 Edsger W. Dijkstra Prize, and the 2013 Wallace McDowell Award. He received a 2012 Fulbright Distinguished Chair in the Natural Sciences and Engineering Lecturing Fellowship, and he is a fellow of the ACM, a fellow of the National Academy of Inventors, the National Academy of Engineering, and the American Academy of Arts and Sciences. In 2022, he won his third Dijkstra Prize.

Dr. Rajkumar Kettimuthu

Argonne National Laboratory, USA

Topic – From file transfers to streaming: Enabling distributed science in the Exascale era

Extreme-scale simulations and experiments can generate large amounts of data, whose volume can exceed the compute and/or storage capacity at the simulation or experimental facility. Moreover, as scientific instruments are optimized for specific objectives, both the computational infrastructure and the codes are becoming more specialized with the proliferation of AI workloads and accelerators. Distributed science is now the norm rather than the exception, and it requires the rapid and automated movement of large quantities of data between federated scientific facilities. Traditionally, file-based data movement formed the backbone of distributed science, and it is still the predominant mode of data exchange across facilities. Near-real-time analysis of streaming data (from scientific instruments) at remote facilities is emerging as a key requirement, with recent technological advances allowing scientific instruments to generate data at rates that can exceed tens of gigabytes per second. In this talk, I will discuss our work in high-speed data movement for enabling distributed science in the exascale era, ranging from moving a petabyte (a large number of files) between two scientific facilities in a day to memory-to-memory data streaming between federated scientific instruments at 100 gigabits per second.

Dr. Rajkumar Kettimuthu is a Computer Scientist and Group Leader at Argonne National Laboratory, a Senior Scientist at The University of Chicago, and a Senior Fellow at Northwestern University. His research interests include AI for science, advanced wired and wireless communications for science, and quantum networks. The data transfer protocols and tools developed by him and his colleagues at Argonne have become the de facto standard for file transfers in many science environments. With 60K+ installations across six continents, these tools perform 50M+ file transfers and move 5+ petabytes of data every day. AI for science tools developed by his team at Argonne are used in many science environments; these tools have been highlighted by top scientific journals and have won multiple awards at prestigious venues. He has co-authored 150+ peer-reviewed articles, most of which appeared in premier journals and top IEEE/ACM conferences, several of which won best paper awards. His work has been featured in 20+ news articles. He is a recipient of the prestigious R&D 100 award, a distinguished member of the ACM, and a senior member of the IEEE.

Dr. John Augustine

IIT Madras, India

Topic – Vignettes from the Distributed Trust Paradigm of Computing

Byzantine fault tolerance has been studied extensively for over four decades. Much of the work is centered on Byzantine Agreement and related problems like State Machine Replication. These works have established mechanisms for trustworthy computation in distributed environments despite the presence of malicious nodes. Does this distributed trust paradigm make sense in broader contexts? In this talk, we will look at a few vignettes from disparate domains that provide us with affirmative evidence.

We will begin with the problem of gathering anonymous robots in a graph and show how good robots can gather despite the presence of malicious robots that may misinform them. We will then present techniques for rank aggregation, wherein we obtain a global ranking by aggregating pairwise comparisons from crowd-sourced voters over a set of objects. Our approach provides reliable rankings as long as the proportion of Byzantine voters is strictly less than one half. Time permitting, we will conclude with a fully decentralized mechanism for building sparse overlay networks that are resilient to Byzantine failures.
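A toy version of the rank-aggregation setting conveys why an honest majority suffices (this is an illustrative majority-vote/Copeland-style baseline, not the algorithm from the talk): take the majority verdict on each pairwise comparison, then rank objects by their number of pairwise wins. With honest voters in the strict majority and in agreement, the honest verdict wins every pair.

```python
from itertools import combinations

# Toy Byzantine-tolerant rank aggregation: for each pair of objects, take the
# majority verdict across voters, then rank objects by pairwise-win counts
# (Copeland-style scoring). Illustrative only; not the algorithm from the talk.

def aggregate_ranking(objects, ballots):
    """ballots: list of rankings (lists, best first), one per voter."""
    wins = {obj: 0 for obj in objects}
    for a, b in combinations(objects, 2):
        # A voter prefers a over b iff a appears earlier in their ranking.
        votes_a = sum(r.index(a) < r.index(b) for r in ballots)
        if votes_a > len(ballots) / 2:
            wins[a] += 1
        elif votes_a < len(ballots) / 2:
            wins[b] += 1
    return sorted(objects, key=lambda o: -wins[o])

objects = ["x", "y", "z"]
honest = [["x", "y", "z"]] * 3       # 3 honest voters agree on x > y > z
byzantine = [["z", "y", "x"]] * 2    # 2 Byzantine voters invert the ranking
print(aggregate_ranking(objects, honest + byzantine))  # ['x', 'y', 'z']
```

The research problem is harder than this sketch suggests: honest voters are noisy rather than unanimous, and comparisons are sparse, which is where the talk's statistical techniques come in.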

These are joint works with Soumyottam Chatterjee, Arnhav Datar, Gopal Pandurangan, Arun Rajkumar, and Nischith Shadagopan. They have appeared recently (2022-23) in SPAA, NeurIPS, and AAMAS.

John Augustine is a professor in the Department of Computer Science and Engineering (CSE) at the Indian Institute of Technology Madras. He holds a PhD from the Donald Bren School of Information and Computer Sciences at UC Irvine. His research interests are in distributed algorithms, specifically focusing on distributed trust issues that emerge in settings where participants may behave maliciously. He has co-authored many refereed articles in highly reputed conferences (SODA, FOCS, PODC, NeurIPS, DISC, SPAA, IPDPS, etc.) and journals (Algorithmica, SICOMP, TCS, JPDC, TPDS, etc.). He chaired the distributed computing track at ICDCN 2022 and is currently serving as an associate editor of the Journal of Parallel and Distributed Computing. At IIT Madras, he is a founding member of the Cryptography, Cybersecurity, and Distributed Trust (CCD) group as well as the Blockchain Innovation Centre (BiC). He is also affiliated with the Theory group in CSE.