I am a Final-Year CSE Undergrad from VIT Bhopal University
My name is Vardaan. I am currently enrolled at VIT Bhopal University as a final-year student pursuing Computer Science Engineering. I took up this domain out of my curiosity about and interest in computers and how they function. Over my years at university, I have had the privilege of studying and exploring various fields like Computer Vision, Data Analytics, and Web Development.
I have been part of various team projects focused on solving real-world problems. Apart from academics, I was a Core Member of Meraki, the Fine Arts club of my university, and organised various cultural events. I was also a Tech Columnist for The Frontier Vedette. In my free time, I love to watch movies and read books.
Studied 'Fundamentals of Programming & C' under Prof. Manikandan K. during Fall-Semester 2019-20 and 'Programming in C++' under Dr. Pon Harshavardhanan during Winter-Semester 2019-20. The courses focused on introducing us to programming concepts.
Studied 'Operating Systems' under Dr. Ashwin M. during Winter-Semester 2020-21. The course focused on introducing students to OS fundamentals, Processes, Threads, Scheduling, Deadlocks etc.
Studied 'Computer Networks' under Dr. Gaurav Soni during Winter-Semester 2021-22. The course focused on networking terminologies, devices, topologies and OSI model of networking.
Studied 'Internet & Web Programming' under Dr. J. Subash Chandra Bose during Winter-Semester 2021-22. The course introduced us to concepts of the Internet and the basics of HTML and CSS.
Studied 'Fundamentals of Data Science' under Dr. Lakshmi during Fall-Semester 2021-22. The course introduced us to R programming and the field of Data Science.
Studied 'Software Engineering' under Dr. Priyanka during Winter-Semester 2020-21. The course focused on introducing students to concepts of SDLC and development workflows.
An R app that visualises Air France flight data in real time.
Sentiment analysis of Jane Austen's novel Emma using R and Natural Language Processing.
TCS iON internship - analysis and modelling of classifiers to detect the side effects of a drug.
Anon Web aims to be an app that facilitates WhatsApp messaging and calling without having to save the recipient's contact information. The idea originates from people's need to quickly message unknown contacts on the go. The app works through WhatsApp's publicly available click-to-chat API links, which can be modified to contact any number registered as an active user on the platform.
It has a variety of use cases: for example, students can send assignment- or tutorial-related material to anyone without saving the number, since the app is meant for exactly these purposes. Once WhatsApp rolls out digital money transfer, payments could likewise be made without saving contact info, for faster transactions. It does come with its own shortcomings, though: people might misuse it to spam unknown contacts.
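The link construction behind this idea can be sketched in Python. The `wa.me` URL format is WhatsApp's public click-to-chat scheme; the helper name and example number below are my own illustrations, not the app's actual code.

```python
from urllib.parse import quote

def whatsapp_chat_link(phone: str, message: str = "") -> str:
    """Build a WhatsApp click-to-chat URL for a number that is not a saved contact.

    `phone` should be the full number in international format; any '+',
    spaces, or dashes are stripped, since wa.me links expect digits only.
    """
    digits = "".join(ch for ch in phone if ch.isdigit())
    url = f"https://wa.me/{digits}"
    if message:
        # Optionally pre-fill the message text; it must be URL-encoded.
        url += f"?text={quote(message)}"
    return url

# Example with a made-up number:
print(whatsapp_chat_link("+91 98765 43210", "Hi, sharing the tutorial notes!"))
```

Opening the generated URL in a browser (or via an intent on mobile) hands the chat off to WhatsApp without any contact ever being saved.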
In 2020-21, when the Covid-19 situation was at its peak, it was very important to maintain social distancing in public places. The safety measures taken by governments around the world failed in the face of the deadly second wave of Covid-19, due to the lack of social-distancing practice. Newer variants of the virus kept emerging, each as dangerous as or more dangerous than the last. Tackling social distancing thus becomes a monumental challenge in a densely populated country like India. Hence, my team and I, under the guidance of Dr. Abha Trivedi, came forward with our project, Doori, a social distancing tracker based on Computer Vision. Our project's main objective is to help authorities implement social distancing in public places and offices.
We can conclude that the Doori social distancing tracker is functional and detects people with a 72% confidence score. It uses OpenCV and the DNN module, and the application can be configured with various distance thresholds. We hope that Doori helps the world and reaches more and more people and organizations.
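The core distance check behind a tracker like Doori can be sketched as follows. This is a minimal illustration of the idea only: the detector output is stubbed in directly (in the real pipeline the boxes come from an OpenCV DNN person detector), and the pixel threshold is a made-up value.

```python
from itertools import combinations
import math

def centroid(box):
    """Centre point of an (x, y, w, h) bounding box."""
    x, y, w, h = box
    return (x + w / 2, y + h / 2)

def distancing_violations(boxes, min_dist=100.0):
    """Return index pairs of detected people standing closer than min_dist pixels.

    Compares every pair of bounding-box centroids; the threshold would be
    calibrated to the camera's perspective in a real deployment.
    """
    pairs = []
    for i, j in combinations(range(len(boxes)), 2):
        (x1, y1), (x2, y2) = centroid(boxes[i]), centroid(boxes[j])
        if math.hypot(x2 - x1, y2 - y1) < min_dist:
            pairs.append((i, j))
    return pairs

# Two people 50 px apart (a violation) and one standing far away.
boxes = [(0, 0, 40, 80), (50, 0, 40, 80), (400, 0, 40, 80)]
print(distancing_violations(boxes))  # → [(0, 1)]
```

Flagged pairs can then be highlighted on the video frame, which is essentially what the tracker's overlay does.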
This project is a fully functional and user-friendly web application that uses web scraping to get live data on Air France flights. The flights are plotted on a reactive map, colour-coded by their provenance/destination. The relevant airports are also plotted, sized according to how frequently they appear. The live flight data comes from flightaware.com. Once the airline URL is obtained, the data is displayed in a table limited to 20 rows per page. The scraped data needed some modification before I could plot what I intended.
The map is in fact a Mapbox map, rendered via Plotly. The main idea is to superimpose the various information layers: the base map, the airport markers, and the flight segments, tuning the available parameters to obtain a pleasant design. Every R script can be turned into a web app using Shiny. You do not necessarily have to structure your app across several files, but the R code must be separated into a UI part and a Server part. As for hosting, you can use Shiny's own services or dockerize the app and deploy it elsewhere; I used the Shinyapps.io platform.
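The app itself is written in R with Shiny and Plotly, but the colour-by-destination step can be illustrated in Python. The flight records and the palette below are made-up examples, not the app's real data.

```python
# Assign each flight a colour by its destination airport, mirroring the
# map's colour distinction by provenance/destination. Illustrative only.
PALETTE = ["#1f77b4", "#ff7f0e", "#2ca02c", "#d62728"]

def colour_by_destination(flights):
    """Map each distinct destination to a palette colour and tag the flights."""
    colours = {}
    out = []
    for f in flights:
        dest = f["destination"]
        if dest not in colours:
            # First time we see this destination: grab the next palette entry.
            colours[dest] = PALETTE[len(colours) % len(PALETTE)]
        out.append({**f, "colour": colours[dest]})
    return out

flights = [
    {"ident": "AFR010", "destination": "JFK"},
    {"ident": "AFR084", "destination": "SFO"},
    {"ident": "AFR006", "destination": "JFK"},
]
for f in colour_by_destination(flights):
    print(f["ident"], f["colour"])
```

Flights sharing a destination end up with the same colour, which is what makes the map's layers readable at a glance.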
FaceOff is a Computer Vision model built with OpenCV, deployed as a web page using Streamlit, and hosted via Heroku. As the name suggests, it performs facial recognition and identifies people based on its own trained dataset. We start by creating a dataset of all the images, importing the OpenCV library to do so. Note that we use the opencv-contrib-python package, as it contains the main modules as well as the contributed/extra modules. OpenCV (Open Source Computer Vision Library) is an open-source computer vision and machine learning software library.
I used the Haar Cascade face classifier to detect faces in the supplied images. Haar cascades, first introduced by Viola and Jones in their seminal 2001 publication, Rapid Object Detection using a Boosted Cascade of Simple Features, are arguably OpenCV's most popular object detection algorithm. The next step is to train on the image dataset using the LBPH (Local Binary Pattern Histogram) face recognition algorithm. Finally, we use the Haar cascade to detect faces in a provided image and the LBPH algorithm to compare them against our stored data. If a face matches, we get the name of the individual!
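The core building block of LBPH, the local binary pattern, can be sketched for a single 3x3 patch. This is a from-scratch illustration of the operator, not OpenCV's implementation.

```python
def lbp_code(patch):
    """Compute the 8-bit local binary pattern of a 3x3 grayscale patch.

    Each of the 8 neighbours is compared with the centre pixel: 1 if it is
    >= the centre, else 0. Reading the bits clockwise from the top-left
    corner gives an 8-bit code. LBPH builds histograms of these codes over
    image cells and compares histograms to recognise faces.
    """
    c = patch[1][1]
    # Neighbours listed clockwise, starting from the top-left corner.
    neighbours = [patch[0][0], patch[0][1], patch[0][2],
                  patch[1][2], patch[2][2], patch[2][1],
                  patch[2][0], patch[1][0]]
    code = 0
    for bit, n in enumerate(neighbours):
        if n >= c:
            code |= 1 << (7 - bit)  # most significant bit first
    return code

patch = [[6, 2, 7],
         [1, 5, 9],
         [3, 8, 4]]
print(lbp_code(patch))  # → 180 (binary 10110100)
```

Because the code depends only on whether neighbours are brighter or darker than the centre, it is robust to uniform lighting changes, which is why LBPH works well for face recognition on small datasets.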
Sentiment analysis provides a way to understand the attitudes and opinions expressed in texts. In this project, we explored how to approach sentiment analysis using tidy data principles; when text data is in a tidy data structure, sentiment analysis can be implemented as an inner join. We can use sentiment analysis to understand how a narrative arc changes throughout its course or what words with emotional and opinion content are important for a particular text.
One way to analyze the sentiment of a text is to consider the text as a combination of its individual words, and the sentiment content of the whole text as the sum of the sentiment content of the individual words. The tidytext package provides access to several sentiment lexicons. Three general-purpose lexicons are AFINN, Bing and NRC; all three are based on unigrams, i.e., single words. Next, we count how many positive and negative words there are in defined sections of each book. We can then plot these sentiment scores across the plot trajectory of any novel.
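The project itself uses R's tidytext, but the "sentiment analysis as an inner join" idea can be sketched in Python. The tiny lexicon below is a made-up stand-in for Bing or AFINN, which contain thousands of scored unigrams.

```python
import re
from collections import Counter

# Toy sentiment lexicon standing in for Bing/AFINN (illustrative only).
LEXICON = {"happy": "positive", "delight": "positive",
           "miserable": "negative", "alarm": "negative"}

def sentiment_counts(text):
    """Tokenise into unigrams and 'inner join' with the lexicon:
    only words present in both the text and the lexicon survive,
    exactly as in tidytext's inner_join(get_sentiments(...))."""
    tokens = re.findall(r"[a-z']+", text.lower())
    joined = [LEXICON[t] for t in tokens if t in LEXICON]
    return Counter(joined)

line = "Emma was happy, a delight to all, until the alarm came."
print(sentiment_counts(line))  # Counter({'positive': 2, 'negative': 1})
```

Running this per chapter or per fixed-size section, then plotting positive minus negative counts, gives the narrative-arc chart described above.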
The purpose of this internship was to build a classification model. Classification models are a subset of supervised machine learning: a classification model reads some input and generates an output that assigns the input to a category. Here we use these tools to create a model that can categorize different side effects that can occur depending on age, gender and race.
As an outcome, we imported the necessary libraries, cleaned and sanitized the dataset, performed exploratory analysis, and split the dataset into training and testing sets. We then created a simple classifier using logistic regression and trained it on the training set. Finally, we evaluated the model on the testing set and achieved an accuracy of 69%.
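A minimal from-scratch logistic regression illustrates the kind of classifier we trained. The internship used standard libraries on the real drug dataset; the one-feature toy data below is made up purely to show the mechanics.

```python
import math

def train_logistic(X, y, lr=0.1, epochs=500):
    """Fit logistic-regression weights by batch gradient descent on the log-loss."""
    n_features = len(X[0])
    w = [0.0] * n_features
    b = 0.0
    for _ in range(epochs):
        grad_w = [0.0] * n_features
        grad_b = 0.0
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # sigmoid: predicted probability
            err = p - yi                      # gradient of log-loss w.r.t. z
            for j in range(n_features):
                grad_w[j] += err * xi[j]
            grad_b += err
        w = [wj - lr * gj / len(X) for wj, gj in zip(w, grad_w)]
        b -= lr * grad_b / len(X)
    return w, b

def predict(w, b, xi):
    """Threshold the predicted probability at 0.5."""
    z = sum(wj * xj for wj, xj in zip(w, xi)) + b
    return 1 if 1.0 / (1.0 + math.exp(-z)) >= 0.5 else 0

# Toy one-feature data: a centred score, labelled 1 when positive.
X = [[-2.0], [-1.5], [-1.0], [1.0], [1.5], [2.0]]
y = [0, 0, 0, 1, 1, 1]
w, b = train_logistic(X, y)
print([predict(w, b, xi) for xi in X])  # → [0, 0, 0, 1, 1, 1]
```

The real workflow is the same shape: fit on the training split, then compute accuracy on the held-out testing split, which is where the 69% figure came from.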