HELLO

I'm Apala!

Student, Researcher, Dancer

I am currently a graduate research assistant at the University of Nebraska-Lincoln. My research focuses on enhancing mmWave communication through advanced beamforming, beam tracking, and prediction techniques, integrating machine learning and sensory data for improved wireless connectivity.


Projects.


Edge-Assisted SLAM with Human-Construction Collaborative Robot

We test the EdgeSLAM algorithm for generating a map of a construction site, where the robot's aim is to navigate to specific goals while avoiding humans and maintaining a safe distance of 1.25 meters from human workers at all times. Our approach uses a YOLOv3 network to detect a human worker, then performs clustering and Kalman filtering on point-cloud data from a depth camera to estimate the relative human motion in the robot's local coordinate frame.
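For a flavor of the tracking step, here is a minimal sketch of a constant-velocity Kalman filter over the centroid of a point-cloud cluster labeled as a human. The frame interval, noise covariances, and toy measurements below are illustrative assumptions, not the values used on the robot.

```python
import numpy as np

# State: [x, y, vx, vy]; we observe only the cluster centroid position.
dt = 0.1                                   # assumed frame interval (s)
F = np.array([[1, 0, dt, 0],               # constant-velocity transition
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],                # measurement picks out position
              [0, 1, 0, 0]], dtype=float)
Q = np.eye(4) * 1e-2                       # process noise (assumed)
R = np.eye(2) * 5e-2                       # measurement noise (assumed)

x, P = np.zeros(4), np.eye(4)              # initial state and covariance

def centroid(points):
    """Measurement: centroid of the point-cloud cluster labeled 'human'."""
    return points.mean(axis=0)

def kf_step(x, P, z):
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    y = z - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    return x + K @ y, (np.eye(4) - K @ H) @ P

# Toy depth-camera clusters for two frames (stand-ins for real detections).
for pts in [np.array([[1.0, 2.0], [1.1, 2.1]]),
            np.array([[1.2, 2.2], [1.3, 2.3]])]:
    x, P = kf_step(x, P, centroid(pts))
    dist = np.linalg.norm(x[:2])           # distance in the robot's local frame
    print("estimated human position:", x[:2], "safe:", dist >= 1.25)
```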

Spanish News Classification

This project focuses on text classification using a BERT-based model on a Spanish news dataset. The dataset is preprocessed and split into training, validation, and test sets. The model architecture includes input layers for tokenized text and attention masks, followed by a BERT layer and a softmax output layer. The model is trained in TensorFlow with the Adam optimizer and categorical cross-entropy loss. Training and validation accuracy plots, loss plots, and a confusion matrix are used to analyze the model's performance.
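A minimal sketch of the described architecture in TensorFlow; the Spanish BERT checkpoint (BETO), sequence length, class count, and learning rate are assumptions, not the project's exact settings.

```python
import tensorflow as tf
from transformers import TFBertModel

MAX_LEN, NUM_CLASSES = 128, 5              # assumed, for illustration

# Two inputs as described: token ids and attention mask.
ids = tf.keras.Input(shape=(MAX_LEN,), dtype=tf.int32, name="input_ids")
mask = tf.keras.Input(shape=(MAX_LEN,), dtype=tf.int32, name="attention_mask")

# A Spanish BERT encoder (BETO, assumed); pooled output feeds the classifier.
bert = TFBertModel.from_pretrained("dccuchile/bert-base-spanish-wwm-cased")
pooled = bert(input_ids=ids, attention_mask=mask).pooler_output
out = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(pooled)

model = tf.keras.Model(inputs=[ids, mask], outputs=out)
model.compile(optimizer=tf.keras.optimizers.Adam(2e-5),
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```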

Markov Model for Voynich Manuscript

A Markov model of the Voynich Manuscript provides useful information for analysing the Voynich script. We used the transition matrices generated by our model to compare the six sections of the Manuscript, to compare the Manuscript to several natural languages, and to calculate the most likely character for several unknown characters in this transcription of the Voynich Manuscript.
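The core of such a model is a character-level transition matrix. A minimal sketch, with a toy string standing in for a transcription of one Manuscript section:

```python
import numpy as np

# Toy stand-in for an EVA transcription of a Manuscript section.
text = "daiin daiin qokeedy qokeedy chol chor"

chars = sorted(set(text))
idx = {c: i for i, c in enumerate(chars)}

# Count character bigrams.
counts = np.zeros((len(chars), len(chars)))
for a, b in zip(text, text[1:]):
    counts[idx[a], idx[b]] += 1

# Row-normalize with Laplace smoothing so unseen transitions stay defined.
trans = (counts + 1) / (counts + 1).sum(axis=1, keepdims=True)

# Most likely successor of each character, e.g. for guessing unknown glyphs.
for c in chars:
    print(repr(c), "->", repr(chars[int(np.argmax(trans[idx[c]]))]))
```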

Vision Transformer

This project centers on the re-implementation of the Vision Transformer (ViT) architecture, a pivotal development in computer vision. ViT applies self-attention mechanisms to image recognition, promising to surpass conventional convolutional neural networks (CNNs). Challenges to be addressed include scaling ViT to larger images and datasets while maintaining efficiency, devising efficient pretraining strategies in data-limited scenarios, optimizing fine-tuning techniques for specific tasks, and enhancing interpretability.
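ViT's distinctive front end is the patch embedding: split the image into patches, linearly embed them, prepend a [CLS] token, and add position embeddings. A minimal TensorFlow sketch, with ViT-Base-style sizes assumed (16x16 patches, dimension 768):

```python
import tensorflow as tf

IMG, PATCH, DIM = 224, 16, 768             # ViT-Base-style sizes (assumed)
num_patches = (IMG // PATCH) ** 2          # 14 * 14 = 196

class PatchEmbed(tf.keras.layers.Layer):
    def __init__(self):
        super().__init__()
        # A strided conv is equivalent to patch-splitting + linear projection.
        self.proj = tf.keras.layers.Conv2D(DIM, PATCH, strides=PATCH)
        self.cls = self.add_weight(name="cls", shape=(1, 1, DIM),
                                   initializer="zeros")
        self.pos = self.add_weight(name="pos", shape=(1, num_patches + 1, DIM),
                                   initializer="zeros")

    def call(self, x):
        x = self.proj(x)                               # (B, 14, 14, DIM)
        x = tf.reshape(x, (tf.shape(x)[0], -1, DIM))   # (B, 196, DIM)
        cls = tf.tile(self.cls, [tf.shape(x)[0], 1, 1])
        return tf.concat([cls, x], axis=1) + self.pos  # (B, 197, DIM)

x = tf.random.normal((2, IMG, IMG, 3))
print(PatchEmbed()(x).shape)  # (2, 197, 768) -> transformer encoder input
```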

Hand-Gesture Recognition

The Hand Gesture Recognition project utilizes the Sign Language MNIST dataset, a modification of the classic MNIST, containing 27,455 training cases and 7,172 test cases representing 24 classes of American Sign Language letters. The dataset was expanded using image processing techniques to create variations for better machine learning model training. This project aims to develop a robust visual recognition algorithm to help deaf and hard-of-hearing people communicate through computer vision applications.
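A minimal sketch of the dataset-expansion step using Keras preprocessing layers; the transformation ranges are assumptions, not the project's exact settings.

```python
import numpy as np
import tensorflow as tf

# Generate shifted/rotated/zoomed variants of 28x28 grayscale sign images.
augment = tf.keras.Sequential([
    tf.keras.layers.RandomRotation(0.05),        # about +/- 18 degrees (assumed)
    tf.keras.layers.RandomTranslation(0.1, 0.1), # assumed shift range
    tf.keras.layers.RandomZoom(0.1),             # assumed zoom range
])

images = np.random.rand(32, 28, 28, 1).astype("float32")  # stand-in batch
augmented = augment(images, training=True)
print(augmented.shape)  # (32, 28, 28, 1)
```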

Classifying CIFAR100

This project, conducted in collaboration with a partner, focuses on implementing architectures for image classification using the CIFAR-100 dataset. The architectures consist of convolutional layers with pooling and at least one fully connected layer, followed by softmax for the output layer. TensorFlow is utilized for model implementation, with a main loop for training defined in separate Python files. Results and analysis are presented in a written report following a provided template, including experimental setup, results, and conclusions.
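A minimal sketch of one such architecture in TensorFlow; the filter counts and layer sizes are illustrative assumptions.

```python
import tensorflow as tf

# Convolutional blocks with pooling, one fully connected layer, then
# softmax over the 100 CIFAR-100 classes.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(32, 32, 3)),
    tf.keras.layers.Conv2D(32, 3, activation="relu", padding="same"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu", padding="same"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(256, activation="relu"),   # fully connected layer
    tf.keras.layers.Dense(100, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

(x_train, y_train), _ = tf.keras.datasets.cifar100.load_data()
model.fit(x_train / 255.0, y_train, epochs=1, batch_size=128)
```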

Classifying Fashion MNIST

This project explores the design and implementation of two architectures for image classification using the Fashion MNIST dataset. The architectures consist of fully connected layers with ReLU activation for hidden nodes and softmax for the output layer. Hyperparameter selection is conducted independently of the test set, with validation data obtained through either a single training set and a single validation set or k-fold cross-validation. The models are optimized using Adam, with at least two sets of hyperparameters and one regularizer evaluated for each architecture.
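A minimal sketch of one such architecture with k-fold hyperparameter selection on the training data only (the test set is never touched); the layer sizes, hyperparameter grid, and k=5 are assumptions.

```python
import numpy as np
import tensorflow as tf
from sklearn.model_selection import KFold

(x, y), _ = tf.keras.datasets.fashion_mnist.load_data()
x = x.reshape(-1, 784).astype("float32") / 255.0

def build(lr, l2):
    # Fully connected net: ReLU hidden layer, softmax output, L2 regularizer.
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(784,)),
        tf.keras.layers.Dense(256, activation="relu",
                              kernel_regularizer=tf.keras.regularizers.l2(l2)),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(lr),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

for lr, l2 in [(1e-3, 1e-4), (1e-4, 1e-3)]:        # two hyperparameter sets
    scores = []
    for tr, va in KFold(n_splits=5, shuffle=True, random_state=0).split(x):
        m = build(lr, l2)
        m.fit(x[tr], y[tr], epochs=2, batch_size=128, verbose=0)
        scores.append(m.evaluate(x[va], y[va], verbose=0)[1])
    print(f"lr={lr}, l2={l2}: mean val acc {np.mean(scores):.3f}")
```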
Publications.

PerM: Tool for Perception-based Runtime Monitoring for Human-Construction Robot Systems

DAC 2024 Workshop

Authors: Apala Pramanik, Kyungki Kim, Dung Hoang Tran

Vision-based Runtime Monitoring for Human-Construction Robot Systems

IROS 2023 Workshop: Formal methods techniques in robotics systems: Design and control

Authors: Apala Pramanik, Kyungki Kim, Dung Hoang Tran

ASTITVA: Assistive Special Tools and Technologies for Inclusion of Visually Challenged

2021 International Conference on Computing, Communication, and Intelligent Systems (ICCCIS)

Authors: Apala Pramanik, Rahul Johari, Nitesh Kumar Gaurav, Sapna Chaudhary, Rohan Tripathi

START: Smart Stick Based on TLC Algorithm in IoT Network for Visually Challenged Persons

2020 Fourth International Conference on I-SMAC (IoT in Social, Mobile, Analytics and Cloud) (I-SMAC)

Authors: Rahul Johari, Nitesh Kumar Gaurav, Sapna Chaudhary, Apala Pramanik

SERI: SEcure Routing in IoT

International Conference on Internet of Things and Connected Technologies

Authors: Varnika Gaur, Rahul Johari, Parth Khandelwal, Apala Pramanik


News!


Upcoming DAC 2024 Workshop Poster Presentation - PerM: Tool for Perception-based Runtime Monitoring for Human-Construction Robot Systems (San Francisco, June 2024)

Upcoming Graduate Teaching Fellowship Program at UNL (Lincoln, April 2024)

IROS 2023 Formal Methods Workshop Poster Presentation - Vision-based Runtime Monitoring for Human-Construction Robot Systems (Detroit, Oct 2023)

Successfully passed PhD qualifying examination (Lincoln, May 2023)

Presented research at Graduate Student Symposium (Lincoln, March 2023)
Feel free to reach out to me using the following contact information!