Hello, I’m Gayane, a Ph.D. candidate in Engineering Management and Systems Engineering at Old Dominion University (ODU). I’m on track to complete my degree by December 2023.
My research focuses on explainable artificial intelligence and its potential to enhance fairness, accountability, transparency, and trust in decision-making.
Prior to pursuing my Ph.D., I worked as an instructor for Naval Sea Systems Command (NAVSEA) and as a research assistant at the Virginia Modeling, Analysis and Simulation Center (VMASC). I also held data and research analyst positions at the New York City Department of Transportation (NYC DOT) and the International Economic Development Council (IEDC).
Norfolk, Virginia, USA
ggrigory@odu.edu
I am a Ph.D. student in the Engineering Management and Systems Engineering department at Old Dominion University. I received a Master’s degree in Economics from ODU in 2017 and a Bachelor’s degree from the Armenian State University of Economics. Previously, I worked as a research assistant at the Virginia Modeling, Analysis and Simulation Center (VMASC) with Jose Padilla and Hamdi Kavak, where I studied human behavior in cybersecurity. My research interests are explainable artificial intelligence, feature selection, econometrics, and cooperative game theory applications to complex systems.
CRA-W Grad Cohort, San Francisco, CA.
Poster presentation on explainable AI methods
Discussion on Explainable Artificial Intelligence methods
Discussions on Explainable Artificial Intelligence
Presented “The Need for Explainable Artificial Intelligence” at the International MODSIM World Conference
Stony Brook 32nd International Conference on Game Theory, Workshop on Strategic Communication and Learning
Ph.D. colloquium – Association for Computing Machinery (ACM) – Special Interest Group on Simulation and Modeling (SIGSIM) Principles of Advanced and Distributed Simulation (PADS)
IEEE Systems Council Human System Integration (HSI)
Presented a cybersecurity and game theory-related analysis in Nur-Sultan, Kazakhstan
In this project, we develop cooperative game theory-based explainable AI methods to evaluate feature importance values for a regression model affected by multicollinearity, which makes it difficult to describe the relationship between the features and the target accurately. We also examine the legal implications when the model does not generate accurate predictions.
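To give a flavor of the cooperative game theory behind such methods (this is a generic illustration, not the specific methods developed in the project), the sketch below computes exact Shapley-value feature importances for a linear regression by retraining the model on every feature subset. The synthetic data, the nearly collinear feature pair, and the use of R² as the payoff function are all assumptions made for the example.

```python
from itertools import combinations
from math import factorial

import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Hypothetical data with two strongly correlated (multicollinear) features.
n = 500
x1 = rng.normal(size=n)
x2 = x1 + 0.05 * rng.normal(size=n)   # nearly collinear with x1
x3 = rng.normal(size=n)
X = np.column_stack([x1, x2, x3])
y = 2 * x1 + 0.5 * x3 + rng.normal(scale=0.1, size=n)

def value(subset):
    """Characteristic function: R^2 of a regression fitted on only `subset` of features."""
    if not subset:
        return 0.0
    Xs = X[:, list(subset)]
    return LinearRegression().fit(Xs, y).score(Xs, y)

def shapley_importance(n_features):
    """Exact Shapley value of each feature, treating R^2 as the payoff to be shared."""
    players = range(n_features)
    phi = np.zeros(n_features)
    for i in players:
        others = [j for j in players if j != i]
        for k in range(len(others) + 1):
            for S in combinations(others, k):
                weight = factorial(len(S)) * factorial(n_features - len(S) - 1) / factorial(n_features)
                phi[i] += weight * (value(S + (i,)) - value(S))
    return phi

# The explained variance is split between x1 and x2 rather than assigned arbitrarily to one of them.
print(shapley_importance(X.shape[1]))
```

The exhaustive enumeration is only practical for a handful of features; it is meant to show the attribution logic, not to scale.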
In this project, we aimed to compare the performance of human participants and software agents in forming the ideal coalition. To do this, we conducted human subject experiments to collect data and developed a similar setting for the software agents. Our focus was on the “glove game,” a cooperative game commonly used in human experiments. Our results showed that the software agents performed similarly to human players in finding the optimal coalition. Specifically, we evaluated two trials: one with only software agents and another with a combination of human and software agents. Overall, our study suggests that software agents can be effective in forming coalitions in cooperative games.
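For readers unfamiliar with the glove game, the sketch below encodes its characteristic function (a coalition’s value is the number of matched left/right glove pairs it can form) and exhaustively searches for the highest-value coalitions. The three hypothetical players and their endowments are illustrative only, not the experimental setup used in the study.

```python
from itertools import combinations

# Hypothetical player endowments: 'L' = holds a left glove, 'R' = holds a right glove.
players = {"A": "L", "B": "L", "C": "R"}

def value(coalition):
    """Glove-game payoff: number of matched left/right pairs the coalition can form."""
    lefts = sum(1 for p in coalition if players[p] == "L")
    rights = sum(1 for p in coalition if players[p] == "R")
    return min(lefts, rights)

def best_coalitions():
    """Enumerate all coalitions and return those achieving the highest payoff."""
    best, winners = -1, []
    for size in range(1, len(players) + 1):
        for coalition in combinations(players, size):
            v = value(coalition)
            if v > best:
                best, winners = v, [coalition]
            elif v == best:
                winners.append(coalition)
    return best, winners

# With one right glove available, pairs such as ('A', 'C') already reach the maximum payoff of 1.
print(best_coalitions())
```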
This project analyzed the impact of cyberloafing on an organization’s cyber risk. Employee behavior, specifically cyberloafing, affects productivity and creates an opportunity for malware to be introduced into the corporate system. Factors such as productivity, workload, and corporate sanctions have varying effects on cyberloafing and therefore on the cyber risk. While sanctions can help reduce cyberloafing, factors related to workload have a greater effect on employee tendencies toward cyberloafing and subsequently on the organization’s cyber risk.
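As a purely illustrative toy (not the model used in this project), the sketch below shows one way the qualitative relationships above can be expressed in a simple Monte Carlo simulation; all parameter values and functional forms are assumptions made for the example.

```python
import random

def simulate(days, workload_hours, sanction_strength, seed=0):
    """Toy Monte Carlo: heavier workload leaves less room for cyberloafing, sanctions
    scale it down further, and each cyberloafing hour carries a small malware risk."""
    random.seed(seed)
    infections = 0
    for _ in range(days):
        # Assumed relationship: free time after workload drives cyberloafing hours.
        loafing_hours = max(0.0, (8 - workload_hours) * (1 - sanction_strength))
        # Assumed 1% chance of a malware incident per cyberloafing hour.
        if random.random() < 0.01 * loafing_hours:
            infections += 1
    return infections / days  # daily cyber-risk estimate

# In this toy setup, changing workload moves the risk estimate more than tightening sanctions.
print(simulate(10_000, workload_hours=4, sanction_strength=0.2))
print(simulate(10_000, workload_hours=7, sanction_strength=0.2))
print(simulate(10_000, workload_hours=4, sanction_strength=0.6))
```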
This project analyzed the application of game theory to systems engineering problems. We reviewed engineering journals to understand which game-theoretic models were used and which engineering problems they addressed. The work provides an introduction to the basics of game theory and includes examples of both non-cooperative and cooperative game-theoretic models.