Sitemap

A list of all the posts and pages found on the site. For you robots out there, an XML version is available for digesting as well.

Pages

CV

More about me

Posts

Online Stochastic Matching

11 minute read

Published:

In this post, I want to explain a bit about my research experience in Online Stochastic Matching. But as my research on this project is not finished yet, I will just explain the preliminaries of my research and the way I (as a junior researcher) look at this type of problem.

Braess Paradox And Smartphone Navigator Applications

11 minute read

Published:

We used to believe that to get the best final product, we should create a competitive environment for all companies. This way, they will do their best to provide us with a high-quality product at minimum cost. Although this intuition seems to hold most of the time, there are some cases in which the best outcome happens when we restrict this type of competition between different companies. For instance, consider the competition between different navigation assistant applications such as Google Maps, Waze, etc. They try to always give you the best possible route; otherwise, we will probably not use them again and they will become extinct! Although this competition seems very nice, in this post I will explain how it can lead to a bad outcome for drivers as a whole!
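To make this concrete, here is a minimal sketch of the classic Braess paradox network with illustrative numbers; it is not necessarily the example used in the post.

```python
# Minimal sketch of the classic Braess paradox network (illustrative numbers,
# not necessarily the post's example). 4000 drivers travel from A to D.
# Edge costs in minutes: A->B costs n/100 for n drivers, B->D costs a fixed 45,
#                        A->C costs a fixed 45,          C->D costs n/100.

N = 4000  # total number of drivers

# Without a shortcut, the selfish equilibrium splits drivers evenly over the
# two routes A->B->D and A->C->D.
half = N // 2
time_without_shortcut = half / 100 + 45  # 2000/100 + 45 = 65 minutes

# Add a "free" shortcut B->C with zero cost. Every selfish driver now prefers
# A->B->C->D, since n/100 <= 45 on both variable edges even when everyone uses them.
time_with_shortcut = N / 100 + 0 + N / 100  # 4000/100 + 0 + 4000/100 = 80 minutes

print(f"Travel time without the shortcut: {time_without_shortcut:.0f} min")  # 65
print(f"Travel time with the shortcut:    {time_with_shortcut:.0f} min")     # 80
```

Adding the extra road makes every driver 15 minutes slower at equilibrium, which is exactly the kind of outcome the post discusses.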

Why Mean Squared Error

5 minute read

Published:

For me, a question arises whenever people use MSE as the objective function for their learning tasks. The question is: WHY?? Why?? But when you ask this question, you probably get answers like:

  1. Since it works well on this dataset!
  2. Because we want to penalize bad predictions more heavily (in comparison with the l1-norm)
  3. Computing the derivative of MSE is simple (in comparison with the l1-norm)
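The more principled answer, also hinted at in the teaching notes further down this page, is the connection between maximum likelihood estimation under Gaussian noise and minimizing the MSE. A standard sketch of that connection, in generic notation rather than necessarily the post's:

```latex
% Assume (generic notation): y_i = f_\theta(x_i) + \epsilon_i with i.i.d. noise
% \epsilon_i \sim \mathcal{N}(0, \sigma^2).
\begin{align*}
\log p(y_{1:n} \mid x_{1:n}, \theta)
  &= \sum_{i=1}^{n} \log \frac{1}{\sqrt{2\pi\sigma^2}}
     \exp\!\left(-\frac{\bigl(y_i - f_\theta(x_i)\bigr)^2}{2\sigma^2}\right) \\
  &= -\frac{n}{2}\log(2\pi\sigma^2)
     - \frac{1}{2\sigma^2}\sum_{i=1}^{n} \bigl(y_i - f_\theta(x_i)\bigr)^2 .
\end{align*}
% Maximizing this log-likelihood over \theta is therefore equivalent to
% minimizing \sum_i (y_i - f_\theta(x_i))^2, i.e. the mean squared error.
```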

awards

Ranked Top 0.8% In The Nation-wide University Entrance Exam

Published:

Every year, there is a nationwide university entrance exam. About 250,000 students participate in the Mathematics and Engineering division. Admission to Bachelor of Science programs in our country is based on performance on this test.

Best Score in Mathematics Team Selection Exam

Published:

I audited “Topics in Mathematics and Problem Solving” in the Mathematics Department. As part of the class, we participated in the Mathematics Team Selection Exam. I took that exam and obtained the highest score among all participants from the Mathematics Department. Unfortunately, since I was not a student of the Mathematics Department, I was not allowed to continue the competition as a team member.

education

grades

An Introduction to Stochastic Processes

Published:

In this course, we learned about random walks, some properties of continuous-time and discrete-time Markov chains, and their stationary distributions.

Topics in Computer Science (Algorithm II)

Published:

In this course, we were introduced to Computational Geometry, Linear Programming, Online Algorithms, Approximation Algorithms, and some examples of NP-complete problems.

Introduction to Data Mining

Published:

In this course, we learned different techniques for using data for prediction. We covered regression, decision trees, entropy concepts, the perceptron, evaluation metrics such as ROC, and clustering techniques.

Statistical Machine Learning

Published:

This graduate-level course is based on Wasserman’s book All of Statistics. The first 14 chapters of the book were covered. We learned topics such as different types of convergence, parametric and non-parametric learning, the bootstrap technique for constructing confidence intervals, parameter inference, and model selection. I audited this course.

Probabilistic Graphical Models

Published:

This graduate-level course is based on Koller’s PGM book. From the representation, inference, and learning parts of the book, we covered mostly chapters 3-5, 9-13, and 17-18, respectively. Through this course, we learned about graphical representations and their properties, ways to estimate posterior distributions in reasonable time (e.g., MCMC), and how to estimate parameters and graphical structures. I audited this course.

Collective Decision Making

Published:

An interesting graduate course on Collective Decision Making, covering topics such as Byzantine agreement, computational social choice, mechanism design, etc.

portfolio

About Amirkabir University

There are several top universities in Tehran, and there are always debates about their ranking. But in computer science, the most widely accepted ranking is as follows:

projects

Text Summarization

Published:

Extracting important sentences as a summary, using the PageRank algorithm and word2vec.
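A minimal sketch of the idea, assuming sentence vectors (e.g., averaged word2vec embeddings) are already computed; the project's actual pipeline may differ.

```python
# Minimal sketch of PageRank-based extractive summarization (illustrative only;
# assumes sentence vectors, e.g. averaged word2vec embeddings, are precomputed).
import numpy as np
import networkx as nx

def summarize(sentences, sentence_vectors, k=3):
    """Return the k sentences with the highest PageRank importance scores."""
    n = len(sentences)
    sim = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                vi, vj = sentence_vectors[i], sentence_vectors[j]
                # Cosine similarity between the two sentence embeddings.
                sim[i, j] = vi @ vj / (np.linalg.norm(vi) * np.linalg.norm(vj) + 1e-12)
    graph = nx.from_numpy_array(sim)      # weighted similarity graph over sentences
    scores = nx.pagerank(graph)           # PageRank score of each sentence
    top = sorted(range(n), key=lambda i: scores[i], reverse=True)[:k]
    return [sentences[i] for i in sorted(top)]  # keep original sentence order
```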

Text Classification

Published:

Using different metrics (mutual information, information gain, etc.) to extract important words for a document classification task.
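As a minimal sketch, one of these metrics (mutual information) can be used to rank words as follows; the project's actual metrics and pipeline may differ.

```python
# Minimal sketch: rank words by mutual information with the class labels
# (illustrative; the project's actual feature-selection pipeline may differ).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import mutual_info_classif

def top_words(documents, labels, k=20):
    """Return the k words with the highest mutual information with the labels."""
    vectorizer = CountVectorizer(binary=True)      # word presence/absence features
    X = vectorizer.fit_transform(documents)
    mi = mutual_info_classif(X, labels, discrete_features=True)
    vocab = vectorizer.get_feature_names_out()
    return sorted(zip(vocab, mi), key=lambda t: t[1], reverse=True)[:k]
```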

Grade Prediction

Published:

Regression, normalization, and visualization using Python, for the Foundations of Data Mining course.

publications

Best-case lower bounds in online learning

Published in , 2021

Much of the work in online learning focuses on the study of sublinear upper bounds on the regret. In this work, we initiate the study of best-case lower bounds in online convex optimization, wherein we bound the largest improvement an algorithm can obtain relative to the single best action in hindsight. This problem is motivated by the goal of better understanding the adaptivity of a learning algorithm. Another motivation comes from fairness: it is known that best-case lower bounds are instrumental in obtaining algorithms for decision-theoretic online learning (DTOL) that satisfy a notion of group fairness. Our contributions are a general method to provide best-case lower bounds in Follow The Regularized Leader (FTRL) algorithms with time-varying regularizers, which we use to show that best-case lower bounds are of the same order as existing upper regret bounds: this includes situations with a fixed learning rate, decreasing learning rates, timeless methods, and adaptive gradient methods. In stark contrast, we show that the linearized version of FTRL can attain negative linear regret. Finally, in DTOL with two experts and binary losses, we fully characterize the best-case sequences, which provides a finer understanding of the best-case lower bounds.
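In standard online-learning notation (assumed here; not necessarily the paper's exact formulation), the quantities involved look like this:

```latex
% The learner plays x_t and suffers loss \ell_t(x_t); regret compares the
% cumulative loss with the single best action in hindsight:
\[
  R_T \;=\; \sum_{t=1}^{T} \ell_t(x_t) \;-\; \min_{x \in \mathcal{X}} \sum_{t=1}^{T} \ell_t(x).
\]
% Upper bounds control how large R_T can get (e.g. R_T = O(\sqrt{T})), while a
% best-case lower bound is a guarantee of the form R_T \ge -c_T, bounding the
% largest possible improvement over the best fixed action.
```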

talks

Can computers have emotion?

Published:

A presentation (in Persian) for the Research Method and Technical Report Writings course, based on Alan Turing’s famous paper “Computing Machinery and Intelligence” and some other psychological papers.

A brief summary to The Theory of Computation

Published:

A presentation (in Persian) for students in the Introduction to the Theory of Computation course. I explained why we view problems as languages and why it is important to categorize them into different sets.

teaching

Teaching Assistant in Algorithm Design

Undergraduate course, AmirKabir University, Computer Engineering And Information Technology Department, 2016

This course was taught by Dr. Mousavi. My main task was problem setting.

Teaching Assistant in Algorithm Design

course, Amirkabir University, Mathematics and Computer Science Department, 2017

This course was taught by Dr. Rahmati. I was in the problem setting team. This course had 5 assignments:

  • Stable Matching
  • Greedy Algorithms and Graphs
  • Divide and Conquer
  • Dynamic Programming
  • Computational Complexity

Class for Olympiad Preparation

Undergraduate course, AmirKabir University, Computer Engineering And Information Technology Department, 2017

Solving some example questions for Theory of Computation (from Sipser’s book) and Algorithm Design (from CLRS) over multiple sessions.

Teaching Assistant in Probability and Statistics for Engineering

Undergraduate course, AmirKabir University, Computer Engineering And Information Technology Department, 2018

This course was taught by Dr. Heari. My main tasks included:

  • Creating and designing assignments
  • Conducting classes for students in which extra content and concepts were presented, such as:
    • A Probabilistic Cache Scheduling Method
    • Naive Bayes Classifier as a simple example of PGMs
    • Why do we use MSE? Showing the connection between Maximum Likelihood Estimation with Gaussian noise and minimizing the MSE
    • Bootstrap as a way to estimate a statistic and then construct a confidence interval.
    • Random Walk