Mar 14, 2010

Why does nobody understand my work?


Mar 7, 2010

Academic vs Real world

The ideas I got:

token ring vs Ethernet
z-buffer
stereo


Try to prioritize your work based on people's needs.
In the academic world, researchers admire feature-rich methods.
For example: Token Ring vs Ethernet. (todo)

On paper, Token Ring is the clear winner. In reality, however, it is Ethernet that dominates
the world. Researchers seem to forget what they learned in software engineering:
some features are nice-to-have, while others are must-have. In the Token Ring example,
bandwidth reservation is a nice-to-have, but ease of implementation is a must-have, since it directly determines the production cost of the hardware.

Don't forget that we don't do research just for research's sake. The purpose of research is to satisfy people's needs. Prioritize your objectives, or your work will end up as nothing but a pile of papers in a library archive.

next: insulin vs. tracking (survey on humans); Enough is enough (what The Innovator's Dilemma tells us)


Mar 6, 2010

Open current folder from command line

Put the following into ex.bat and place it in C:\Windows,
and then you can type "ex ." to open the current folder from the command line.

@echo off
rem Open the folder given as the first argument in Windows Explorer.
rem Quoting %1 keeps paths with spaces working.
start explorer "%1"
echo on

Open files in new tabs of GVim from the command line

Name the following file gvim.bat
and put it into C:\Windows.

@echo off
rem Open the file given as %1 in a new tab of an already-running GVim.
start gvim.exe -p --remote-tab-silent "%1"
echo on

Mar 5, 2010

Token Ring VS Ethernet

Which method is more elegant?
Which method solves the problem?
Which method contributes more to the world?

to be continued...

Mar 2, 2010

UBC is an old school

As a graduate student with a few years of work experience, I felt like the oldest student at UBC. After two months here, though, I have gradually found that many UBC students show a maturity that people in their early twenties simply don't have. Early on, two first-year students told me they had been software engineers. Later, I found that many first-year graduate students are almost my age. Today, I was shocked to learn that the three "young" girls in my class are all over 27, and one of them is even over 30. Why are there so many old students at UBC? Don't they have to work?!

 

Machine Learning Summary

A grand toolbox for vision people

chap 2
Linear regression with maximum-likelihood estimation
Normal distribution
Student-t distribution: SGD (stochastic gradient descent), EM (with a Gaussian scale mixture)
Laplace distribution: linear programming, EM (with a Gaussian scale mixture), or the Huber loss function

Censored regression (Kevin: not a big deal, since it only moves the line slightly up; why are there hundreds of papers on it?)
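
A minimal numpy sketch of my own (not from the book) for the simplest case above: under Gaussian noise, the maximum-likelihood estimate of w is exactly ordinary least squares. The function name is just illustrative.

import numpy as np

# ML linear regression under Gaussian noise == ordinary least squares.
def mle_linear_regression(X, y):
    # lstsq solves the least-squares problem in a numerically stable way
    w, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
    return w

# Toy check: recover a known weight vector from noisy data.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
w_true = np.array([2.0, -1.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=100)
print(mle_linear_regression(X, y))   # roughly [2, -1, 0.5]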

chap 3
Logistic Regression
Objective: convex
Parameter estimation: no closed-form solution
1. Newton's method (IRLS) (sketch below)
2. minfunc in Matlab

Always reaches the global optimum, since the objective is convex.

Multidim regression: no big deal
Probit regression: convex objective; use gradient descent (minfunc) or EM (slow) to fit it.
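
To make the Newton/IRLS bullet concrete, a minimal numpy sketch of my own (the tiny ridge term on the Hessian is just a numerical safeguard I added, not part of the textbook update):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Newton's method (IRLS) for logistic regression.
def irls_logistic(X, y, n_iter=20):
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = sigmoid(X @ w)                # current predictions
        s = mu * (1.0 - mu)                # diagonal of the weight matrix S
        H = X.T @ (s[:, None] * X) + 1e-8 * np.eye(X.shape[1])  # Hessian
        g = X.T @ (y - mu)                 # gradient of the log-likelihood
        w = w + np.linalg.solve(H, g)      # Newton step
    return w

# Toy usage on noisy (non-separable) data.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = (X @ np.array([1.5, -2.0]) + rng.normal(size=200) > 0).astype(float)
print(irls_logistic(X, y))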

chap 4
Model Selection
1. Bayesian approach: P(D|M). Average over all possible theta to protect against overfitting. (need concrete example)
2. BIC approximation. dof(M) can be estimated by the minimum encoding length of the model (information theory). Good when there are many models and there is some way to get dof(m) from another model's dof(m').
3. Cross-validation: not suitable when there are many candidate models; it takes too much time.
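
A minimal sketch of the BIC idea, my own toy example assuming numpy: score polynomial models of increasing degree by -2 log-likelihood + dof * log(N) and pick the smallest (I count dof as the number of polynomial coefficients).

import numpy as np

# BIC for a Gaussian-noise regression model; lower is better.
def bic_gaussian(y, y_hat, dof):
    n = len(y)
    sigma2 = np.sum((y - y_hat) ** 2) / n        # ML noise variance
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    return -2 * loglik + dof * np.log(n)

rng = np.random.default_rng(2)
x = np.linspace(-1, 1, 50)
y = 1 + 2 * x - 3 * x**2 + 0.1 * rng.normal(size=50)   # true degree: 2

for degree in range(1, 6):
    y_hat = np.polyval(np.polyfit(x, y, degree), x)
    print(degree, bic_gaussian(y, y_hat, dof=degree + 1))
# BIC should bottom out near degree 2.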

L2 regularization:
QR
SVD
Gradient
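
Of the three, the SVD route is the most instructive: with X = U S V^T, the ridge solution is w = V diag(s_i / (s_i^2 + lambda)) U^T y, i.e. every singular direction gets shrunk. A minimal numpy sketch of my own:

import numpy as np

# Ridge (L2-regularized) regression solved via the SVD of X.
def ridge_svd(X, y, lam):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    d = s / (s**2 + lam)          # shrinkage factor per singular direction
    return Vt.T @ (d * (U.T @ y))

rng = np.random.default_rng(3)
X = rng.normal(size=(50, 5))
y = X @ np.array([1.0, 0.0, -2.0, 0.5, 0.0]) + 0.1 * rng.normal(size=50)
print(ridge_svd(X, y, lam=1.0))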

L1 regularization (Lasso):
Problem: the Laplace prior is not differentiable at the origin.
Solution: soft-threshold points near the origin (sketch below).
Problem with that solution: it is no longer an unbiased estimator.
Solution to the above: re-estimate the nonzero w with least squares (an unbiased estimator).

Linear programming (not the method of choice)
LARS
SCAD (not the method of choice): just an ad hoc approach; it cannot be put into a Bayesian framework
NEG: best but slow
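
The soft-threshold trick and the least-squares debiasing step above are easy to see in code. A minimal numpy sketch of my own (coordinate descent, which is not one of the solvers listed above):

import numpy as np

def soft_threshold(a, t):
    return np.sign(a) * np.maximum(np.abs(a) - t, 0.0)

# Lasso via coordinate descent: minimize 0.5*||Xw - y||^2 + lam*||w||_1.
def lasso_cd(X, y, lam, n_iter=200):
    w = np.zeros(X.shape[1])
    col_sq = np.sum(X**2, axis=0)
    for _ in range(n_iter):
        for j in range(X.shape[1]):
            r = y - X @ w + X[:, j] * w[j]   # residual excluding feature j
            w[j] = soft_threshold(X[:, j] @ r, lam) / col_sq[j]
    return w

# Debias: refit the selected (nonzero) weights by unbiased least squares.
def debias(X, y, w):
    idx = np.flatnonzero(w)
    w_out = np.zeros_like(w)
    if idx.size:
        w_out[idx], _, _, _ = np.linalg.lstsq(X[:, idx], y, rcond=None)
    return w_out

rng = np.random.default_rng(4)
X = rng.normal(size=(100, 10))
w_true = np.zeros(10)
w_true[[0, 3]] = [3.0, -2.0]
y = X @ w_true + 0.1 * rng.normal(size=100)
print(debias(X, y, lasso_cd(X, y, lam=5.0)))   # close to w_true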

chap 5
Neural Networks
Non-convex
A cascade of linear and non-linear layers (it has to be, or the different layers would collapse into a single linear layer)
Use gradient descent for the estimation (the back-propagation algorithm)
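
A minimal numpy sketch of my own showing both points at once: the tanh between the two linear maps is what prevents the collapse, and back-propagation is just the chain rule (biases omitted for brevity):

import numpy as np

rng = np.random.default_rng(5)
X = rng.normal(size=(200, 2))
y = np.sin(X[:, :1]) + 0.1 * rng.normal(size=(200, 1))   # nonlinear target

W1 = 0.5 * rng.normal(size=(2, 10))    # input -> hidden
W2 = 0.5 * rng.normal(size=(10, 1))    # hidden -> output
lr = 0.05

for step in range(500):
    a = np.tanh(X @ W1)                # forward: hidden activations
    err = a @ W2 - y                   # forward: output error
    grad_W2 = a.T @ err / len(X)       # backward: chain rule through W2
    grad_W1 = X.T @ ((err @ W2.T) * (1 - a**2)) / len(X)   # tanh' = 1 - a^2
    W1 -= lr * grad_W1
    W2 -= lr * grad_W2

print("final MSE:", np.mean(err**2))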


chap 12
Generative model
PI->Yi->Xi

Discriminant Analysis
models p(x,y) (pic here)

Discriminative method (logistic regression)
models p(y|x) (pic here)
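
Until I add the pictures, a minimal numpy sketch of my own makes the contrast concrete: the generative route fits p(x|y) and p(y), then uses Bayes' rule to get p(y|x); logistic regression would model that posterior directly.

import numpy as np

rng = np.random.default_rng(6)
x0 = rng.normal(-1.0, 1.0, size=100)     # class 0 samples
x1 = rng.normal(+1.0, 1.0, size=100)     # class 1 samples

# Generative fit: Gaussian p(x|y) per class, p(y) from class counts.
mu0, mu1 = x0.mean(), x1.mean()
var = np.concatenate([x0 - mu0, x1 - mu1]).var()   # shared variance
prior1 = len(x1) / (len(x0) + len(x1))

def posterior_y1(x):
    # Bayes' rule: p(y=1|x) from the generative pieces.
    # (the 1/sqrt(2*pi*var) constant cancels between numerator and denominator)
    lik0 = np.exp(-0.5 * (x - mu0)**2 / var)
    lik1 = np.exp(-0.5 * (x - mu1)**2 / var)
    return lik1 * prior1 / (lik1 * prior1 + lik0 * (1 - prior1))

print(posterior_y1(np.array([-2.0, 0.0, 2.0])))    # low, ~0.5, high

(With a shared variance this posterior is exactly a logistic sigmoid in x, which is why the two chapters end up with the same functional form.)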

chap 13
Feature selection:
Forward feature selection:
Greedily add one feature at a time (the method of choice: simple, and better than stochastic approaches like genetic algorithms, simulated annealing, ...)
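
A minimal numpy sketch of my own of the greedy loop, for linear regression and scoring by training RSS:

import numpy as np

def rss(X, y):
    w, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
    return np.sum((y - X @ w) ** 2)

# Greedy forward selection: add the single best feature each round.
def forward_select(X, y, k):
    chosen, remaining = [], list(range(X.shape[1]))
    for _ in range(k):
        best = min(remaining, key=lambda j: rss(X[:, chosen + [j]], y))
        chosen.append(best)
        remaining.remove(best)
    return chosen

rng = np.random.default_rng(7)
X = rng.normal(size=(100, 8))
y = 2 * X[:, 1] - 3 * X[:, 4] + 0.1 * rng.normal(size=100)
print(forward_select(X, y, k=2))   # should pick features 1 and 4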

More priors:
Normal-Gamma: spikier at the origin and with flatter tails than Laplace

chap 14
Mixture Models
PI->Zi->Xi
Different from chap 12: here Zi is hidden (it must be inferred with EM), whereas Yi was given.
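
A minimal numpy sketch of my own for the simplest case, a 1-D mixture of two Gaussians: the E-step infers the hidden Zi (the responsibilities), and the M-step re-estimates the parameters.

import numpy as np

rng = np.random.default_rng(8)
x = np.concatenate([rng.normal(-2, 0.5, 150), rng.normal(3, 1.0, 100)])

pi = 0.5                                  # mixing weight of component 1
mu = np.array([-1.0, 1.0])
sigma = np.array([1.0, 1.0])
for _ in range(50):
    # E-step: responsibility of component 1 for each point
    # (unnormalized Gaussian densities; the 1/sqrt(2*pi) cancels in r).
    p0 = (1 - pi) * np.exp(-0.5 * ((x - mu[0]) / sigma[0])**2) / sigma[0]
    p1 = pi * np.exp(-0.5 * ((x - mu[1]) / sigma[1])**2) / sigma[1]
    r = p1 / (p0 + p1)
    # M-step: weighted maximum-likelihood updates.
    pi = r.mean()
    mu = np.array([np.sum((1 - r) * x) / np.sum(1 - r),
                   np.sum(r * x) / np.sum(r)])
    sigma = np.sqrt(np.array([np.sum((1 - r) * (x - mu[0])**2) / np.sum(1 - r),
                              np.sum(r * (x - mu[1])**2) / np.sum(r)]))

print(pi, mu, sigma)   # roughly 0.4, [-2, 3], [0.5, 1.0]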


 