Headhunters

adavydov7 Wrote: ------------------------------------------------------- > Can I ask why you went > into it initially? Since I am guessing your > feelings about what you wanted to do changed > rather dramatically in your time there. Doing it > differently would you have rather gone the MQF or > MFE route at a top school? The main reason I wanted to get a PhD in this field was that I loved it and couldn’t see myself doing anything else. Just pure and simple love; I just wanted to be a professor and didn’t care about such trivialities as money at all. Young, idealistic, and naive. :*( Though I’ve scored well on standardized tests and have some decent math aptitude, the average IQ is so high in quantitative finance that whatever I have is at best a tiny comparative advantage in a field inhabited by guys like mo34 and Mobius Striptease. Therefore, if I were to do it all over again, I wouldn’t do the PhD and I wouldn’t do the MFE; I’d pick an undergraduate major where it’s relatively easy to get a perfect GPA, land a good pre-MBA job for experience, and try to get into a top MBA program.

^Wow, so you wouldn’t have even gone the quant route at all! Rather some other job in finance! PS We have all made those young, idealistic, and naive decisions (mine was quite similar to yours, which is why I’m so curious). My best advice is to let sunk costs be sunk costs and try to make the most out of what you have put in and whatever routes are available to you once you do cut those losses (which in your case sounds like exactly what you did, and considering the pay in math departments around the country, you may actually end up doing so at a gain!). mo34: so what are you using now that F90 is gone? How was the transition?

sublimity Wrote: ------------------------------------------------------- > Although C++ is the de facto standard now and that > protects it from becoming obsolete too quickly, I > wonder what’s going to happen with programming > languages in the future as multiple cores arrive > (terascale computing) and the continual arms race > heats up: better algorithms, better hardware, and > just faster faster faster. Unless someone finally proves that problems we currently believe require non-polynomial time can actually be solved in polynomial time (i.e., P = NP), and then some other dudes improve or create a programming language and algorithms that exploit that, it doesn’t matter if we have processors 1000X faster than today’s, since they’re all based on the same principles so far, IMO.

I only use Matlab, Mathematica and SAS. Life is good now that I don’t have to write code for mundane functions that can be done with one-liners in Matlab. I get to spend more time modeling and less time coding, which is great. Plus most firms I interviewed with after the CFA were using either Matlab or SAS. It’s true that most recruiters expect quants to know C/C++, but that was not a deal breaker for the jobs I was looking at (more modelling/data mining oriented).
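Just to illustrate (a made-up series, nothing from work), here’s a minimal sketch of the kind of thing I mean - stuff that used to be loops and boilerplate is now a handful of one-liners:

```matlab
% Toy sketch, simulated data for illustration only.
prices = 100 * exp(cumsum(0.01 * randn(250, 1)));   % fake daily price path
rets   = diff(log(prices));                          % log returns in one line
mu     = mean(rets);                                 % sample mean of returns
sigma  = std(rets);                                  % sample volatility
ma20   = filter(ones(20, 1)/20, 1, prices);          % 20-day moving average
trend  = polyfit((1:numel(prices))', prices, 1);     % linear trend (slope, intercept)
```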

I agree with you in an abstract and theoretical sense. However, it seems like in practice, if you can just use upgraded hardware to brute-force things so your model arrives at a conclusion just a tiny bit faster (or more accurately), and you can also place an order a few milliseconds before your next competitor, this might lead to cumulative gains that make all the difference - swamping any deficiencies in your strategy. Heck, (assuming you have a model that’s relatively accurate compared to your next competitor’s) you can have a crappier algorithm that does things only 10% as efficiently, but if you have hardware that’s 1000x as fast as your next competitor’s, the advantage shifts to you, since your hardware edge is so outsized relative to your competitor’s. It’s not absolute, theoretical, abstract computing gains that matter, but rather relative, practical, and effective computing gains.
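To put rough numbers on that (just my own back-of-envelope with the figures above, nothing more): if effective speed is roughly algorithmic efficiency times hardware speed, then

you: 0.10 × 1000 = 100
competitor: 1.00 × 1 = 1

so the shop with the “crappier” algorithm still ends up with about 100x the effective throughput, as long as nothing else (data feeds, network latency, etc.) eats the gap.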

mo: SAS is crap IMO (although, unfortunately the standard), ever try Stata?

sublimity Wrote: ------------------------------------------------------- > I agree with you in an abstract and theoretical > sense. > > However, it seems like in practice, if you can > just use upgraded hardware to brute force things so > your model can arrive at a conclusion just a tiny > bit faster (or more accurate) and also can place > an order a few milliseconds before your next > competitor, this might lead to cumulative gains > that may be all the difference - swamping any > deficiencies in your strategy. > > Heck, (assuming you have a model that’s relatively > accurate compared to your next competitor) you can > have a crappier algorithm that does things only > 10% as efficient, but if you have hardware that’s > 1000x as fast as your next competitor, the > advantage shifts to you on account of your > hardware advantage since it is so outsized > relative to your competitor’s hardware. > > Not absolute, theoretical, abstract computing > gains that matter, but rather relative, practical, > and effective computing gains. I agree, but that would imply that you must have a huge private R&D effort solely to develop that in-house 1000X processor, and that you can keep it to yourself long enough to avoid replication. In that 10% efficiency example, even a 20X would create that competitive advantage, right? The problem is having the private R&D that effectively creates something 20X faster than any known processor to date. Interesting: don’t try to beat the algorithm, try to beat the processor. Maybe GS has that R&D team for processors now. It wouldn’t surprise me, lol.
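And checking my own 20X question with the same back-of-envelope as above: 0.10 × 20 = 2 versus 1.00 × 1 = 1, so yes, even a 20X machine would roughly double your effective throughput despite the 10%-efficient algorithm.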

^they do, did you not see the WSJ articles a few months back about the crazy computer-based trading they are doing? I’ll try to find it… …and here you are as promised: http://online.wsj.com/article/SB125665689267210559.html http://online.wsj.com/article/SB124908601669298293.html http://online.wsj.com/article/SB124743426310129241.html

@ adavydov7 Thanks.

@ Part-time Crook Yeah, any tiny advantage can be decisive as long as it makes up for slightly more than your deficiencies in other places - adjusted for cost, risk, etc. I just used those huge differences to make the point very clear. If you are much, much better in one area, that alone can be enough to make you win the total game - even if it has nothing to do with financial markets or models. (Obvious from a business point of view, but interesting to consider in terms of the various subcomponents of the technology used.)

So perhaps the competition is going to be defined in the following distinct and broad sub-areas:
- hardware = processors, communications (fiber optics)
- software = algorithms
- analysis = models tied to financial markets

Basically, you are shifting the competition from financial markets towards information technology. And within information technology, you are choosing whichever component you have the best advantage in: hardware, software, or analysis.

Heck, another broad competitive advantage might be “second-order.” Basically:
- how fast are you able to upgrade to faster hardware?
- how fast can you change/adjust your strategy? (as opposed to coming up with the strategy itself)
- how fast and efficiently can you learn? (as opposed to what you already know)

@ adavydov7 Thanks. I know GS has been doing high-frequency trading for a while, and while I don’t know the details, it seems to me that GS, Renaissance, et al. rely heavily on their algorithms and models tied to financial markets but use commercial mainframes. What I wonder is whether they are building their own “20X Cray”. @sublimity I think a second-order competitive advantage is more feasible though, especially in hardware development, since that has physical constraints. If you can get each progressive upgrade faster than your competitor, and you can sustain that trend long enough, you don’t even have to have that 20X Cray right away, and you’ll kick the sh!t out of everyone indeed, lol. How fast you can change and adjust your strategy is a function of your learning process, so maybe it’s more challenging to have a sustainable second-order improvement there, I believe.

adavydov7 Wrote: ------------------------------------------------------- > mo: SAS is crap IMO (although, unfortunately the > standard), ever try Stata? SAS is good enough for what we’re doing. Plus it’s the standard, as you said, and I’ve learned my lesson. If it’s the standard, that’s what I’m using :)

Part-time Crook Wrote: ------------------------------------------------------- > @sublimity > I think a second-order competitive advantage is > more feasible though, especially in the faster > hardware development since this has physical > constraints. If you can get a progressive upgrade > faster than your competitor, and you can sustain > that trend long enough, you don’t even have to > have that 20X Cray right away, and you’ll kick the > sh!t out of everyone indeed, lol. > > How fast can you change and adjust your strategy > is a function of your learning process, so maybe > is more challenging to have a sustainable > second-order improvement here, I believe. Fascinating to ponder the future development of market competition, as finance becomes more and more purely an information-technology endeavor and the baseline is subject to a Moore’s Law trajectory. Seems like a safe bet is to really think hard about how you are going to take advantage of multicore and the advent of computing at the terascale (10^12 flops), petascale (10^15 flops), and beyond! Pretty soon, it seems that you’ll need a medium-sized nuclear reactor to provide the energy for your computationally intensive hedge fund. The fact that Google is now getting into the energy business probably shows how intimately related computation, information, and energy are. Seems like those with the best second-order competencies will succeed, and eventually you’re going to approach the point where even higher-order derivatives will be more relevant. By this, I mean that not only will you have to change fast to remain competitive, but you’ll need to change at an accelerating rate (and at higher-order rates of change) to stay competitive! Maybe I’m getting ahead of myself, but I’m definitely going to keep an eye out for (or dedicate a few neural pathways to, lol) how multicore hardware and the accompanying software are going to change the game and redefine the rules.

sublimity Wrote: ------------------------------------------------------- > Pretty soon, it seems that you’ll need a > medium-sized nuclear reactor to provide the energy > for your computationally-intensive hedge fund. > The fact that Google is now getting into the > energy business probably shows how intimately > related computation, information, and energy are. Hahaha. Physicist one: “What’s up, dude. I just had an interview at the CERN hadron collider there in Geneva. Pretty girls over there … what about you?” Physicist two: “Not much, I had an interview with those dudes at Goldman; they are having serious problems cooling their f*cking reactor. They told me that I could use TARP funds if necessary. Will see, I’m waiting for my Level 1 results next month. Drinks?”

Hey, this conversation has turned into something interesting. Davydov, are you a Stata fan? I’m fond of Stata too. Occasionally I try to do stats in Matlab or R, but that is a pain in the butt, especially if you have missing data. Never used SAS because the license is too expensive for my taste. PTC & sublimity, it’s impressive how much energy these server farms need. I went to a talk on carbon emissions stuff, and the guy said that a server’s annual carbon footprint is equivalent to a typical SUV’s. I found that very difficult to believe, except that the server tends to be on 24/7, whereas the SUV is not running 24/7. Academe is a sad place these days. I actually enjoyed teaching, but there really was no way to get any kind of predictability/stability in one’s life there. Lots of dead wood that can’t be moved out.

Yea, I love Stata (particularly after having to do something in SAS). I had an unfair head start compared to most, since I went to grad school at Texas A&M (which is where StataCorp is located) and my roommate’s ex-wife was one of the developers there. PS It has the best nonparametric functionality of any statistical software I have ever used!!!

bchadwick Wrote: ------------------------------------------------------- > PTC & sublimity, it’s impressive how much energy > these server farms need. I went to a talk on > carbon emissions stuff and the guy says that a > server’s annual carbon footprint is equivalent to > a typical SUV. I found that very difficult to > believe, except that the server tends to be on > 24/7, whereas the SUV is not running 24/7. At a deeper and more abstract level, it’s no surprise, since the following are highly and intimately related: information, computation, energy, entropy, work, and heat. > Academe is a sad place these days. I actually > enjoyed teaching, but there really was no way to > get any kind of predictability/stability in ones > life there. Lots of dead wood that can’t be moved > out. I have so many friends in so many fields who were bright-eyed and bushy-tailed at the end of their undergrad careers, ready to change the field and be superstars. Then there’s an exponential decay in the survival curve from there to professor. I certainly don’t feel accomplished now, at the last stages of my PhD, but rather like a survivor who got very lucky through many stochastic events.

Interesting point on Stata and SAS. Never used either; when I don’t like Matlab I usually do my stuff in SPSS. Does anyone with experience in all three have an opinion on how Stata and SAS compare to SPSS? We have some people who use SAS, but they don’t like it, and I haven’t seen it that often at clients either. PS: I tried to push this to another thread, and failed, as this has little to do with headhunters anymore :)

Sometimes right after college you don’t really know what the best career path is. I would have taken the PhD --> teaching route too if I hadn’t gotten that first job in strategy consulting. Then I looked at all the MBAs who worked there and realized that was the way I wanted to go, too. However, I sometimes ask myself how things would be different if I had gone the PhD route instead.

Oh yea, I forgot about SPSS. I used it some when I first started, but my profs and advisors always downplayed it as fading statistical software (kind of like FORTRAN as a programming language), so I focused my attention on learning SAS (since it was the standard) and Stata (since it actually works). I like Stata because it uses your available computational power much more efficiently and because you don’t have to write miles of code to perform a simple procedure like you would in SAS; I also always felt the coding was much more intuitive. Lastly, it works great in a Windows 7 virtual machine.

adavydov7 Wrote: ------------------------------------------------------- > mo: SAS is crap IMO (although, unfortunately the > standard), ever try Stata? STATA is awesome. R is good too (and it’s free!). Definitely missed those great statistical packages on the quant parts of L1 and L2! So boring to do everything by hand!