XBRL

How many of you know about it? -> How many of you understand the way it will impact your work (efficiency)? -> How many of you are using it? -> How many of you understand that you will no longer get paid to be a data-collection grunt and will need actual high-level analytic skills and strong, accurate security projections? Otherwise, there's the door. -> How many of you don't realize that if you do not become more quantitative, you're finished, since this data standard will allow for better models, more factors, better software, etc… and without the quantitative skills to capitalize on it, see the end of the third arrow…

Hey Keys, were we in that NYU VBA class together in Summer 2007? The name and the XBRL topic make me wonder if we know each other. If so, I sat behind you.

Yes Mr. Chadwick - we certainly do know each other; and you will also know my esteem for quantitative/fundamental hybrid individuals - something you, with a physics background, programming, a PhD and the CFA, can appreciate as well. I'm just finishing up the CQF (a grind, but a pleasure as well). I started this thread because of the apparent lack of recognition of how this standard will impact the investment community. The corporate filers, accountants and corporate lawyers are fully knowledgeable about it, but the analysts have been slow, to say the least. Quite shocking, as this will have as much impact as other things going on, such as IFRS convergence, fair value measurement, etc…

Great to hear from you. I remember you mentioning the CQF - glad to hear you're finishing it up. I thought about doing it to fill gaps in my knowledge without having to do a full MFE program, but it is just a bit pricey.

I'm not sure how XBRL is evolving now, but it did strike me as a very valuable tool that programmers could use to automate modeling without having to wait (or, more likely, pay) for data providers. Are there any off-the-shelf XBRL readers? I think having something a programmer can take off the shelf that spits out some kind of balance sheet or income statement object might increase uptake of the standard for financial and quant analysis. Maybe there are some already.

As for the effect on quantitative vs. qualitative work, I agree and disagree with you. I agree that the "searching through financial statements and filling out Excel spreadsheets" part of a lot of junior work will become automated. However, I think that *good* qualitative analysis will actually become more valuable over time because of this. The real problem will be how to demonstrate the quality of qualitative thinking: so many in the industry just don't know how to evaluate qualitative analysis other than by "gut feel" (gut feel may be good, or not; it just won't be very easy to evaluate).
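
To make the "reader" idea concrete, here's a minimal sketch in Python, assuming an instance document already downloaded from EDGAR. The filename and the taxonomy-namespace year are placeholders - the us-gaap namespace URI changes with each annual taxonomy release:

```python
# Minimal XBRL fact extractor: instance documents are plain XML,
# so a few lines of ElementTree get you most of the way.
import xml.etree.ElementTree as ET

US_GAAP = "http://fasb.org/us-gaap/2010-01-31"  # year varies by filing

def read_facts(path):
    """Return (concept, contextRef, value) triples for us-gaap facts."""
    facts = []
    for elem in ET.parse(path).getroot():
        uri, _, local = elem.tag.partition("}")  # tag looks like "{uri}Name"
        if uri.lstrip("{") == US_GAAP and elem.text:
            facts.append((local, elem.get("contextRef"), elem.text.strip()))
    return facts

# e.g. pull every reported total-assets fact, one per context/period
for concept, context, value in read_facts("filing.xml"):
    if concept == "Assets":
        print(context, value)
```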

You can get the data from sec.gov. The 500 largest filers by public float now have to file their detailed footnote data as a dataset = no more trolling through the financial documents to find that little nugget of info tucked on page 545 in the 'commitments and contingencies' section… Here is a link to Adobe's recent 10-Q data: http://sec.gov/cgi-bin/viewer?action=view&cik=796343&accession_number=0000796343-10-000011 They filed detailed footnote tagging.
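
If you'd rather grab the raw files than use the viewer, the CIK and accession number in that link map to the EDGAR archive directory holding the XBRL documents. A quick sketch - the URL pattern is the standard EDGAR archive layout, though which file inside the directory is the XBRL instance varies by filer:

```python
# Build the EDGAR archive directory URL from the viewer link's parameters.
cik = "796343"                        # Adobe, from the link above
accession = "0000796343-10-000011"

# Archive paths use the unpadded CIK and the accession number without dashes.
archive_dir = "http://www.sec.gov/Archives/edgar/data/{}/{}/".format(
    int(cik), accession.replace("-", ""))
print(archive_dir)
# http://www.sec.gov/Archives/edgar/data/796343/000079634310000011/
```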

No responses eh… People must be busy with more important things like:

- Who's wilder - JWOwww or Snooki…
- How to land that highly coveted bank teller job…

Keys - Good topic. Are you using it right now? In my current job (non-finance) I've had a little bit of exposure to it. I'll have to do some searching to see what kind of readers are out there, or read the specs and come up with my own reader that pulls the data I'm interested in. Based on your experience, what percentage of firms out there are using it at a level where they see real benefit?

Yes, but the raw data is a little unstructured currently. Companies do not have to use the main data dictionary (taxonomy) for certain elements; they can create extensions. This is highly unlikely for top-level items such as 'Total Assets', but when getting into more granular items such as 'taxes attributable to XYZ', the normalization starts to fade and data mapping is required. I'd speculate 70%+ of the primary financial statement tags are consistent between companies; the footnote data is more like 30-40% (many extensions, segment qualifiers and such).

There are firms out there using it, but mostly at a standardized level (i.e., a mapping team maps the data to an internal taxonomy, and the security analysts use that in their models - a rough sketch of the step is below). Some get normalized XBRL data from third-party sources. As stated, this will have a huge impact once the standard starts to mature: the top 500 companies by float are just starting to file detailed footnote tags for Q2, and the top 2,000 companies by float need to file primary financial data in XBRL. Next year it rolls out to all companies. We're a little behind the ball; China, the Netherlands and a whole bunch of other countries have already mandated and implemented this standard.
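
A toy version of that mapping step - the extension tags and internal concept names here are made up for illustration:

```python
# Fold standard and company-extension tags into an internal taxonomy,
# flagging anything unmapped for the mapping team to review.
INTERNAL_MAP = {
    "us-gaap:Assets": "TotalAssets",               # standard tag, maps 1:1
    "abc:TaxesAttributableToXYZ": "OtherTaxes",    # company extension
    "abc:WidgetSegmentRevenue": "SegmentRevenue",  # company extension
}

def normalize(facts):
    """Map raw (tag, value) pairs onto internal concepts; flag the rest."""
    mapped, unmapped = {}, []
    for tag, value in facts:
        if tag in INTERNAL_MAP:
            mapped.setdefault(INTERNAL_MAP[tag], []).append(value)
        else:
            unmapped.append(tag)  # new extensions requiring human review
    return mapped, unmapped

mapped, todo = normalize([("us-gaap:Assets", 8.1e9),
                          ("abc:TaxesAttributableToXYZ", 1.2e7),
                          ("abc:SomeBrandNewExtension", 3.0e6)])
print(mapped)  # {'TotalAssets': [8100000000.0], 'OtherTaxes': [12000000.0]}
print(todo)    # ['abc:SomeBrandNewExtension']
```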

How does the standard work across countries? Does US GAAP vs IFRS vs Local Stuff cause trouble in XBRL?

They are currently separate taxonomies - if you pull in native Chinese XBRL data from the Shanghai or Shenzhen exchanges, it is not immediately comparable; it requires mapping. FASB, with consultation from XBRL US, oversees the development of the us-gaap taxonomy. The IASB releases the IFRS taxonomy, which coordinates with the local IFRS taxonomies of individual countries. Convergence will start to occur as the standard matures. In addition to equities, US mutual funds have to file their Risk/Return sections in XBRL. There is also a corporate transactions taxonomy (M&A, IPO, etc…) and a proposed asset-backed taxonomy. This will spread to all asset classes.

How timely is this data released?

At the same time the company files. The rule is for concurrent filings.

Keys Wrote:
> No responses eh… People must be busy with more important things like:
> - Who's wilder - JWOwww or Snooki…
> - How to land that highly coveted bank teller job…

It's summertime, after all… Keys, don't most quants use data from Compustat and CRSP as their main sources? What's the advantage of using this format instead of Compustat data?

More granular datapoints = more interesting models; I assume more quant/fundamental hybrid shops will arise. Compustat has ~500 normalized datapoints (maybe an overstatement; disclaimer - that was the last time I checked). That suits most quants today, who rely heavily on prices, yields, and some high-level fundamental data, e.g., EPS, revenue, etc… --> they need more tools (data) to model value better (especially if an HFT tax is implemented, or some other mechanism to slow the market). There is also a lag with the manual collection of data: possibly very short for large caps due to a vendor's priority list, but the small caps can get neglected for weeks during peak filing season. XBRL is instantaneous.

Taking a class on it this fall - pretty excited about it. Don’t know that much about it yet.

There remains significant value-add, though, from shops such as Compustat, Worldscope and Reuters. The XBRL taxonomies are country-specific and contain industry-specific data, so for a user attempting to screen across a broad universe the data isn't normalized. In the case of Japan, there are 22 industries and COGS isn't standardized across them: construction has one concept, transportation another, etc. By normalizing the data, these vendors allow cross-sectional comparisons for top-down selection.

From an IB perspective, XBRL will be a huge help once footnotes are tagged, but in the case of Japan only the primary statements are required, so footnote data still needs to be collected from the PDF filings and is untagged. I have been following XBRL since at least 2000 and still do not have a great deal of confidence in its ultimate success, due to the company-specific extensions, which are a very high percentage in the U.S. market. The SEC has mandated its use for filing and it is being implemented, but footnotes, and consistent tagging of them, are still a ways off.

As for the firms distributing tagged XBRL data, they have yet to realize the promise of timeliness, as they can't seem to validate the data quickly enough. In my comparisons of these vendors to traditional vendors, the fastest was Bloomberg, which at least had preliminary updates within an hour or two, while the XBRL vendors averaged about 7 hours from announcement time to public release of the data. While an exciting and transformational concept, it has yet to realize its potential and still requires standardization for cross-sectional analysis. Finally, the taxonomies change over time, which will impact the comparability of data between schemas; I somehow doubt companies are going to restate their full history of results based on a new schema, while a data vendor will normalize across schemas as much as possible.

Good commentary; for the most part, on point. Vendors will remain a strong piece of this puzzle for investors running screens on large baskets of comps.

And for quants doing time-series analysis, who require deep histories of normalized data, ideally crossing at least a few economic cycles.

Alternatively, they could use a fundamental factor model estimated similarly to Barra's. You really would have to do something like that for this sort of thing.
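
Sketched in code, a single estimation step of that kind of model is just a cross-sectional regression. The exposures and returns below are random placeholders, and a real implementation would weight by capitalization and constrain the industry factors:

```python
import numpy as np

rng = np.random.default_rng(0)
n_assets, n_factors = 500, 3                    # e.g. value, size, momentum

X = rng.standard_normal((n_assets, n_factors))  # standardized factor exposures
r = 0.05 * rng.standard_normal(n_assets)        # one period of asset returns

# OLS cross-sectional regression: recover this period's factor returns
f, *_ = np.linalg.lstsq(X, r, rcond=None)
specific = r - X @ f                            # asset-specific (residual) returns
print("estimated factor returns:", f)
```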

The most important thing when using a factor model such as BARRA is to understand the model and the assumptions used in building it. From my recollection, the estimation universe for US-E3 significantly biased it toward large caps. But to be honest, I haven't looked at it in a few years.