Value at risk (VaR) is a measure of the risk of loss for investments. It estimates how much a set of investments might lose (with a given probability), given normal market conditions, in a set time period such as a day. VaR is typically used by firms and regulators in the financial industry to gauge the amount of assets needed to cover possible losses.
For a given portfolio, time horizon, and probability p, the p VaR can be defined informally as the maximum possible loss during that time after excluding all worse outcomes whose combined probability is at most p. This assumes mark-to-market pricing, and no trading in the portfolio.
For example, if a portfolio of stocks has a one-day 95% VaR of $1 million, that means that there is a 0.05 probability that the portfolio will fall in value by more than $1 million over a one-day period if there is no trading. Informally, a loss of $1 million or more on this portfolio is expected on 1 day out of 20 days (because of 5% probability).
More formally, p VaR is defined such that the probability of a loss greater than VaR is (at most) (1-p) while the probability of a loss less than VaR is (at least) p. A loss which exceeds the VaR threshold is termed a "VaR breach".
It is important to note that, for a fixed p, the p VaR does not assess the magnitude of loss when a VaR breach occurs and therefore is considered by some to be a questionable metric for risk management. For instance, assume someone makes a bet that flipping a coin seven times will not give seven heads. The terms are that they win $100 if this does not happen (with probability 127/128) and lose $12,700 if it does (with probability 1/128). That is, the possible loss amounts are $0 or $12,700. The 1% VaR is then $0, because the probability of any loss at all is 1/128 which is less than 1%. They are, however, exposed to a possible loss of $12,700 which can be expressed as the p VaR for any p ≤ 0.78125% (1/128).
VaR has four main uses in finance: risk management, financial control, financial reporting and computing regulatory capital. VaR is sometimes used in non-financial applications as well. However, it is a controversial risk management tool.
Important related ideas are economic capital, backtesting, stress testing, expected shortfall, and tail conditional expectation.
Common parameters for VaR are 1% and 5% probabilities and one day and two week horizons, although other combinations are in use.
The reason for assuming normal markets and no trading, and for restricting loss to things measured in daily accounts, is to make the loss observable. In some extreme financial events it can be impossible to determine losses, either because market prices are unavailable or because the loss-bearing institution breaks up. Some longer-term consequences of disasters, such as lawsuits, loss of market confidence and employee morale and impairment of brand names can take a long time to play out, and may be hard to allocate among specific prior decisions. VaR marks the boundary between normal days and extreme events. Institutions can lose far more than the VaR amount; all that can be said is that they will not do so very often.
The probability level is about equally often specified as one minus the probability of a VaR break, so that the VaR in the example above would be called a one-day 95% VaR instead of one-day 5% VaR. This generally does not lead to confusion because the probability of VaR breaks is almost always small, certainly less than 50%.
Although it virtually always represents a loss, VaR is conventionally reported as a positive number. A negative VaR would imply the portfolio has a high probability of making a profit, for example a one-day 5% VaR of negative $1 million implies the portfolio has a 95% chance of making more than $1 million over the next day.
Another inconsistency is that VaR is sometimes taken to refer to profit-and-loss at the end of the period, and sometimes as the maximum loss at any point during the period. The original definition was the latter, but in the early 1990s when VaR was aggregated across trading desks and time zones, end-of-day valuation was the only reliable number so the former became the de facto definition. As people began using multiday VaRs in the second half of the 1990s, they almost always estimated the distribution at the end of the period only. It is also easier theoretically to deal with a point-in-time estimate versus a maximum over an interval. Therefore, the end-of-period definition is the most common both in theory and practice today.
The definition of VaR is nonconstructive; it specifies a property VaR must have, but not how to compute VaR. Moreover, there is wide scope for interpretation in the definition. This has led to two broad types of VaR, one used primarily in risk management and the other primarily for risk measurement. The distinction is not sharp, however, and hybrid versions are typically used in financial control, financial reporting and computing regulatory capital.
To a risk manager, VaR is a system, not a number. The system is run periodically (usually daily) and the published number is compared to the computed price movement in opening positions over the time horizon. There is never any subsequent adjustment to the published VaR, and there is no distinction between VaR breaks caused by input errors (including IT breakdowns, fraud and rogue trading), computation errors (including failure to produce a VaR on time) and market movements.
A frequentist claim is made that the long-term frequency of VaR breaks will equal the specified probability, within the limits of sampling error, and that the VaR breaks will be independent in time and independent of the level of VaR. This claim is validated by a backtest, a comparison of published VaRs to actual price movements. In this interpretation, many different systems could produce VaRs with equally good backtests, but wide disagreements on daily VaR values.
For risk measurement a number is needed, not a system. A Bayesian probability claim is made that given the information and beliefs at the time, the subjective probability of a VaR break was the specified level. VaR is adjusted after the fact to correct errors in inputs and computation, but not to incorporate information unavailable at the time of computation. In this context, "backtest" has a different meaning. Rather than comparing published VaRs to actual market movements over the period of time the system has been in operation, VaR is retroactively computed on scrubbed data over as long a period as data are available and deemed relevant. The same position data and pricing models are used for computing the VaR as determining the price movements.
Although some of the sources listed here treat only one kind of VaR as legitimate, most of the recent ones seem to agree that risk management VaR is superior for making short-term and tactical decisions in the present, while risk measurement VaR should be used for understanding the past, and making medium term and strategic decisions for the future. When VaR is used for financial control or financial reporting it should incorporate elements of both. For example, if a trading desk is held to a VaR limit, that is both a risk-management rule for deciding what risks to allow today, and an input into the risk measurement computation of the desk's risk-adjusted return at the end of the reporting period.
VaR can also be applied to governance of endowments, trusts, and pension plans. Essentially, trustees adopt portfolio Values-at-Risk metrics for the entire pooled account and for the diversified parts individually managed. Instead of probability estimates they simply define maximum levels of acceptable loss for each. Doing so provides an easy metric for oversight and adds accountability, as managers are directed to manage within the additional constraint of avoiding losses beyond a defined risk parameter. VaR utilized in this manner adds relevance as well as a way of monitoring risk that is far more intuitive than the standard deviation of return. Use of VaR in this context, as well as a worthwhile critique of board governance practices as they relate to investment management oversight in general, can be found in Best Practices in Governance.
Let $X$ be a profit and loss distribution (loss negative and profit positive). The VaR at level $\alpha \in (0,1)$ is the smallest number $y$ such that the probability that $Y := -X$ does not exceed $y$ is at least $1-\alpha$. Mathematically, $\operatorname{VaR}_{\alpha}(X)$ is the $(1-\alpha)$-quantile of $Y$, i.e.,

$$\operatorname{VaR}_{\alpha}(X) = -\inf\{x \in \mathbb{R} : F_{X}(x) > \alpha\} = F_{Y}^{-1}(1-\alpha).$$

This is the most general definition of VaR and the two identities are equivalent (indeed, for any real random variable $X$ its cumulative distribution function $F_{X}$ is well defined). However, this formula cannot be used directly for calculations unless we assume that $X$ has some parametric distribution.
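For instance, under the common parametric assumption (an illustration, not part of the definition above) that the profit-and-loss $X$ is normally distributed, the quantile has a closed form:

$$X \sim N(\mu, \sigma^{2}) \;\Rightarrow\; \operatorname{VaR}_{\alpha}(X) = -\mu + \sigma\,\Phi^{-1}(1-\alpha),$$

where $\Phi^{-1}$ is the standard normal quantile function; a 95% VaR ($\alpha = 0.05$) is then $-\mu + 1.645\,\sigma$.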
Risk managers typically assume that some fraction of the bad events will have undefined losses, either because markets are closed or illiquid, or because the entity bearing the loss breaks apart or loses the ability to compute accounts. Therefore, they do not accept results based on the assumption of a well-defined probability distribution. Nassim Taleb has labeled this assumption, "charlatanism". On the other hand, many academics prefer to assume a well-defined distribution, albeit usually one with fat tails. This point has probably caused more contention among VaR theorists than any other.
Value at risk can also be written as a distortion risk measure given by the distortion function

$$g(x) = \begin{cases} 0 & \text{if } 0 \leq x < 1-\alpha \\ 1 & \text{if } 1-\alpha \leq x \leq 1. \end{cases}$$
The term "VaR" is used both for a risk measure and a risk metric. This sometimes leads to confusion. Sources earlier than 1995 usually emphasize the risk measure, later sources are more likely to emphasize the metric.
The VaR risk measure defines risk as mark-to-market loss on a fixed portfolio over a fixed time horizon. There are many alternative risk measures in finance. Given the inability to use mark-to-market (which uses market prices to define loss) for future performance, loss is often defined (as a substitute) as change in fundamental value. For example, if an institution holds a loan that declines in market price because interest rates go up, but has no change in cash flows or credit quality, some systems do not recognize a loss. Also some try to incorporate the economic cost of harm not measured in daily financial statements, such as loss of market confidence or employee morale, impairment of brand names or lawsuits.
Rather than assuming a static portfolio over a fixed time horizon, some risk measures incorporate the dynamic effect of expected trading (such as a stop loss order) and consider the expected holding period of positions.
The VaR risk metric summarizes the distribution of possible losses by a quantile, a point with a specified probability of greater losses. A common alternative metric is expected shortfall.
Supporters of VaR-based risk management claim the first and possibly greatest benefit of VaR is the improvement in systems and modeling it forces on an institution. In 1997, Philippe Jorion wrote:
Publishing a daily number, on-time and with specified statistical properties holds every part of a trading organization to a high objective standard. Robust backup systems and default assumptions must be implemented. Positions that are reported, modeled or priced incorrectly stand out, as do data feeds that are inaccurate or late and systems that are too-frequently down. Anything that affects profit and loss that is left out of other reports will show up either in inflated VaR or excessive VaR breaks. "A risk-taking institution that does not compute VaR might escape disaster, but an institution that cannot compute VaR will not."
The second claimed benefit of VaR is that it separates risk into two regimes. Inside the VaR limit, conventional statistical methods are reliable. Relatively short-term and specific data can be used for analysis. Probability estimates are meaningful because there are enough data to test them. In a sense, there is no true risk because these are a sum of many independent observations with a left bound on the outcome. For example, a casino does not worry about whether red or black will come up on the next roulette spin. Risk managers encourage productive risk-taking in this regime, because there is little true cost. People tend to worry too much about these risks because they happen frequently, and not enough about what might happen on the worst days.
Outside the VaR limit, all bets are off. Risk should be analyzed with stress testing based on long-term and broad market data. Probability statements are no longer meaningful. Knowing the distribution of losses beyond the VaR point is both impossible and useless. The risk manager should concentrate instead on making sure good plans are in place to limit the loss if possible, and to survive the loss if not.
One specific system uses three regimes.
Another reason VaR is useful as a metric is its ability to compress the riskiness of a portfolio into a single number, making it comparable across different portfolios (of different assets). Within any portfolio it is also possible to isolate specific positions that might better hedge the portfolio and so reduce the VaR.
VaR can be estimated either parametrically (for example, variance-covariance VaR or delta-gamma VaR) or nonparametrically (for example, historical simulation VaR or resampled VaR). Nonparametric methods of VaR estimation are discussed in Markovich and Novak. A comparison of a number of strategies for VaR prediction is given in Kuester et al.
A McKinsey report published in May 2012 estimated that 85% of large banks were using historical simulation. The other 15% used Monte Carlo methods.
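As a concrete sketch of the two broad approaches (the function names, parameters and simulated data below are illustrative assumptions, not taken from the sources above):

import numpy as np
from scipy.stats import norm

def historical_var(returns, alpha=0.05):
    # Historical-simulation VaR: the empirical alpha-quantile of the
    # return series, reported as a positive loss number.
    return -np.quantile(returns, alpha)

def variance_covariance_var(returns, alpha=0.05):
    # Parametric (variance-covariance) VaR under a normality assumption.
    mu, sigma = returns.mean(), returns.std(ddof=1)
    return -(mu + sigma * norm.ppf(alpha))

# Illustrative usage on simulated daily returns
rng = np.random.default_rng(0)
daily_returns = rng.normal(0.0005, 0.01, size=1000)  # hypothetical data
print(historical_var(daily_returns))           # one-day 95% VaR, ~1.6% of value
print(variance_covariance_var(daily_returns))  # similar under normality

Either number is then scaled by the portfolio's market value to obtain a currency VaR.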
Backtesting is the process of determining the accuracy of VaR forecasts against actual portfolio profits and losses. A key advantage of VaR over most other measures of risk, such as expected shortfall, is the availability of several backtesting procedures for validating a set of VaR forecasts. Early examples of backtests can be found in Christoffersen (1998), later generalized by Pajhede (2017); these model a "hit-sequence" of losses greater than the VaR and test whether the "hits" are independent of one another and occur with the correct probability. E.g. when using a 95% VaR, a loss greater than VaR should be observed 5% of the time, and these hits should occur independently.
A number of other backtests are available which model the time between hits in the hit-sequence; see Christoffersen and Pelletier (2004), Haas (2006), Tokpavi et al. (2014), and Pajhede (2017). As pointed out in several of the papers, the asymptotic distribution is often poor when considering high levels of coverage, e.g. a 99% VaR, so the parametric bootstrap method of Dufour (2006) is often used to obtain correct size properties for the tests. Backtest toolboxes are available in Matlab or R, though only the first implements the parametric bootstrap method.
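For concreteness, here is a minimal sketch of an unconditional-coverage ("proportion of failures") backtest in the spirit of the hit-sequence tests cited above; it is an illustration, not any paper's reference implementation:

import numpy as np
from scipy.stats import chi2

def pof_backtest(losses, var_forecasts, p=0.05):
    # A 'hit' is a loss exceeding the VaR forecast on that day.
    hits = np.asarray(losses) > np.asarray(var_forecasts)
    n, n1 = hits.size, hits.sum()
    n0, pi_hat = n - n1, hits.mean()
    # Assumes 0 < n1 < n so the logs below are defined.
    # Likelihood ratio: nominal hit rate p vs. observed hit rate pi_hat.
    lr = -2 * ((n0 * np.log(1 - p) + n1 * np.log(p))
               - (n0 * np.log(1 - pi_hat) + n1 * np.log(pi_hat)))
    return lr, 1 - chi2.cdf(lr, df=1)  # asymptotically chi-squared(1)

A small p-value indicates the observed hit rate is inconsistent with the nominal probability; independence of the hits requires a separate test.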
The second pillar of Basel II includes a backtesting step to validate the VaR figures.
The problem of risk measurement is an old one in statistics, economics and finance. Financial risk management has been a concern of regulators and financial executives for a long time as well. Retrospective analysis has found some VaR-like concepts in this history. But VaR did not emerge as a distinct concept until the late 1980s. The triggering event was the stock market crash of 1987. This was the first major financial crisis in which a lot of academically-trained quants were in high enough positions to worry about firm-wide survival.
The crash was so unlikely given standard statistical models, that it called the entire basis of quant finance into question. A reconsideration of history led some quants to decide there were recurring crises, about one or two per decade, that overwhelmed the statistical assumptions embedded in models used for trading, investment management and derivative pricing. These affected many markets at once, including ones that were usually not correlated, and seldom had discernible economic cause or warning (although after-the-fact explanations were plentiful). Much later, they were named "Black Swans" by Nassim Taleb and the concept extended far beyond finance.
If these events were included in quantitative analysis they dominated results and led to strategies that did not work day to day. If these events were excluded, the profits made in between "Black Swans" could be much smaller than the losses suffered in the crisis. Institutions could fail as a result.
VaR was developed as a systematic way to segregate extreme events, which are studied qualitatively over long-term history and broad market events, from everyday price movements, which are studied quantitatively using short-term data in specific markets. It was hoped that "Black Swans" would be preceded by increases in estimated VaR or increased frequency of VaR breaks, in at least some markets. The extent to which this has proven to be true is controversial.
Abnormal markets and trading were excluded from the VaR estimate in order to make it observable. It is not always possible to define loss if, for example, markets are closed, as after 9/11, or severely illiquid, as happened several times in 2008. Losses can also be hard to define if the risk-bearing institution fails or breaks up. A measure that depends on traders taking certain actions, and avoiding other actions, can lead to self-reference.
This is risk management VaR. It was well established in quantitative trading groups at several financial institutions, notably Bankers Trust, before 1990, although neither the name nor the definition had been standardized. There was no effort to aggregate VaRs across trading desks.
The financial events of the early 1990s found many firms in trouble because the same underlying bet had been made at many places in the firm, in non-obvious ways. Since many trading desks already computed risk management VaR, and it was the only common risk measure that could be both defined for all businesses and aggregated without strong assumptions, it was the natural choice for reporting firmwide risk. J. P. Morgan CEO Dennis Weatherstone famously called for a "4:15 report" that combined all firm risk on one page, available within 15 minutes of the market close.
Risk measurement VaR was developed for this purpose. Development was most extensive at J. P. Morgan, which published the methodology and gave free access to estimates of the necessary underlying parameters in 1994. This was the first time VaR had been exposed beyond a relatively small group of quants. Two years later, the methodology was spun off into an independent for-profit business now part of RiskMetrics Group (now part of MSCI).
In 1997, the U.S. Securities and Exchange Commission ruled that public corporations must disclose quantitative information about their derivatives activity. Major banks and dealers chose to implement the rule by including VaR information in the notes to their financial statements.
Worldwide adoption of the Basel II Accord, beginning in 1999 and nearing completion today, gave further impetus to the use of VaR. VaR is the preferred measure of market risk, and concepts similar to VaR are used in other parts of the accord.
VaR has been controversial since it moved from trading desks into the public eye in 1994. A famous 1997 debate between Nassim Taleb and Philippe Jorion set out some of the major points of contention. Taleb claimed VaR:
In 2008 David Einhorn and Aaron Brown debated VaR in the Global Association of Risk Professionals Review. Einhorn compared VaR to "an airbag that works all the time, except when you have a car accident". He further charged that VaR:
New York Times reporter Joe Nocera wrote an extensive piece, "Risk Mismanagement", on January 4, 2009, discussing the role VaR played in the financial crisis of 2007–2008. After interviewing risk managers (including several of the ones cited above), the article suggests that VaR was very useful to risk experts, but nevertheless exacerbated the crisis by giving false security to bank executives and regulators. A powerful tool for professional risk managers, VaR is portrayed as both easy to misunderstand, and dangerous when misunderstood.
Taleb in 2009 testified in Congress asking for the banning of VaR for a number of reasons. One was that tail risks are non-measurable. Another was that for anchoring reasons VaR leads to higher risk taking.
VaR is not subadditive: VaR of a combined portfolio can be larger than the sum of the VaRs of its components.
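A standard textbook illustration (assumed here for concreteness, not quoted from this article's sources): take two independent positions $X$ and $Y$, each losing \$100 with probability 4% and nothing otherwise. Each breaches on its own with probability 4% < 5%, so each one-day 95% VaR is zero, yet

$$P(X + Y \text{ loses at least } 100) = 1 - 0.96^{2} \approx 0.078 > 0.05,$$

so the 95% VaR of the combined portfolio is \$100, strictly greater than the sum (zero) of the individual VaRs.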
For example, the average bank branch in the United States is robbed about once every ten years. A single-branch bank has about a 0.0004% chance of being robbed on a specific day, so the risk of robbery would not figure into a one-day 1% VaR. It would not even be within an order of magnitude of that, so it is in the range where the institution should not worry about it; it should insure against it and take advice from insurers on precautions. The whole point of insurance is to aggregate risks that are beyond individual VaR limits, and bring them into a large enough portfolio to get statistical predictability. It does not pay for a one-branch bank to have a security expert on staff.
As institutions get more branches, the risk of a robbery on a specific day rises to within an order of magnitude of VaR. At that point it makes sense for the institution to run internal stress tests and analyze the risk itself. It will spend less on insurance and more on in-house expertise. For a very large banking institution, robberies are a routine daily occurrence. Losses are part of the daily VaR calculation, and tracked statistically rather than case-by-case. A sizable in-house security department is in charge of prevention and control, the general risk manager just tracks the loss like any other cost of doing business. As portfolios or institutions get larger, specific risks change from low-probability/low-predictability/high-impact to statistically predictable losses of low individual impact. That means they move from the range of far outside VaR, to be insured, to near outside VaR, to be analyzed case-by-case, to inside VaR, to be treated statistically.
VaR is a static measure of risk. By definition, VaR is a particular characteristic of the probability distribution of the underlying (namely, VaR is essentially a quantile). For a dynamic measure of risk, see Novak, ch. 10.
There are common abuses of VaR:
The VaR is not a coherent risk measure since it violates the sub-additivity property, which requires that a risk measure $\rho$ satisfy

$$\rho(X+Y) \leq \rho(X) + \rho(Y).$$
However, it can be bounded by coherent risk measures like Conditional Value-at-Risk (CVaR) or entropic value at risk (EVaR). CVaR is defined as the average of the VaR values for confidence levels between 0 and $\alpha$.
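In the convention of the formal definition above (with $\alpha$ the tail probability), that average is usually written as the integral

$$\operatorname{CVaR}_{\alpha}(X) = \frac{1}{\alpha}\int_{0}^{\alpha}\operatorname{VaR}_{\gamma}(X)\,d\gamma,$$

a standard textbook form rather than a formula quoted from this article's sources.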
However VaR, unlike CVaR, has the property of being a robust statistic. A related class of risk measures is the 'Range Value at Risk' (RVaR), which is a robust version of CVaR.
For $X \in \mathbf{L}_{M^{+}}$ (with $\mathbf{L}_{M^{+}}$ the set of all Borel measurable functions whose moment-generating function exists for all positive real values) we have

$$\operatorname{VaR}_{1-\alpha}(X) \leq \operatorname{CVaR}_{1-\alpha}(X) \leq \operatorname{EVaR}_{1-\alpha}(X),$$

where

$$\operatorname{EVaR}_{1-\alpha}(X) = \inf_{z>0}\left\{\frac{1}{z}\ln\!\left(\frac{M_{X}(z)}{\alpha}\right)\right\},$$

in which $M_{X}(z)$ is the moment-generating function of $X$ at $z$. In the above equations the variable $X$ denotes the financial loss, rather than wealth as is typically the case.
Let’s start this chapter with a flashback. For many of us, when we think of the 70’s, we can mostly relate to all the great rock and roll music being produced from across the globe. However, the economists and bankers saw the 70’s very differently.
The global energy crisis of the 70’s had drawn the United States of America into an economic depression of sorts. This led to a high inflationary environment in the United States followed by elevated levels of unemployment (perhaps why many took to music and produced great music 🙂 ). It was only towards the late 70’s that things started to improve again and the economy started to look up. The United States took the right steps to ease the economy, and as a result, starting in the late seventies / early eighties, the economy was back on track. Naturally, as the economy flourished, so did the stock markets.
Markets rallied continuously from the early 1980s all the way to mid-1987. Traders describe this as one of the dream bull runs in the United States. The Dow made an all-time high of 2,722 in August 1987, roughly a 44% return over 1986. However, around the same time, there were again signs of a stagnating economy. In economic parlance, this is referred to as a ‘soft landing’ of the economy, where the economy takes a breather. Post the August 1987 peak, the market started to take a breather. The months of August, September, and October 1987 saw an unprecedented amount of mixed emotions. At every small correction, new leveraged long positions were taken. At the same time, there was a great deal of unwinding of positions as well. Naturally, the markets neither rallied nor corrected.
While this was panning out on the domestic front, trouble was brewing offshore, with Iran bombing American supertankers stationed near Kuwait’s oil port. October 1987 was one of a kind in the history of financial markets. I find the sequence of events which occurred during the 2nd week of October 1987 extremely intriguing; there was way too much drama and horror playing out across the globe –
The financial world had not witnessed such a dramatic turn of events. This was perhaps one of the very first ‘Black Swan’ events to hit the world hard. When the dust settled, a new breed of traders occupied Wall Street; they called themselves, “The Quants”.
The dramatic chain of events of October 1987 had multiple repercussions across the financial markets. Financial regulators were even more concerned about system-wide shocks and firms’ capability to assess risk. Financial firms were evaluating the probability of ‘firm-wide survival’ if something of such catastrophic magnitude were to shake up the financial system once again. After all, the theory suggested that ‘October 1987’ had a very slim chance of occurring, but it did.
It is very typical for financial firms to take up speculative trading positions across geographies, across varied counterparties, and across varied assets and structured assets. Naturally, assessing risk at such a level is nothing short of a nightmarish task. However, this was exactly what the business required. They needed to know how much they stood to lose if October 1987 were to repeat. The new breed of traders and risk managers, calling themselves ‘Quants’, developed highly sophisticated mathematical models to monitor positions and evaluate risk levels on a real-time basis. These folks came in with doctorates from different backgrounds – statisticians, physicists, mathematicians, and of course, traditional finance. Firms officially recognized ‘risk management’ as an important layer in the system, and risk management teams were inducted into the ‘middle office’ segment across the banks and trading firms on Wall Street. They were all working towards the common cause of assessing risk.
The then CEO of JP Morgan, Mr. Dennis Weatherstone, commissioned the famous ‘4:15 PM’ report – a one-page report which gave him a good sense of the combined risk at the firm-wide level. This report was expected at his desk every day at 4:15 PM, just 15 minutes after market close. The report became so popular (and essential) that JP Morgan published the methodology and started providing the necessary underlying parameters to other banks. Eventually, JP Morgan spun off this team and created an independent company, ‘The RiskMetrics Group’, which was later acquired by the MSCI group.
The report essentially contained what is called the ‘Value at Risk’ (VaR), a metric which gives you a sense of the worst-case loss if the most unimaginable were to occur tomorrow morning.
The focus of this chapter is just that. We will discuss Value at Risk, for your portfolio.
At the core of the Value at Risk (VaR) approach lies the concept of normal distribution. We have touched upon this topic several times across multiple modules in Varsity. For this reason, I will not get into explaining normal distribution at this stage. I’ll just assume you know what we are talking about. The Value at Risk concept that we are about to discuss is a ‘quick and dirty’ approach to estimating the portfolio VaR. I’ve been using this for a few years now, and trust me, it works just fine for a simple ‘buy and hold’ equity portfolio.
In simple words, Portfolio VaR helps us answer the following questions –
– If the markets go against us, what is the worst-case loss on the portfolio tomorrow?
– What is the probability associated with that worst-case loss?
Portfolio VaR helps us identify this. The steps involved in calculating portfolio VaR are very simple, and are as stated below –
– Identify the distribution of the portfolio returns
– Check whether the distribution is normal
– Sort the returns and locate the least value within 95% of the observations; this is the Portfolio VaR
Of course, for better understanding, let us apply this to the portfolio we have been dealing with so far and calculate its Value at Risk.
In this section, we will concentrate on the first two steps (as listed above) involved in calculating the portfolio VaR. The first two steps require us to identify the distribution of the portfolio returns. For this, we need to deal with either the normalized returns or the direct portfolio returns. Do recall, we have already calculated the normalized returns when we discussed the ‘equity curve’. I’m just using the same here –
You can find these returns in the sheet titled ‘EQ Curve’. I’ve copied these portfolio returns onto a separate sheet to calculate the Value at Risk for the portfolio. At this stage, the new sheet looks like this –
Remember, our agenda at this stage is to find out what kind of distribution the portfolio returns fall under. To do this, we do the following –
Step 1 – From the given time series (of portfolio returns) calculate the maximum and minimum return. To do this, we can use the ‘=Max()’ and ‘=Min()’ functions in Excel.
Step 2 – Estimate the number of data points. This is quite straightforward; we can use the ‘=Count()’ function for it.
There are 126 data points; please do remember we are dealing with just the last six months’ data for now. Ideally speaking, you should be running this exercise on at least 1 year of data. But as of now, the idea is just to push the concept across.
Step 3 – Bin width
We now have to create a ‘bin array’ under which we can place the frequency of returns. The frequency of returns helps us understand the number of occurrences of a particular return. In simple terms, it helps us answer ‘how many times has a return of, say, 0.5% occurred over the last 126 days?’. To do this, we first calculate the bin width as follows –
Bin width = (Difference between max and min return) / 25
I’ve selected 25 based on the number of observations we have.
= (3.26% – (-2.82%))/25
=0.002431
Step 4 – Build the bin array
This is quite simple – we start from the lowest return and increment it by the bin width. For example, the lowest return is -2.82% (-0.0282 in decimal form), so the next cell would contain
= -0.0282 + 0.002431
= -0.0258, i.e. -2.58%
We keep incrementing this until we hit the maximum return of 3.26%. Here is how the table looks at this stage –
And here is the full list –
We now have to calculate the frequency of these return occurring within the bin array. Let me just present the data first and then explain what is going on –
I’ve used the ‘=Frequency()’ function in Excel to calculate the frequency. The first row suggests that out of the 126 return observations, there was only 1 observation at -2.82%. There were 0 observations between -2.82% and -2.58%. Similarly, there were 13 observations between 0.34% and 0.58%. So on and so forth.
To calculate the frequency, we simply have to select all the cells next to Bin array, without deselecting, type =frequency in the formula bar and give the necessary inputs. Here is the image of how this part appears –
Do remember to hit ‘Ctrl + shift + enter’ simultaneously and not just enter. Upon doing this, you will generate the frequency of the returns.
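If you prefer code to spreadsheets, the whole bin-and-count exercise can be sketched in Python as below (the file name and data layout are illustrative assumptions):

import numpy as np

# Hypothetical input: one column of 126 daily portfolio returns
returns = np.loadtxt("portfolio_returns.csv")

# Steps 1 and 2 - max, min and number of observations
r_max, r_min, n = returns.max(), returns.min(), len(returns)

# Steps 3 and 4 - bin width and bin array (25 bins, as in the text)
bin_edges = np.linspace(r_min, r_max, 26)  # 26 edges define 25 bins

# The '=Frequency()' step - count of returns falling in each bin
freq, _ = np.histogram(returns, bins=bin_edges)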
Step 5 – Plot the distribution
This is fairly simple. We have the bin array which is where all our returns lie and next to that we have the frequency, which is the number of times a certain return has occurred. We just need to plot the graph of the frequency, and we get the frequency distribution. Our job now is to visually estimate if the distribution looks like a bell curve (normal distribution) or not.
To plot the distribution, I simply have to select all the frequency data and opt for a bar chart. Here is how it looks –
Clearly what we see above is a bell-shaped curve, hence it is quite reasonable to assume that the portfolio returns are normally distributed.
Now that we have established that the returns are normally distributed, we can proceed to calculate the Value at Risk. From here on, the process is quite straightforward. To do this, we have to reorganize the portfolio returns from ascending to descending order.
I’ve used Excel’s sort function to do this. At this stage, I will go ahead and calculate the Portfolio VaR and Portfolio CVaR, and I will shortly explain the logic behind this calculation.
Portfolio VaR – is defined as the least value within 95% of the observations. We have 126 observations, so 95% of this is 120 observations. Portfolio VaR is, essentially, the least value within the first 120 observations. This works out to be -1.48%.
I take the average of the remaining 5% of the observations, i.e. the average of the last 6 observations, and that is the Conditional Value at Risk or CVaR.
The CVaR works out to -2.39%.
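Here is the same sort-and-slice logic sketched in Python (a minimal illustration, not Varsity’s own code):

import numpy as np

def portfolio_var_cvar(returns, confidence=0.95):
    ordered = np.sort(returns)[::-1]           # descending: best day first
    cutoff = round(len(ordered) * confidence)  # e.g. 126 * 0.95 = 120
    var = ordered[cutoff - 1]                  # least value within the 95%
    cvar = ordered[cutoff:].mean()             # average of the worst 5%
    return var, cvar

# With the chapter's 126 daily returns, this should give roughly
# VaR = -1.48% and CVaR = -2.39%.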
You may have many questions at this stage; let me list them down here along with the answers –
I hope the above discussion makes sense, do apply this on your equity portfolio and I’m sure you will gain a greater insight into how your portfolio is positioned.
We have discussed quite a few things with respect to the portfolio and the risk associated with it. We will now proceed to understand risk with respect to trading positions.
Download the Excel workbook used in this chapter.
According to experts, reading the quarterly earnings of a company is an art that needs to be cultivated over time with careful and deliberate effort. For any company, the quarterly earnings report is like an inner compass that gives a sneak peek into its present and future performance. It also helps in analysing the value of the company. Yet a lot of ordinary investors still cannot fathom a company’s quarterly earnings. How do you read the quarterly results of a company? What do these results tell you about the company? Why do companies publish their quarterly results in the first place?
As per SEBI (Securities and Exchange Board of India) guidelines, every listed company must publish its quarterly reports to the public to safeguard the interests of investors.
As an investor, a company’s quarterly result will help you assess the present and future performance and value of the company. The quarterly result also tells you whether you should invest in the company for the long term. For short-term investors or intraday traders, the quarterly result of a big company can have a direct impact on the market. Every time a big company announces its quarterly result, the markets rise or fall, depending on the effect.
Gross sales are the total sales of a company within a stipulated time. A steady rise in gross sales is an indicator of growing demand and good business health.
Net sales are a company’s gross sales minus its discounts, returns and allowances. Net sales are often the figure reported as top-line revenue on the income statement. This is a better indicator of business health than gross sales.
Operating income indicates the amount of profit realised from a business’s operations, after deducting operating expenses such as wages, depreciation, and cost of goods sold. It is a measure of the profitability of the company.
On the other hand, non-operating income is income earned outside the core business. It includes revenue from dividends, rental income, and other non-core sources.
A steady decrease in operating income could mean a declining market share or reduced demand for the company’s products or services.
Operating profit = Net sales – Operating expenses
Operating expenses include the costs of running the business, such as salaries, rent, utility bills like electricity, office expenses such as stationery, and license costs. They also include the cost of research and development, and legal and bank charges, among others. Other fixed and variable expenses that form a part of the operating costs need to be deducted from net sales to arrive at the operating profit of the business. A high operating profit indicates a healthy business. The operating profit shows the ongoing business conditions as well as the efficiency of the management.
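To illustrate with made-up numbers: if net sales are Rs 500 crore and operating expenses total Rs 420 crore, the operating profit is Rs 500 – Rs 420 = Rs 80 crore, an operating margin of 16% of net sales.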
Margins point at the ‘safety net’ of the company. The profit should not ideally come at the cost of margin. When there’s a decrease in the EBIT margin of the company, it signifies that the company’s profitability has taken a hit.
Interest cost is the money paid for a loan amount, to run a business. Hence, an increase in the interest cost indicates an increase in the debt of the company.
Some other pointers
A company’s net profit is also called its bottom line. It refers to the operating profit minus tax minus loan repayment. It is one of the crucial indicators of a company’s financial health. Hence, it is the most sought-after pointer in a quarterly earnings report. The higher the company’s net profit, the higher is the company’s profitability.
When net profit is divided by the total number of outstanding shares, we get EPS. EPS is the part of a company’s profit that is allocated to every individual share of the stock, and it is imperative for investors and people who trade in the stock market. The better the EPS of a company, the higher its profitability. It is yet another important indicator of a company’s financial health and is widely used in the industry.
For an investor, EPS is a very good indicator of the performance of the company. Growing EPS, in turn, results in more earnings for the shareholders. For an investor who is interested in a steady source of income, the EPS ratio helps him understand the room a company has for increasing its current dividend. The EPS of a company should always be considered in comparison with other companies.
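A minimal sketch of the EPS arithmetic (all figures hypothetical):

# Hypothetical quarterly figures
net_profit = 250e7          # Rs 250 crore
outstanding_shares = 100e7  # 100 crore shares

eps = net_profit / outstanding_shares
print(f"EPS for the quarter: Rs {eps:.2f}")  # Rs 2.50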
What else to look for in a quarterly earnings report?
When it comes to banks, investors should also look at things like net interest margins and non-performing assets. Experts say that investors should also look at the company’s cash in hand and pledged shares. Not all companies declare their pledged shares quarterly. Investors should also check the asset-liability statement that comes with the September quarter result, as it marks the half-way point of the financial year.
Parts of quarterly result
A company’s quarterly earnings report typically consists of an earnings statement, a balance sheet, and a cash flow statement. Here are the details:
Earnings statement: This document consists of the company’s earnings performance within a stipulated time.
Balance sheet: This consists of the company’s assets, shareholder equity, and liabilities if any. It gives an idea of what the company owns and any outstanding items it owes.
Cash flow statement: This document provides information about the cash flow the company receives. This could come from both its current business operations and investor sources. This also includes details on the outgoing cash used to pay for business-related investments, and activities during the period.
These statements give you a glimpse into the financial status of the company, condensing the information into a simpler format.
Earnings reports are often among the largest catalysts for stock movement. In the case of bigger stocks, earnings reports can shake the market. On the day an earnings report is released, the stock could be trading at a record high or low.
When a company improves its sales yet fails to meet the expectations of analysts, people will rush to sell their shares. Hence, analysts’ estimates are as important as the report itself.
Other important information for investors
Risk Factor
An investor or trader should carefully go through the potential risk addressed by the company in its earnings report. The risk could be with regards to a new segment of the business, a change in the company management, among others.
Legal Proceedings
This section of the company report mentions any current legal proceedings or outstanding lawsuits. This does not necessarily mean that an investor has to avoid this company. It’s important to check out the details of the legal case. Small lawsuits are prevalent. However, one needs to tread carefully when it involves big lawsuits.
Unregistered sales of equity securities
This is the part of the report where the company must supply information about “all equity securities of the registrant sold by the registrant during the period covered by the report that was not registered under the Securities Act.”