Wednesday, October 30, 2019

The Central Intelligence Agency (CIA) Assignment

The Central Intelligence Agency (CIA) - Assignment Example

The Central Intelligence Agency (CIA) is the leading independent intelligence agency of the US government, responsible for addressing the national security concerns of the US government. It was formed under the National Security Act of 1947. The agency handles work of a sensitive nature and therefore reports security concerns directly to the senior policymakers of the US government. The Senate and the President of the United States decide the appointment of the Director of the Central Intelligence Agency, who is responsible for all the operating, budgeting and human resource matters of the Agency. The Director of the CIA works as the National Human Source Intelligence (HUMINT) Manager (FAS, 2009, 1).

The history of intelligence activities in the United States started in the period of George Washington, but only since World War II have these activities been properly coordinated and directed by the government. For this purpose, a New York lawyer, William J. Donovan, was appointed as the first Coordinator of Information. He then became the head of the Office of Strategic Services (OSS) in 1942 when the US entered World War II. The responsibilities of the OSS were to collect and analyze information. However, it was dissolved after World War II along with other agencies, and the functions of the OSS were transferred to the State and War departments (DNI, 2011, 7). After some time, President Truman (the US President of that time) felt the need for an intelligence organization and decided to create a separate intelligence agency of the United States. Truman, under the National Security Act of 1947, established the Central Intelligence Agency (CIA).... These staff functions include human resources, protocol, public affairs, legal issues, information management, and mission innovation (FAS, 2009, 1).

VISION

The Agency has a vision to secure the world from threats. The vision statement of the CIA is "One Agency. One Community. An Agency unmatched in its core capabilities, functioning as one team, fully integrated into the Intelligence Community" (CIA Website). The statement clearly reveals the future direction of the CIA. The agency wants to be the unmatched intelligence agency in the world. It wants to create a secure and peaceful environment in the United States. It wants to be second to none.

MISSION & GOALS OF CIA

"We are the nation's first line of defense. We accomplish what others cannot accomplish and go where others cannot go. We carry out our mission by: Collecting information that reveals the plans, intentions and capabilities of our adversaries and provides the basis for decision and action. Producing timely analysis that provides insight, warning and opportunity to the President and decision makers charged with protecting and advancing America's interests. Conducting covert action at the direction of the President to preempt threats or achieve US policy objectives." (CIA Website)

The mission of the CIA reflects the true picture of intelligence. The agency is number one in providing intelligence services. Its primary purpose is to collect information on threats to the peaceful environment and interests of America. The agency also analyzes the information in order to know the hidden threats and then conveys it to the US official decision makers. Finally, the agency implements action plans according to the direction of decision makers.

Human Resource Management Essay Example

Human Resource Management - Essay Example

Various nations have been subjected to this problem, of which the condition of South Africa is worth mentioning. The mismatch has been widening in economies over the years and is evolving into one of the primary causes of concern for organizations. Educational institutions and organizations have been working jointly towards increasing the alignment between the skills demanded and the skills produced. Researchers have identified a number of ways organizations could devise policies and procedures for handling this skills shortage. Not only do they require modification of job designs, but they must also consider altering their existing human resource strategies. The project seeks to explore the various possible alternatives suggested by researchers and practitioners that can be applied to designing jobs in organizations confronting labour market shortages (Dychtwald, Erickson, Morison, 2006, p.88).

Impact of labour market shortages on organizations

Organizations are faced with a number of dilemmas arising out of skill shortages in the economy. Researchers and business practitioners are of the opinion that skill shortage has a direct impact on labour costs in organizations. Labour shortage happens to be one of the propelling factors for undertaking offshore recruitment, as local skill availability falls short of demand. Researchers have identified the standard for cost effectiveness as the price of available as well as qualified workers, both offshore and onshore (Atwater & Klass, 2007). In other words, it can be said that declining skills are directly associated with climbing labour costs in organizations and firms. According to Atwater & Klass (2007), skill shortage impacts businesses directly by way of relocation of existing business processes, such as manufacturing, production and services, from one region or country to another. This might include outsourcing or even subcontracting within the same country, or shifting business practices from one nation to another. Although firms might gain from the phenomenon by availing of skilled personnel at lower prices, it involves great structural losses. The wide range of educational requirements, along with the severity of the associated search costs, is so high that researchers have not yet been able to draw a conclusion as to whether hiring offshore skills is an optimum option to adopt (Atwater & Klass, 2007). Although organizations and industries today are in different stages of off-shoring their jobs and functional activities, Atwater & Klass (2007) are of the opinion that organizations maintaining the view that availability of labour is a local issue are likely to face a serious competitive disadvantage compared to those who consider labour as a global market (Atwater & Klass, 2007).

Main considerations in designing jobs for organizations facing labour market shortages

Research conducted by the McKinsey Global Institute states that the looming shortage of local talent has had serious implications for the nation's multinationals, triggering the process of organizations recruiting offshore skills (Farrell &

Monday, October 28, 2019

The law of equity Essay

The law of equity Essay

By the end of the 13th century, the central authority had established its precedence at least partly through the establishment of the common law. The Court of Exchequer was a court originally dealing with disputes involving revenue and taxation laws. The Court of Common Pleas was where pleas between subject and subject were brought. And the King's Bench heard actions to which the King was a party.

The common law, however, had a number of defects. The inflexibility of the writ system appeared to lead to injustice, because matters that were not within the scope of the writs recognized by the common law were dismissed. Furthermore, the common law did not recognize rights in property other than those of strict legal ownership. Nor did it recognize security for loans (mortgages) or the rights of third parties in general. The common law courts had no power of enforcement. Also, the common law did not allow any form of oral evidence. The only remedy provided by the common law was damages, which were inappropriate in certain cases. This led to injustice and the need to remedy the perceived weaknesses in the common law system. The more general a rule, the less likely it is to do justice in all the particular cases to which it applies. Moreover, an attempt to construct in advance the qualifications to the rule necessary to do justice in all cases would lead to a system of rules too complex, even if all the problems could be foreseen.

The Court of Chancery emerged as a solution to the problems faced by the common law system, by administering the law of equity. Proceedings before the Chancellor were simple, and were in other respects advantageous when compared with the proceedings of the common law courts. Plaintiffs unable to obtain access to the three common law courts would turn for help to the Chancellor. Moreover, the Chancellor developed several remedies which were not available in other courts, most notably injunction, specific performance, rescission and rectification. Another improvement made by equity was the imposition of additional obligations on an individual while recognizing his or her rights at common law: by accepting that a trustee is the legal owner of property while requiring the individual to hold it for the benefit of another. Equity is concerned with individual justice. Therefore, it is only available at the discretion of the court. This also means that anyone who seeks equitable remedies must not themselves be guilty of misconduct in the case.

The division between the common law courts and the Courts of Equity was eventually removed when the courts were combined under the Judicature Acts 1873-1875. Matters of both law and equity are now determined in the course of one set of proceedings: if there is any conflict between rules of law and rules of equity, the latter are to prevail.

An injunction is an order that prevents a person from performing or continuing to perform a particular act. In the case of Kennaway v. Thompson, the plaintiff sought an injunction to restrain a motor boat racing club from committing nuisance by excessive noise. The Court of Appeal granted the injunction, holding that the rights of the plaintiff shouldn't be overridden by the interest of the club or the general public. In considering whether to grant an injunction or damages in lieu under Lord Cairns' Act, the public interest does not prevail over private rights. In this case, damages wouldn't have satisfied the plaintiff's private rights.
Specific performance is an order that requires a person to perform or continue to perform a particular act. In the case of Jones v. Lipman, the defendant entered into a binding contract to sell some land to the plaintiff. After the date of the contract, the defendant changed his mind, and sought to avoid specific performance by selling the land to a company acquired by him solely for this purpose and controlled by him. While specific performance would not normally have been ordered against a vendor who no longer owned the property, here the defendant was still in a position to complete the contract, because the company was a sham in an attempt to avoid recognition by equity. Thus, specific performance was decreed against the vendor and the company.

Rescission is an order that returns parties to a contractual agreement to the position they were in before the agreement was entered into. In Cooper v. Phibbs, Phibbs was the legal owner and trustee of land which, unknown to either party, belonged in equity to Cooper. Phibbs improved the land and agreed to let it to Cooper. On discovering the facts, Cooper sought to rescind the letting agreement. The House of Lords held that, subject to a lien for Phibbs's expenditure, it should be set aside. If parties contract under a mutual mistake and misapprehension as to their relative and respective rights, the result is that the agreement is liable to be set aside as having proceeded upon a common mistake.

Rectification is an order that relates to the alteration, under extremely limited circumstances, of contractual documents. In A. Roberts and Co. Ltd. v. Leicestershire County Council, the plaintiffs had undertaken to build a school for the defendants. The agreement provided that the school should be completed within a period of 18 months, but the officers of the Council altered the period to 30 months in the draft contract without making it clear to the company. The company signed the contract without noticing the change, and one of the defendant's officials was aware of the mistake. Rectification was ordered.

In conclusion, equity has greatly ameliorated the common law system. Various forms of remedies other than damages have been made available under specific circumstances, such as injunction, rescission, rectification and specific performance. However, in most instances there are differences between the operation of law and equity rather than conflict. For example, different remedies may be available in respect of what both systems acknowledge to be a wrong. In respect of a nuisance, damages and injunction may come into conflict.

Sunday, October 27, 2019

Torsional, Axial and Lateral Bottom Hole Assembly Vibrations

Torsional, Axial and Lateral Bottom Hole Assembly Vibrations

Experimental investigation of torsional, axial and lateral bottom hole assembly vibrations

1. Introduction

The oil and gas industry is one of the largest and most globalized industries in the world. Petroleum products include plastics, fuels, ointments and many more. With increases in world population, consumption of and demand for petroleum products have increased. Primarily, petroleum products are used as energy sources. With the increase in demand, different, unconventional sources are being explored.

Drilling in itself is a complex process due to the unknown formations in the earth. A hole is drilled in the earth with a bit, and tubulars are attached to it to provide axial force and rotation. The tubulars are hollow, and the drilling fluid is circulated through them to extract the cut rock. Once the hole is drilled, the bit and tubulars are taken out, and a larger tubular is pushed down the hole and cemented around the annulus to stop the hole from caving in. The process is the same as drilling a water well, but with greater depth, pressures, temperatures and complexities. Some oil and gas sources are too deep or too complex to be explored, but with advanced technological developments in drilling, extended-reach, multilateral and horizontal wells, it is now possible to extract unconventional oil and gas.

Vibrations

When an entity oscillates around its equilibrium point, the entity is said to be in vibration. In most cases vibrations are undesirable, as they cause harm to the system and dissipate energy. Vibrations occur when force or energy is imparted to a system. In the absence of external excitation, the vibrations are called free vibrations. Systems in a state of free vibration oscillate with natural frequencies, which depend on the properties of the system. In the presence of external excitation, the vibrations experienced by the system are called forced vibrations. Vibrations become increasingly large and are most damaging when the excitation frequency is close to one of the natural frequencies. This phenomenon is called resonance. When there is energy dissipation from the system in the form of heat, sound, friction or any other mechanism, the resulting vibrations are called damped vibrations.

The drillstring assembly is a very long, slender system prone to excessive vibration due to the various forces acting on it. The primary forces on the BHA are torque due to rotation and bit-rock interaction, axial forces due to gravity, and lateral forces due to bending of the long pipe and hitting the walls of the borehole.

Types of Drillstring Vibrations

Drillstring vibrations are categorized based on the forces acting on the string: torsional, axial and lateral forces. These forces correspond to the three types of vibration: 1) torsional vibrations, 2) axial vibrations and 3) lateral vibrations. Torsional vibrations: the drillstring is rotated from the surface to provide torque or shear force to cut the rock.

3. Experimental Setup

A lab-scale drilling rig was constructed for the purpose of competing in the Drillbotics International Student Competition. The OU Drillbotics team participated and won the competition in 2015. Budget restrictions limited the quality and quantity of sensors mounted on the rig. The rig was upgraded in 2016 with additional budget.
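Before turning to the rig itself, the resonance idea above can be made concrete with a minimal single-degree-of-freedom sketch in Python. All parameter values here are illustrative assumptions, not measurements from the rig: the point is only that the steady-state response of a lightly damped oscillator is sharply amplified when the excitation frequency approaches the natural frequency.

    import math

    def natural_frequency_hz(k: float, m: float) -> float:
        # Undamped natural frequency of a mass-spring system: f_n = sqrt(k/m) / (2*pi)
        return math.sqrt(k / m) / (2.0 * math.pi)

    def amplification(f_excite: float, f_n: float, zeta: float) -> float:
        # Steady-state dynamic amplification under harmonic forcing;
        # with r = f_excite / f_n, the factor peaks near r = 1 when damping ratio zeta is small.
        r = f_excite / f_n
        return 1.0 / math.sqrt((1.0 - r**2) ** 2 + (2.0 * zeta * r) ** 2)

    # Illustrative values only: m = 2 kg and k = 8000 N/m give f_n of about 10 Hz (~600 RPM).
    f_n = natural_frequency_hz(k=8000.0, m=2.0)
    for rpm in (300, 600, 900):
        print(rpm, round(amplification(rpm / 60.0, f_n, zeta=0.05), 2))

In this toy case the 600 RPM excitation sits almost exactly on the natural frequency, and the response is amplified roughly tenfold relative to a static load, which is the resonance condition described above.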
The following sections describe the rig setup and the sensors installed, divided based on the systems: (i) Rig Structure, (ii) Hoisting System, (iii) Rotary System, (iv) Circulation System, (v) Measurement, Instrumentation and Control System and (vi) Drillstring Assembly.

3.1 Rig Structure

The rig structure consists of three major parts: substructure, mast and travelling block.

3.1.1 Rig Substructure

In-house built structures have been found to be significantly cheaper than ready-made structures on the market. Moreover, the former provide flexibility in the selection of dimensions, load ratings and design styles. This substructure was designed to pass through doors, so the rig could be used for future educational purposes. The rig substructure was constructed using 1½-inch square iron tubing with overall dimensions of 84 x 27 x 36 inches. To allow rig mobility, five commercial-grade caster wheels were installed, each with a load capacity of 1000 lbs. A 47 x 27 inch shelf made of ¼-inch-thick iron sheet was added for installation of the circulation system and electrical box. This left the rig with a space of 37 x 27 x 36 inches to accommodate the rock sample.

3.1.2 Mast

A mast of cantilever design was constructed out of aluminium, as shown in Fig. 3.1.

Figure 3.1: Rig mast laid down on the substructure top

Constructing the mast with aluminium made it 2.5 times lighter than an equivalent steel mast. A 10-inch-wide C-channel was supported by two 90° angle bars. The base was attached to the table with hinges for reclining and easy transport of the rig.

3.1.3 Travelling Block

The travelling block slides on a pair of linear guide rails attached to the mast. Linear roller bearings, or pillow blocks, attached to the back of the travelling block provide near-smooth motion. Two horizontal plates were bolted onto the vertical plate. The upper plate acted as a mount for the AC motor and the lower plate supported the swivel. A torque sensor was placed in between the motor shaft and the swivel. The total weight of the travelling block was measured to be 77.72 lbs.

Figure 3.2: Travelling Block Assembly

3.2 Hoisting System

Hoisting system components include a double-acting air cylinder, pneumatic lines, a pair of pneumatic convertors (Fig. A.1 and Fig. A.2) and a compressed air supply line. A regulated compressed air line of up to 130 psi was hooked up to the pneumatic convertors. Two pneumatic lines from the convertors, of maximum capacity 120 psi, controlled the air flow and connected to the inlet ports of the dual-acting piston. The dual-acting air piston has a 1.125-inch bore and a 36-inch stroke length. The system has the capacity to hoist a load of 119.28 lbs.

3.3 Rotary System

A top drive system was installed, with a 1 HP motor with a maximum speed of 1170 RPM mounted on the motor mount of the travelling block. The motor shaft is connected to the Omega torque sensor via a spring coupling. The torque sensor has a rotating shaft-to-shaft configuration with an operating speed of 5000 RPM. The torque sensor is connected to the swivel via another spring coupling. The swivel was designed and fabricated in-house, with a pressure rating of 300 psi and a brass outer body for corrosion resistance. The chrome-plated rod is resistant to wear from the abrasion of the seals. The swivel rod is attached to an adapter at the base of the bottom plate. A four-bolt flange-mounted ball bearing prevents any load from being transmitted to the rotary assembly.

3.4 Circulation System

It is important to remove cuttings from the hole to drill further ahead.
To accomplish this, water-, oil- and foam-based drilling fluids were taken into consideration. Water from the city line, without any additives, was chosen as the drilling fluid after taking into account the cost of a closed-loop system for recirculation and the cost of additives and base fluids. It was also assumed that the effect of the drilling fluid on drillstring vibrations was negligible. A roller pump with a pressure rating of 300 psi was installed to circulate the water down the drillstring assembly. A 1.5 HP 3-phase motor powers the pump. The Omega digital-display flow meter, which was installed after the pump, can monitor flow rates up to 15 GPM. Pressure monitoring is done by a pressure transducer rated at 500 psi. Pressure fluctuations of up to 50 psi were observed due to the intermittent flow supplied by the roller pump. A pressure dampener was built with spare couplings and installed upstream of the flow meter. This provided smooth and stable flow. An analog pressure gauge was mounted atop the dampener to monitor fluctuations. A rubber hose with a pressure rating of 300 psi connects the flow meter with the swivel. Drilling fluid from the swivel then flows into the drillstring, comes out of the bit nozzles and exits the hole through the annulus. As the drilling fluid is just water, it is passed down the sewer line along with the cuttings and not recirculated.

3.5 Measurement, Instrumentation and Control System

The Measurement, Instrumentation and Control system is the most important system in the automated rig. Sensors are mounted on the rig at various places for different functions. They provide analog data to the data acquisition module, an Omega DAQ-3001. An electrical box is mounted on the bottom shelf to shield the card and other signal conditioners from electrical interference. The data from the DAQ module transfers into the desktop computer, which is installed on the rig structure for control of the automated rig and for storage and display of data. An Excel-based VBA program is used for the operation of the rig.

3.5.1 Measurement Sensors

The following sensors are installed on the rig to monitor the performance of the rig and the drilling process.

3.5.1.1 Displacement Laser Sensor

An aluminum strip is attached to the top of the travelling block with reflective tape stuck on it. A Banner laser sensor is mounted about 0.5 inches above the travelling block on the mast. It can measure displacements up to 3.93 inches with an accuracy between 0.019 inch and 0.039 inch.

3.5.1.2 Lateral Vibration Laser Sensor

To measure lateral vibrations of the drillstring, an xyz laser sensor is used. It can measure distances from 1.57 inch to 6.29 inch with an accuracy of better than 20 micrometers. The sensor was earlier mounted on an aluminium plate attached to the travelling assembly, but the strip was long, and excessive vibrations due to bit-rock interaction caused the strip to vibrate at high amplitude, providing inconclusive and erroneous data. Hence the mounting structure was remade from square iron tubing to give a sturdier structure. The mounting vibration amplitude was thereby reduced and could be observed only during excessive vibrations at higher RPM and WOB.

3.5.1.3 Optical RPM Sensor

An LED-based, reflective-type optical RPM sensor, which can measure up to 15000 RPM, is mounted on the cage of the swivel. Reflective tape is stuck on the spring coupling between the swivel and the torque sensor. The sensor is mounted at an angle so that the reflective area increases for better measurement.
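Two of the numbers quoted in the hoisting and rotary-sensor descriptions above can be sanity-checked with a few lines of Python. This is a hedged sketch, not the rig's actual DAQ code (which is not shown): the hoist capacity follows from pressure times piston area, and the optical RPM reading follows from the time between reflective-tape pulses, assuming one pulse per revolution.

    import math

    def hoist_capacity_lbf(bore_in: float, pressure_psi: float) -> float:
        # Force available from a pneumatic cylinder: pressure times piston area.
        return pressure_psi * math.pi * (bore_in / 2.0) ** 2

    def rpm_from_pulse_period(period_s: float, pulses_per_rev: int = 1) -> float:
        # Rotary speed from the time between pulses seen by the optical sensor.
        return 60.0 / (period_s * pulses_per_rev)

    print(round(hoist_capacity_lbf(1.125, 120.0), 2))  # 119.28 lbf, matching the quoted capacity
    print(rpm_from_pulse_period(0.1))                  # 600.0 RPM for a 100 ms pulse period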
3.5.1.4 Torque Sensor

It is assumed that the torque measured by the torque sensor is the torque due to bit-rock interaction. An Omega rotating shaft-to-shaft torque sensor with a torque rating of 64 inch-pounds has been mounted above the swivel.

3.5.1.5 Axial Vibration Sensor

An axial vibration sensor is installed on the bottom plate of the travelling block, adjacent to the flange-mounted ball bearing. The VBT-1 vibration sensor has a micro-electro-mechanical system which sends a voltage proportional to the vibration velocity to the data acquisition module. It measures vibration velocities from 0-25 mm/sec.

3.6 Drillstring Assembly

The drillstring assembly comprises three parts: an aluminum pipe, a bit sub and a bit. The pipe is made of Aluminum 6061 with an OD of 0.375 inch and a wall thickness of 0.035 inch. Both ends of the pipe have 3/8 NPT male compression fittings attached. One end is connected to a 3/8 NPT female brass adapter, which is connected to the swivel rod, and the other end is connected to the 3/8 NPT female bit sub. The bit sub is made out of stainless steel for corrosion resistance. It has 3/8 NPT female threads on both ends. A roller sleeve with an OD of 1.1 inch and an ID of 0.9 inch slides over the bit sub to act as a stabilizer and provide smooth rotation. It has a counterbore to hold constrictions of various sizes to change the pressure drop in the system. The bit was fabricated in-house using stainless steel round bar and machined to replicate the Baker Hughes bit provided for the competition. The cutters were bought from vendors, and the OD of the cutters available was 0.5 inch. They are made out of carbide, as opposed to the diamond cutters provided, and also wear down faster. The cutters are screwed onto the cutter faces and are replaceable.

4. Methodology

This chapter describes the procedure of the experiments performed and the data collected. It also discusses some assumptions, sensor calibration and data analysis.

4.1 Experimental Procedure

The rig and all its components are powered on and the Excel program initiated. The program has a separate sheet which takes the variables of the experimental run as input. The only variables changed for the set of experiments were RPM and WOB. Another variable, the height at which the string starts to rotate, was also varied, but it did not have any effect on the data. A pilot hole of 1.25-inch diameter and 1-inch depth was drilled into the rock sample using a coring bit and hammer-chisel to insert a guide shoe in the hole. The 6-inch-long guide shoe acted as a borehole wall and prevented bit walking. Using a level indicator, the rock sample was adjusted to be horizontal. The drillstring was then attached to the swivel adapter, and the rig was slid over the rock to align the drillstring and the guide shoe. Using the leveling screws, the rig was jacked up until horizontal. The inlet air pressure line and water line were opened up. Once all experimental variables were set, the program was initiated using the Start button. The first step of the program was to hoist the travelling block to the topmost position. At this point, a safety bar used to keep the travelling block hoisted was taken away. The travelling block slowly lowered down, and once the bit was inside the guide shoe, the top-drive motor and pump motor were activated, and the string started to rotate with water pressurized inside the pipe. The bit gradually touched the rock and the drilling process began.
A trial run was carried out to check whether the systems were working properly and data was being collected. A couple of millimeters were drilled during the trial run so that the hole got initiated. After the trial run, experimental runs were carried out. Each experiment was run for 6 minutes and stopped using the Stop button in the program. The pump then stopped pumping fluid and the drillstring stopped rotating. The travelling block was gradually lifted up to the topmost position. After that, new experimental variables were set and the next run was carried out. Experiments were performed on 2 different rock samples: the first set on a very hard, compacted sandstone and the other on a very soft, unconsolidated sandstone. The UCS of the hard sandstone ranged from 6000 to 9500 psi, while the UCS of the soft sandstone ranged from 2000 to 5000 psi.

4.2 Data Collection and Analysis

Data was continuously collected by the Omega data acquisition module and stored in an Excel sheet. The data of interest were WOB, RPM, torque, axial vibrations, lateral vibrations and ROP. Different plots were generated against the variables of interest to observe the dependency and behavior of the variable under investigation.

4.2.1 WOB and RPM Data

WOB is an independent variable with respect to our investigation. WOB was measured using a load cell attached to the back of the travelling block, connecting the piston. The calibration of WOB was carried out using the following procedure. The rig was slid onto a weighing scale. A set number of values were entered for the voltage sent to the bottom pneumatic convertor. A constant voltage of 2 volts was sent to the top pneumatic convertor to provide a constant pressure of 20 psi of resistance against erratic bouncing and to provide constant friction between the piston and cylinder walls in either direction. The reading on the weighing scale was recorded as WOB. Initially, the WOB calibration was performed in a static condition. It was observed that the WOB reading during the experiment differed from the values expected based on calibration. It was assumed that the change of conditions from static to dynamic was the cause of the difference. Hence, to simulate dynamic conditions during calibration, the rig was constantly hammered with a mallet to make the rig vibrate and negate static friction. The stabilized reading on the scale was used for calibration. But hammering still did not replicate the vibrations occurring during the drilling process, and hence the WOB measurement by the load cell differed from the values expected based on the calibration. Hence, the average value of the WOB was calculated over the complete experimental run and considered as the WOB exerted on the rock during the cutting process. The average WOB observed deviated by +19.9% to -20.6% from the expected input values. In an actual drilling process, the WOB is never constant: as drilling proceeds, the WOB decreases, and the driller lowers the drillstring to increase and maintain the set WOB. Hence a ±20% change from the set point is acceptable. RPM data was obtained from the optical RPM sensor, which was calibrated using a handheld RPM sensor. At lower RPM the error was around 8%, and at higher RPM it decreased to 0.5%. So at lower levels the set point was decreased by 4 to compensate for the error.

4.2.2 Torque, Axial and Lateral Vibration Data

A rotating shaft-to-shaft torque sensor was placed between the motor and the swivel, with a spring coupling on each end. Torque was calibrated using a torque wrench.
When run at idle conditions without any drilling action, the torque reading obtained was assumed to be friction. That extra torque of 1.114 inch-pounds was attributed to side force or friction inside the swivel and other rotating parts, such as the flange-mounted ball bearing. A laser displacement sensor was attached to the mast to detect the magnitude of lateral vibrations. It was aimed at the center of the drillstring such that the pipe was always in range of the laser. The laser sensor was kept 4 inches away from the center of the pipe. NPT connections are inherently non-concentric and cause misalignment of the pipe. The pipe wobbled due to this misalignment, oscillating nearer to and farther from the sensor, so the displacement took both negative and positive values. The most negative value of the displacement, -0.24165 inch, was used as a reference zero, and the complete data set was shifted towards positive values. The greater the magnitude, the farther the pipe travelled from the sensor, indicating higher lateral vibrations. An axial vibration sensor was attached to the bottom plate of the travelling block. It has a micro-electro-mechanical system inside to detect the vibration velocity and send a proportional voltage signal for measurement. The sensor came calibrated from the manufacturer, with a direct vibration-velocity-to-voltage relationship provided by the manufacturer.

4.2.3 ROP Data

ROP was directly calculated by the program by dividing the depth drilled by the time it took to drill, and stored in the Excel sheet. As the hard sandstone was difficult to drill, the sensor could not measure any significant change in drilled depth given the vibrations of the assembly and the error in measurement. Hence, ROP data for the hard sandstone is not taken into consideration. Soft sandstone data was collected and analyzed for the effect of vibrations and other parameters.

4.3 Data Analysis

Data was collected from the point where the bit touched the rock to the point when the program was stopped. All experiments were run for 6 minutes, during which an average of 100 data points were collected in an Excel sheet. The average of those data points was calculated and stored. Average values of torque, axial vibrations and lateral vibrations were plotted against RPM and WOB separately. Trends in the data were analyzed based on the plots: the change in torque, axial and lateral vibrations with an increase in RPM was observed, and the same practice was carried out for WOB. The plots are shown in Appendix B.

5. Results and Discussions

5.1 Torsional Vibrations

Hard sandstone: Looking at the plots (Fig. 5.1), with an increase in RPM at constant WOB, torque gradually increases. At around 300 RPM, there is a sudden increase in some cases, which then decreases. This behavior is unexpected, and no explanation has been found for the abnormality. It can be assumed that an abnormally hard layer was encountered during drilling, due to which such an increase is observed, as no such trend was observed in the uniform soft sandstone. But the general trend is a gradual increase in torque with an increase in RPM at constant WOB. No oscillation of torque was observed, indicating the absence of stick-slip.

Figure 5.1: Torque vs. RPM plot at constant WOB for experiments on hard sandstone

At constant RPM, with an increase in WOB there is a gradual increase in torque (Fig. 5.2). No specific trend for RPM is observed, as some of the low-RPM cases also have higher torque than high-RPM cases.

Figure 5.2: Torque vs. WOB plot at constant RPM for experiments on hard sandstone
Soft sandstone: Observing the plot for soft sandstone (Fig. 5.3), a gradual increase in torque was observed with an increase in RPM at constant WOB. No oscillation of torque was observed, indicating the absence of stick-slip.

Figure 5.3: Torque vs. RPM plot at constant WOB for experiments on soft sandstone

There is a clear and distinct trend of an increase in torque with an increase in WOB at constant RPM (Fig. 5.4).

Figure 5.4: Torque vs. WOB plot at constant RPM for experiments on soft sandstone

5.2 Lateral Vibrations

Hard sandstone: The plot (Fig. 5.5) shows that there is a gradual increase in lateral vibrations with an increase in RPM at constant WOB.

Figure 5.5: Lateral Vibrations vs. RPM plot at constant WOB for experiments on hard sandstone

No general trend is observed for the initial low-WOB experiments (Fig. 5.6). With an increase in WOB, the lateral vibrations decreased in the low-RPM cases and increased in the high-RPM cases.

Figure 5.6: Lateral Vibrations vs. WOB plot at constant RPM for experiments on hard sandstone

Soft sandstone: There is a general trend of an increase in lateral vibrations with an increase in RPM (Fig. 5.7), but it is not as significant as in the hard sandstone.

Figure 5.7: Lateral Vibrations vs. RPM plot at constant WOB for experiments on soft sandstone

The data is too scattered to find a general trend for the effect of WOB on lateral vibrations at constant RPM (Fig. 5.8). However, the trend at higher WOB shows a decrease in lateral vibrations. This can be attributed to the stiffening of the pipe due to higher axial load, or to the bent, misaligned pipe bending further towards the sensor side, decreasing the measured deflection.

Figure 5.8: Lateral Vibrations vs. WOB plot at constant RPM for experiments on soft sandstone

5.3 Axial Vibrations

Hard sandstone: Axial vibrations follow a trend similar to torque (Fig. 5.9). With an increase in RPM, axial vibrations increased. Similar to the trend for torque, some cases have abnormally high vibration magnitudes at 300 RPM, which can be attributed to an abnormally hard layer of formation.

Figure 5.9: Axial Vibrations vs. RPM plot at constant WOB for experiments on hard sandstone

Following the general trend, axial vibrations increased with an increase in WOB at constant RPM (Fig. 5.10).

Figure 5.10: Axial Vibrations vs. WOB plot at constant RPM for experiments on hard sandstone

Soft sandstone: With an increase in RPM, axial vibrations increased (Fig. 5.11). However, for the soft sandstone the trend does not resemble the trend in torque. In fact, it more closely resembles the trend in ROP (Fig. B.10). There is a sudden increase in axial vibrations at 700 RPM, after which they decrease gradually.

Figure 5.11: Axial Vibrations vs. RPM plot at constant WOB for experiments on soft sandstone

The data is too scattered to find a general trend for the relationship between WOB and axial vibrations (Fig. 5.12).

Figure 5.12: Axial Vibrations vs. WOB plot at constant RPM for experiments on soft sandstone

5.4 Effect on ROP

Hard sandstone: No significant data is available for analysis.

Soft sandstone: ROP increases with an increase in RPM at constant WOB (Fig. 5.13). It peaks at around 700 RPM and then decreases.

Figure 5.13: ROP vs. RPM plot at constant WOB for experiments on soft sandstone

At higher RPM, higher WOB has an increased effect on ROP (Fig. 5.14). But no dependence on WOB can be seen at lower values of RPM.

Figure 5.14: ROP vs. WOB plot at constant RPM for experiments on soft sandstone
6. Conclusions and Recommendations

6.1 Conclusions

At constant WOB, with an increase in RPM, the increase in lateral vibrations in the hard sandstone is higher than in the soft sandstone, which indicates that lateral vibrations also depend on the type of formation. Axial vibrations are highly dependent on torque: no matter what RPM, WOB or formation type is taken into account, if there is a change in torque, a corresponding change will be observed in the axial vibrations. WOB has less effect on the excitation of axial vibrations in soft rocks and more effect in hard rocks; hence the set-point WOB should be decreased for drilling into hard rocks. An RPM of 700 is the highest RPM which can be used to obtain the highest ROP, without taking the increased vibrations into account. Increasing the RPM further lowers the ROP, which can be attributed to vibrations and velocities so high that the bit does not get enough time to contact and drag the rock surface.

6.2 Recommendations and Future Work

Although the design of the rig was optimized, there is always room for improvement. With an increased budget and fewer design constraints, the rig could be constructed better. The following are recommendations for upgrading the rig. The software program used for the control algorithm can be upgraded to LabVIEW or DASYLab, which are more user-friendly for programming the automation and control architecture. Using LabVIEW or DASYLab will also allow operation on a faster computer with a recent operating system, which will help in faster data collection and storage. A vibrating element can be attached to the travelling block when WOB calibration is performed, so that error due to changing friction values can be negated. The spring couplings attached to the torque sensor can be upgraded to a higher torque rating to prevent failure at higher vibrations. A stable support structure for the torque sensor and laser deflection sensor can be provided. If a bit can be obtained or manufactured with diamond-coated cutters, then a set of experiments can be designed to examine the relationship between RPM, WOB and the depth of cut. Forward and backward whirl characterization experiments could be performed with improved sensors for detection of whirl rates. Hammering action can be included in the drilling action, and its effect on ROP, lateral vibrations, torque and axial vibrations could be analyzed.
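As a compact recap of the data-reduction steps described in Section 4.2, the sketch below applies them to toy arrays: averaging WOB over a run, subtracting the 1.114 inch-pound idle friction torque, re-zeroing lateral displacement against the -0.24165 inch reference, and computing ROP as depth drilled divided by drilling time. The numbers in the example are made up, and the actual analysis was done in the Excel/VBA program, which is not reproduced here.

    from statistics import mean

    FRICTION_TORQUE_INLB = 1.114    # idle-condition torque attributed to swivel/bearing friction
    LATERAL_REF_ZERO_IN = -0.24165  # most negative displacement, used as the reference zero

    def reduce_run(wob_lbf, torque_inlb, lateral_in, depth_in, time_min):
        # Average one run's channels the way Section 4.2 describes.
        wob = mean(wob_lbf)
        torque = mean(torque_inlb) - FRICTION_TORQUE_INLB        # remove idle friction torque
        lateral = [x - LATERAL_REF_ZERO_IN for x in lateral_in]  # shift so all values are positive
        rop = depth_in / time_min                                # ROP = depth drilled / drill time
        return wob, torque, mean(lateral), rop

    # Toy data standing in for the ~100 samples logged per 6-minute run.
    wob, tq, lat, rop = reduce_run([29.1, 30.4, 31.0], [5.2, 5.6, 5.4],
                                   [-0.10, 0.05, 0.12], depth_in=1.8, time_min=6.0)
    print(f"WOB {wob:.1f} lbf, torque {tq:.2f} in-lb, lateral {lat:.3f} in, ROP {rop:.2f} in/min")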

Saturday, October 26, 2019

Wolfgang Amadeus Mozart Essay

Wolfgang Amadeus Mozart was born in Salzburg, Austria, in January 1756 to Anna Maria and Leopold Mozart. He was the second and last of his parents' seven children to survive. His sister, Anna Maria "Nannerl" Mozart, shared some of her brother's triumphs. Mozart was from the start a musical prodigy. He played the clavier and started composing little pieces of music at the age of five. Mozart astonished the world further when he played the keyboard perfectly during a tour with his father. In 1764, at the age of eight, Mozart and his sister, another musical prodigy, came to London with their father. Their music was so highly praised that the king and queen of England, King George III and his Queen, invited the prodigies to play for them at the royal court. There, Mozart composed six sonatas and dedicated them to the Queen. Mozart left London in 1765 after staying over a year. He traveled through the European countries until he finally came to Italy. The Italians especially loved his music. The Pope even declared that Mozart's music was "beautiful". All this occurred when Mozart...

Friday, October 25, 2019

20,000 Leagues Under the Sea

20,000 Leagues Under the Sea

Jules Verne was born in France in 1828 and always had a love for the sea. He once tried to be a sea captain on a boat, but things did not work out. Jules Verne wrote many very famous books, such as Journey to the Center of the Earth, Five Weeks in a Balloon and Around the World in Eighty Days. I have written a review of one of his most famous books, 20,000 Leagues Under the Sea. This book combines adventure, suspense and mystery, throwing in a few pieces of information about life under the sea.

The book opens with some great suspense: it begins with a boat chasing a giant monster that has destroyed some huge unsinkable ships. Every time they get close to this monster, a giant stream of water shoots hundreds of feet into the air, causing the boat to back off. Once in a while the monster disappears from sight for hours. While reading this part of the book, the reader feels like he is on the boat chasing the monster too. Many times the boat gets close enough to the monster to catch it, and thoughts of what the monster could be run through your head like crazy. When they finally make an attempt to capture it, it disappears beneath the depths of the ocean.

One of the most suspenseful and mysterious parts of the book is when the characters are thrown into a big room inside the submarine that seems to have no doors. At this point in the book, the characters have no idea what is going on, and neither does the reader. The only thing that happens during the time in this room is that a man comes in and gives them some food; minutes later they all fall asleep. Why were they put to sleep? Where is this room that seems to have no doors? These are just two of the hundreds of questions going through your head during these couple of chapters of the book. When they wake up, all the lights in the room are off and the submarine is shaking.

Thursday, October 24, 2019

The Difference Between Ancient China and Egypt

Ancient China and Egypt
By: Jessica Isham

Ancient Egypt and Ancient China were two of the biggest civilizations thousands of years ago. Both left their impact on history; they had many similar beliefs and different ways of life. They had different rulers, religions, languages and forms of money.

Both civilizations had different rulers. China was ruled by emperors and empresses, and Egypt ruled its people through pharaohs. In Egypt, people didn't live as long, so they had many more pharaohs. The Chinese, on the other hand, lived a lot longer, so Chinese emperors and empresses were around longer. The religion of ancient Egypt was polytheistic and centered on the divinity of the ruler and the eternity of the soul. The Chinese were polytheistic with the addition of ancestor worship. Over time, these beliefs were sometimes blended with Taoism, Buddhism or Confucianism.

They also had different beliefs. Egyptians believed that when people passed, if they were preserved then they might have an afterlife. The Egyptians would put the mummified dead in solid gold coffins and fill the burial room with bizarre treasures. China believed in the burial method as well: they would bury people in the ground and pour water on them, then put them in a hanging coffin. The Chinese also believed in cremation, where they burn the body into ashes. The ancient Egyptians believed that most non-physical ailments were caused by spells and curses; treatments included amulets, potions and surgery, depending on what was wrong. Chinese medicine was based on the idea of the balance of the energies yin and yang. Illness was the result of an imbalance and was treated with herbs, acupuncture and exercises.

The Chinese were mainly ahead of the rest of their time period. They used cowrie shells and metal beans for money. They also had paper money and coins. The Egyptians traded goods and services. The two civilizations created their own languages. Egypt had hieroglyphs dated from 3400 BC. Hieroglyphs are a formal writing system that contains a combination of logographic and alphabetic elements. China had a writing system called Man'yogana. Man'yogana is an ancient writing system that employs Chinese characters to represent the Japanese language.

Enron: The Smartest Guys in the Room Essay

Enron was involved in America's largest corporate bankruptcy. It is a story about people, and in reality it is a tragedy. Enron made its stock skyrocket through unethical means, while in reality the company kept losing money. The primary values operating among the traders were greed, money, and making profits under any circumstances. The traders thought that a good trader is a creative trader, and the creative trader can find any arbitrage opportunity. An arbitrage opportunity was defined by the traders as the opportunity to make abnormal profits. The traders pushed the price of electricity through the roof at the consumers' expense. Traders discovered that they could create artificial shortages of electrical power so they could push the price of energy higher. With this strategy, the west coast traders were able to make almost 2 billion dollars for Enron.

The traders never stepped back and asked themselves whether what they were doing was ethical; whether it was in their long-term interest; whether it would help them if they totally defrauded California; whether it advanced their goal of nationwide deregulation. Instead, they exploited every loophole they could to profit from California's misery. It was revealed in court that the traders knew they were doing something wrong. The traders who were not comfortable with Enron's style had only two options: they could protect themselves from the guilt by leaving the company, or stay in the game and blindly follow the orders from the authorities. Those traders would not ask any questions, because they were afraid that they would only confirm what they suspected to be true. Therefore, they tried to protect themselves from remorse.

We need to ask what motivated the traders to behave this way. It was the vision of fat bonuses and Enron's ability to exploit the darker side of the traders. The traders lost their sense of morality. Once the traders accepted the idea of inhumanity, it became acceptable for them to continue their unethical behavior. The moral compass is our natural feeling that lets people know what is right and wrong and how they should behave. If the working environment does not have moral standards and the individual is not strong enough to step aside, he or she will be dragged down and lose their moral compass. Some people who lose their moral compasses might not feel any responsibility for their actions, because a higher authority approved those actions. The traders felt no responsibility on their account and accepted their unethical behavior because they had approval from their CEO, Jeffrey Skilling.

When I used to work in the banking industry, I had an opportunity to see how people can change. The bonus and profit involvement was not at all comparable to that of the traders at Enron, but the principle was the same. Once there was an exercise of power from high-level management, a threat, or a reward, people were able to change their behavior drastically. They did not care about clients' money, property, and well-being, because they were threatened with losing their jobs or blinded by bonuses and visions of career growth. I was always curious whether they would do the same thing to their family members. How would they behave if it was not a client sitting in front of them but their mom or dad? Would they still try to convince them to close some dirty deals by using the lying phrases that they were taught by their supervisors?
I could never understand how it is possible that some people are able to change their face so rapidly without any shame or guilt. I think everybody should treat people the same way they want to be treated. I did not care about the pay cut I had to take in my new job, as long as I did not need to be involved in an unethical working environment like the banking industry. It was my worst working experience ever, and in the future I will do everything I can to avoid working for an industry without moral standards. There is only one circumstance that might cause me to lose my moral compass, and that would be if somebody hurt a loved one.

John Locke based his theory on moral rights. People are free and equal, and everybody owns their own body and labor. People own anything produced by their labor. However, people agree to form a government to protect their rights, liberty and property, which would otherwise be insecure and unsafe. Under Locke's theory, Enron should not have been allowed to be involved in deregulation, because government should be there to protect people's property and rights. If the government had stayed involved in electric power regulation in California, Enron would not have so easily ripped California off for $30 billion. The government should also have protected the people who invested in Enron, especially employees' 401(k) plans. Locke's natural rights are negative rights, and for Locke negative rights do not conflict with positive rights. Those rights imply that the market should be free, which can cause inequality between people. For example, large groups of society will stay poor compared to other groups that grow even richer.

Adam Smith's view of the free market derives from a utilitarian argument. The greatest benefits would be produced by a free market and private property. Buyers will look for the lowest price possible, and producers will sell to buyers anything they want at the lowest possible price. Market competition would drive the self-interested individuals, which would serve society. Enron created fake shortages of electrical power, and since there were not many suppliers to create market competition, Enron could boost the price of its stock and manipulate the stock market. The criticism of Smith's argument is the fact that his vision does not account for monopoly companies. The economist Keynes argued against Smith's invisible-hand theory. He claimed that without government involvement, demand might not be high enough to absorb the supply. This approach would help avoid unemployment and depression, though we should not forget that government spending might not cure high unemployment but instead create inflation.

Marx argued that capitalism concentrates industrial power in the few individuals who organize workers for mass production. This can cause production surpluses and economic depression, and the replacement of workers by machines can create unemployment. Property should serve the needs of all society. Social classes were determined by the way society organized its workers. Enron would never have been able to operate under Marx's socialism: the company would have been owned by the government, and if there were any discrepancy, it would never have come out to the public.
The state should create a mixed economy that retains private property and the market system, with government policies remedying any deficiencies. The intellectual property system should tend more towards Locke's utilitarian system than Marx's socialist system. There should be a healthy mix of the philosophies and political views of Locke, Marx, Smith, and Keynes. We should follow Smith and Locke in their low level of government interaction, which would keep competition between businesses strong and benefit society. Keynes's ideas would balance the supply-demand equilibrium, unemployment and inflation. And Marx's vision would decrease the gap between social classes and provide support for retirees and the disabled members of society.

Wednesday, October 23, 2019

Role of Computer in Daily Life

Financial Crises and Bank Liquidity Creation

Allen N. Berger† and Christa H. S. Bouwman‡

October 2008

Financial crises and bank liquidity creation are often connected. We examine this connection from two perspectives. First, we examine the aggregate liquidity creation of banks before, during, and after five major financial crises in the U.S. from 1984:Q1 to 2008:Q1. We uncover numerous interesting patterns, such as a significant build-up or drop-off of "abnormal" liquidity creation before each crisis, where "abnormal" is defined relative to a time trend and seasonal factors. Banking and market-related crises differ in that banking crises were preceded by abnormal positive liquidity creation, while market-related crises were generally preceded by abnormal negative liquidity creation. Bank liquidity creation has both decreased and increased during crises, likely both exacerbating and ameliorating the effects of crises. Off-balance sheet guarantees such as loan commitments moved more than on-balance sheet assets such as mortgages and business lending during banking crises. Second, we examine the effect of pre-crisis bank capital ratios on the competitive positions and profitability of individual banks during and after each crisis. The evidence suggests that high capital served large banks well around banking crises – they improved their liquidity creation market share and profitability during these crises and were able to hold on to their improved performance afterwards. In addition, high-capital listed banks enjoyed significantly higher abnormal stock returns than low-capital listed banks during banking crises. These benefits did not hold, or held to a lesser degree, around market-related crises and in normal times. In contrast, high capital ratios appear to have helped small banks improve their liquidity creation market share during banking crises, market-related crises, and normal times alike, and the gains in market share were sustained afterwards. Their profitability improved during two crises and subsequent to virtually every crisis. Similar results were observed during normal times for small banks.

† University of South Carolina, Wharton Financial Institutions Center, and CentER – Tilburg University. Contact details: Moore School of Business, University of South Carolina, 1705 College Street, Columbia, SC 29208. Tel: 803-576-8440. Fax: 803-777-6876. E-mail: [email protected].
‡ Case Western Reserve University, and Wharton Financial Institutions Center. Contact details: Weatherhead School of Management, Case Western Reserve University, 10900 Euclid Avenue, 362 PBL, Cleveland, OH 44106. Tel.: 216-368-3688. Fax: 216-368-6249. E-mail: christa.[email protected].

Keywords: Financial Crises, Liquidity Creation, and Banking. JEL Classification: G28 and G21.

The authors thank Asani Sarkar, Bob DeYoung, Peter Ritchken, Greg Udell, and participants at presentations at the Summer Research Conference 2008 in Finance at the ISB in Hyderabad, the International Monetary Fund, the University of Kansas' Southwind Finance Conference, and Erasmus University for useful comments.

Financial Crises and Bank Liquidity Creation

1. Introduction

Over the past quarter century, the U.S. has experienced a number of financial crises. At the heart of these crises are often issues surrounding liquidity provision by the banking sector and financial markets (e.g., Acharya, Shin, and Yorulmazer 2007).
For example, in the current subprime lending crisis, liquidity seems to have dried up as banks seem less willing to lend to individuals, firms, other banks, and capital market participants, and loan securitization appears to be significantly depressed. This behavior of banks is summarized by the Economist: "Although bankers are always stingier in a downturn, [...] lots of banks said they had also cut back lending because of a slide in their current or expected capital and liquidity."1

The practical importance of liquidity during crises is buttressed by financial intermediation theory, which indicates that the creation of liquidity is an important reason why banks exist.2 Early contributions argue that banks create liquidity by financing relatively illiquid assets such as business loans with relatively liquid liabilities such as transactions deposits (e.g., Bryant 1980, Diamond and Dybvig 1983). More recent contributions suggest that banks also create liquidity off the balance sheet through loan commitments and similar claims to liquid funds (e.g., Holmstrom and Tirole 1998, Kashyap, Rajan, and Stein 2002).3 The creation of liquidity makes banks fragile and susceptible to runs (e.g., Diamond and Dybvig 1983, Chari and Jagannathan 1988), and such runs can lead to crises via contagion effects. Bank liquidity creation can also have real effects, in particular if a financial crisis ruptures the creation of liquidity (e.g., Dell'Ariccia, Detragiache, and Rajan 2008).4 Exploring the relationship between financial crises and bank liquidity creation can thus yield potentially interesting economic insights and may have important policy implications.

The goals of this paper are twofold. The first is to examine the aggregate liquidity creation of banks around five financial crises in the U.S. over the past quarter century.5 The crises include two banking crises (the credit crunch of the early 1990s and the subprime lending crisis of 2007 – ?) and three crises that can be viewed as primarily market-related (the 1987 stock market crash, the Russian debt crisis plus the Long-Term Capital Management meltdown in 1998, and the bursting of the dot.com bubble plus the September 11 terrorist attack of the early 2000s). This examination is intended to shed light on whether there are any connections between financial crises and aggregate liquidity creation, and whether these vary based on the nature of the crisis (i.e., banking versus market-related crisis).

1 "The credit crisis: Financial engine failure" – The Economist, February 7, 2008.
2 According to the theory, another central role of banks in the economy is to transform credit risk (e.g., Diamond 1984, Ramakrishnan and Thakor 1984, Boyd and Prescott 1986). Recently, Coval and Thakor (2005) theorize that banks may also arise in response to the behavior of irrational agents in financial markets.
3 James (1981) and Boot, Thakor, and Udell (1991) endogenize the loan commitment contract due to informational frictions. The loan commitment contract is subsequently used in Holmstrom and Tirole (1998) and Kashyap, Rajan, and Stein (2002) to show how banks can provide liquidity to borrowers.
4 Acharya and Pedersen (2005) show that liquidity risk also affects the expected returns on stocks.
A good understanding of the behavior of bank liquidity creation around financial crises is also important to shed light on whether banks create "too little" or "too much" liquidity, and whether bank behavior exacerbates or ameliorates the effects of crises. We document the empirical regularities related to these issues, so as to raise additional interesting questions for further empirical and theoretical examinations.

The second goal is to study the effect of pre-crisis equity capital ratios on the competitive positions and profitability of individual banks around each crisis. Since bank capital affects liquidity creation (e.g., Diamond and Rajan 2000, 2001, Berger and Bouwman forthcoming), it is likely that banks with different capital ratios behave differently during crises in terms of their liquidity creation responses. Specifically, we ask: are high-capital banks able to gain market share in terms of liquidity creation at the expense of low-capital banks during a crisis, and does such enhanced market share translate into higher profitability? If so, are the high-capital banks able to sustain their improved competitive positions after the financial crisis is over?

The recent acquisitions of Countrywide, Bear Stearns, and Washington Mutual provide interesting case studies in this regard. All three firms ran low on capital and had to be bailed out by banks with stronger capital positions. Bank of America (Countrywide's acquirer) and J.P. Morgan Chase (acquirer of Bear Stearns and Washington Mutual's banking operations) had capital ratios high enough to enable them to buy their rivals at a small fraction of what they were worth a year before, thereby gaining a potential competitive advantage.6 The recent experience of IndyMac Bank provides another interesting example. The FDIC seized IndyMac Bank after it suffered substantive losses and depositors had started to run on the bank. The FDIC intends to sell the bank, preferably as a single entity but if that does not work, the bank will be sold off in pieces. Given the way the regulatory approval process for bank acquisitions works, it is likely that the acquirer(s) will have a strong capital base.7

A financial crisis is a natural event to examine how capital affects the competitive positions of banks. During "normal" times, capital has many effects on the bank, some of which counteract each other, making it difficult to learn much. For example, capital helps the bank cope more effectively with risk,8 but it also reduces the value of the deposit insurance put option (Merton 1977).

5 Studies on the behavior of banks around financial crises have typically focused on commercial and real estate lending (e.g., Berger and Udell 1994, Hancock, Laing, and Wilcox 1995, Dell'Ariccia, Igan, and Laeven 2008). We focus on the more comprehensive notion of bank liquidity creation.
6 On Sunday, March 16, 2008, J.P. Morgan Chase agreed to pay $2 a share to buy all of Bear Stearns, less than one-tenth of the firm's share price on Friday and a small fraction of the $170 share price a year before. On March 24, 2008, it increased its bid to $10, and completed the transaction on May 30, 2008. On January 11, Bank of America announced it would pay $4 billion for Countrywide, after Countrywide's market capitalization had plummeted 85% during the preceding 12 months. The transaction was completed on July 1, 2008. After a $16.4 billion ten-day bank "walk", Washington Mutual was placed into the receivership of the FDIC on September 25, 2008. J.P. Morgan Chase purchased the banking business for $1.9 billion and re-opened the bank the next day. On September 26, 2008, the holding company and its remaining subsidiary filed for bankruptcy. Washington Mutual, the sixth-largest bank in the U.S. before its collapse, is the largest bank failure in U.S. financial history.
7 After peaking at $50.11 on May 8, 2006, IndyMac's shares lost 87% of their value in 2007 and another 95% in 2008. Its share price closed at $0.28 on July 11, 2008, the day before it was seized by the FDIC.
8 There are numerous papers that argue that capital enhances the risk-absorption capacity of banks (e.g., Bhattacharya and Thakor 1993, Repullo 2004, Von Thadden 2004).
During a crisis, risks become elevated and the risk-absorption capacity of capital becomes paramount. Banks with high capital, which are better buffered against the shocks of the crisis, may thus gain a potential advantage.

To examine the behavior of bank liquidity creation around financial crises, we calculate the amount of liquidity created by the banking sector using Berger and Bouwman's (forthcoming) preferred liquidity creation measure. This measure takes into account the fact that banks create liquidity both on and off the balance sheet and is constructed using a three-step procedure. In the first step, all bank assets, liabilities, equity, and off-balance sheet activities are classified as liquid, semi-liquid, or illiquid. This is done based on the ease, cost, and time for customers to obtain liquid funds from the bank, and the ease, cost, and time for banks to dispose of their obligations in order to meet these liquidity demands. This classification process uses information on both product category and maturity for all activities other than loans; due to data limitations, loans are classified based solely on category ("cat"). Thus, residential mortgages are classified as more liquid than business loans regardless of maturity because it is generally easier to securitize and sell such mortgages than business loans. In the second step, weights are assigned to these activities. The weights are consistent with the theory in that maximum liquidity is created when illiquid assets (e.g., business loans) are transformed into liquid liabilities (e.g., transactions deposits) and maximum liquidity is destroyed when liquid assets (e.g., treasuries) are transformed into illiquid liabilities (e.g., subordinated debt) or equity. In the third step, a "cat fat" liquidity creation measure is constructed, where "fat" refers to the inclusion of off-balance sheet activities. Although Berger and Bouwman construct four different liquidity creation measures, they indicate that "cat fat" is the preferred measure.9 They argue that to assess the amount of liquidity creation, the ability to securitize or sell a particular loan category is more important than the maturity of those loans, and the inclusion of off-balance sheet activities is critical.

We apply the "cat fat" liquidity creation measure to quarterly data on virtually all U.S. commercial and credit card banks from 1984:Q1 to 2008:Q1. Our measurement of aggregate liquidity creation by banks allows us to examine the behavior of liquidity created prior to, during, and after each crisis.
The popular press has provided anecdotal accounts of liquidity drying up during some financial crises as well as excessive liquidity provision at other times that led to credit expansion bubbles (e.g., the subprime lending crisis). We attempt to give empirical content to these notions of "too little" and "too much" liquidity created by banks. Liquidity creation has quadrupled in real terms over the sample period and appears to have seasonal components (as documented below). Since no theories exist that explain the intertemporal behavior of liquidity creation, we take an essentially empirical approach to the problem and focus on how far liquidity creation lies above or below a time trend and seasonal factors.10 That is, we focus on "abnormal" liquidity creation. The use of this measure rests on the supposition that some "normal" amount of liquidity creation exists, acknowledging that at any point in time, liquidity creation may be "too much" or "too little" in dollar terms.

Our main results regarding the behavior of liquidity creation around financial crises are as follows. First, prior to financial crises, there seems to have been a significant build-up or drop-off of "abnormal" liquidity creation. Second, banking and market-related crises differ in two respects. The banking crises (the credit crunch of 1990-1992 and the current subprime lending crisis) were preceded by abnormal positive liquidity creation by banks, whereas the market-related crises were generally preceded by abnormal negative liquidity creation. In addition, the banking crises themselves seemed to change the trajectory of aggregate liquidity creation, while the market-related crises did not appear to do so. Third, liquidity creation has both decreased during crises (e.g., the 1990-1992 credit crunch) and increased during crises (e.g., the 1998 Russian debt crisis / LTCM bailout). Thus, liquidity creation likely both exacerbated and ameliorated the effects of crises. Fourth, off-balance sheet illiquid guarantees (primarily loan commitments) moved more than semi-liquid assets (primarily mortgages) and illiquid assets (primarily business loans) during banking crises. Fifth, the current subprime lending crisis was preceded by an unusually high positive abnormal amount of aggregate liquidity creation, possibly caused by lax lending standards that led banks to extend increasing amounts of credit and off-balance sheet guarantees. This suggests a possible dark side of bank liquidity creation. While financial fragility may be needed to induce banks to create liquidity (e.g., Diamond and Rajan 2000, 2001), our analysis raises the intriguing possibility that the causality may also be reversed in the sense that too much liquidity creation may lead to financial fragility.

We then turn to the second goal of the paper – examining whether banks' pre-crisis capital ratios affect their competitive positions and profitability around financial crises.

9 Their alternative measures include "cat nonfat," "mat fat," and "mat nonfat." The "nonfat" measures exclude off-balance sheet activities, and the "mat" measures classify loans by maturity rather than by product category.
10 As alternative approaches, we use the dollar amount of liquidity creation per capita and liquidity creation divided by GDP and obtain similar results (see Section 4.2).
To examine the effect on a bank's competitive position, we regress the change in its market share of liquidity creation – measured as the average market share of aggregate liquidity creation during the crisis (or over the eight quarters after the crisis) minus the average market share over the eight quarters before the crisis, expressed as a proportion of the bank's average pre-crisis market share – on its average pre-crisis capital ratio and a set of control variables.11 Since the analyses in the first half of the paper reveal a great deal of heterogeneity in crises, we run these regressions on a per-crisis basis, rather than pooling the data across crises. The control variables include bank size, bank risk, bank holding company membership, local market competition,12 and proxies for the economic circumstances in the local markets in which the bank operates. Moreover, we examine large and small banks as two separate groups since the results in Berger and Bouwman (forthcoming) indicate that the effect of capital on liquidity creation differs across large and small banks.13

One potential concern is that differences in bank capital ratios may simply reflect differences in bank risk. Banks that hold higher capital ratios because their investment portfolios are riskier may not improve their competitive positions around financial crises. Our empirical design takes this into account. The inclusion of bank risk as a control variable is critical and ensures that the measured effect of capital on a bank's market share is net of the effect of risk.

We find evidence that high-capital large banks improved their market share of liquidity creation during the two banking crises, but not during the market-related crises. After the credit crunch of the early 1990s, high-capital large banks held on to their improved competitive positions. Since the current subprime lending crisis was not over at the end of the sample period, we cannot yet tell whether high-capital large banks will also hold on to their improved competitive positions after this crisis. In contrast to the large banks, high-capital small banks seemed to enhance their competitive positions during all crises and held on to their improved competitive positions after the crises as well.

Next, we focus on the effect of pre-crisis bank capital on the profitability of the bank around each crisis. We run regressions that are similar to the ones described above with the change in return on equity (ROE) as the dependent variable. We find that high-capital large banks improved their ROE in those cases in which they enhanced their liquidity creation market share – the two banking crises – and were able to hold on to their improved profitability after the credit crunch. They also increased their profitability after the market-related crises.

11 Defining market share this way is a departure from previous research (e.g., Laeven and Levine 2007), in which market share relates to the bank's weighted-average local market share of total deposits.
12 While our focus is on the change in banks' competitive positions measured in terms of their aggregate liquidity creation market shares, we control for "local market competition" measured as the bank-level Herfindahl index based on local market deposit market shares.
13 Berger and Bouwman use three size categories: large, medium, and small banks. We combine the large and medium bank categories into one "large bank" category.
In contrast, for high-capital small banks, profitability improved during two crises, and subsequent to virtually every crisis.

As an additional analysis, we examine whether the improved competitive positions and profitability of high-capital banks translated into better stock return performance. To perform this analysis, we focus on listed banks and bank holding companies (BHCs). If multiple banks are part of the same listed BHC, their financial statements are added together to create pro-forma financial statements of the BHC. The results confirm the earlier change in performance findings of large banks: listed banks with high capital ratios enjoyed significantly larger abnormal returns than banks with low capital ratios during banking crises, but not during market-related crises. Our results are based on a five-factor asset pricing model that includes the three Fama-French (1993) factors, momentum, and a proxy for the slope of the yield curve.

We also check whether high capital provided similar advantages outside crisis periods, i.e., during "normal" times. We find that large banks with high capital ratios did not enjoy either market share or profitability gains over the other large banks, whereas for small banks, results are similar to the small-bank findings discussed above. Moreover, outside banking crises, high capital was not associated with high stock returns. Combined, the results suggest that high capital ratios serve large banks well, particularly around banking crises. In contrast, high capital ratios appear to help small banks around banking crises, market-related crises, and normal times alike.

The remainder of this paper is organized as follows. Section 2 discusses the related literature. Section 3 explains the liquidity creation measures and our sample based on data of U.S. banks from 1984:Q1 to 2008:Q1. Section 4 describes the behavior of aggregate bank liquidity creation around five financial crises and draws some general conclusions. Section 5 discusses the tests of the effects of pre-crisis capital ratios on banks' competitive positions and profitability around financial crises and "normal" times. This section also examines the stock returns of high- and low-capital listed banking organizations during each crisis and during "normal" times. Section 6 concludes.

2. Related literature

This paper is related to two literatures. The first is the literature on financial crises.14 One strand in this literature has focused on financial crises and fragility. Some papers have analyzed contagion. Contributions in this area suggest that a small liquidity shock in one area may have a contagious effect throughout the economy (e.g., Allen and Gale 1998, 2000). Other papers have focused on the determinants of financial crises and the policy implications (e.g., Bordo, Eichengreen, Klingebiel, and Martinez-Peria 2001, Demirguc-Kunt, Detragiache, and Gupta 2006, Lorenzoni 2008, Claessens, Klingebiel, and Laeven forthcoming). A second strand examines the effect of financial crises on the real sector (e.g., Friedman and Schwartz 1963, Bernanke 1983, Bernanke and Gertler 1989, Dell'Ariccia, Detragiache, and Rajan 2008, Shin forthcoming). These papers find that financial crises increase the cost of financing and reduce credit, which adversely affects corporate investment and may lead to reduced growth and recessions.

14 Allen and Gale (2007) provide a detailed overview of the causes and consequences of financial crises.
That is, financial crises have independent real effects (see Dell'Ariccia, Detragiache, and Rajan 2008). In contrast to these papers, we examine how the amount of liquidity created by the banking sector behaved around financial crises in the U.S., and explore systematic patterns in the data.

The second literature to which this paper is related focuses on the strategic use of leverage in product-market competition for non-financial firms (e.g., Brander and Lewis 1986, Campello 2006, Lyandres 2006). This literature suggests that financial leverage can affect competitive dynamics. While this literature has not focused on banks, we analyze the effects of crises on the competitive positioning and profitability of banks based on their pre-crisis capital ratios. Our hypothesis is that in the case of banks, the competitive implications of capital are likely to be most pronounced during a crisis when a bank's capitalization has a major influence on its ability to survive the crisis, particularly in light of regulatory discretion in closing banks or otherwise resolving problem institutions. Liquidity creation may be a channel through which this competitive advantage is gained or lost.15

15 Allen and Gale (2004) analyze how competition affects financial stability. We reverse the causality and examine the effect of financial crises on competition.

3. Description of the liquidity creation measure and sample

We calculate the dollar amount of liquidity created by the banking sector using Berger and Bouwman's (forthcoming) preferred "cat fat" liquidity creation measure. In this section, we explain briefly what this acronym stands for and how we construct this measure.16 We then describe our sample. All financial values are expressed in real 2007:Q4 dollars using the implicit GDP price deflator.

16 For a more detailed discussion, see Berger and Bouwman (forthcoming).

3.1. Liquidity creation measure

To construct a measure of liquidity creation, we follow Berger and Bouwman's three-step procedure (see Table 1). Below, we briefly discuss these three steps.

In Step 1, we classify all bank activities (assets, liabilities, equity, and off-balance sheet activities) as liquid, semi-liquid, or illiquid. For assets, we do this based on the ease, cost, and time for banks to dispose of their obligations in order to meet these liquidity demands. For liabilities and equity, we do this based on the ease, cost, and time for customers to obtain liquid funds from the bank. We follow a similar approach for off-balance sheet activities, classifying them based on functionally similar on-balance sheet activities. For all activities other than loans, this classification process uses information on both product category and maturity. Due to data restrictions, we classify loans entirely by category ("cat").17

In Step 2, we assign weights to all the bank activities classified in Step 1. The weights are consistent with liquidity creation theory, which argues that banks create liquidity on the balance sheet when they transform illiquid assets into liquid liabilities. We therefore apply positive weights to illiquid assets and liquid liabilities. Following similar logic, we apply negative weights to liquid assets and illiquid liabilities and equity, since banks destroy liquidity when they use illiquid liabilities to finance liquid assets. We use weights of ½ and -½
, because only half of the total amount of liquidity created is attributable to the source or use of funds alone. For example, when $1 of liquid liabilities is used to finance $1 in illiquid assets, liquidity creation equals ½ * $1 + ½ * $1 = $1. In this case, maximum liquidity is created. However, when $1 of liquid liabilities is used to finance $1 in liquid assets, liquidity creation equals ½ * $1 + (-½) * $1 = $0. In this case, no liquidity is created as the bank holds items of approximately the same liquidity as those it gives to the nonbank public. Maximum liquidity is destroyed when $1 of illiquid liabilities or equity is used to finance $1 of liquid assets. In this case, liquidity creation equals -½ * $1 + (-½) * $1 = -$1. An intermediate weight of 0 is applied to semi-liquid assets and liabilities. Weights for off-balance sheet activities are assigned using the same principles.

In Step 3, we combine the activities as classified in Step 1 and as weighted in Step 2 to construct Berger and Bouwman's preferred "cat fat" liquidity creation measure. This measure classifies loans by category ("cat"), while all activities other than loans are classified using information on product category and maturity, and includes off-balance sheet activities ("fat"). Berger and Bouwman construct four liquidity creation measures by alternatively classifying loans by category or maturity, and by alternatively including or excluding off-balance sheet activities. However, they argue that "cat fat" is the preferred measure since for liquidity creation, banks' ability to securitize or sell loans is more important than loan maturity, and banks do create liquidity both on the balance sheet and off the balance sheet. To obtain the dollar amount of liquidity creation at a particular bank, we multiply the weights of ½, -½, or 0, respectively, times the dollar amounts of the corresponding bank activities and add the weighted dollar amounts.

17 Alternatively, we could classify loans by maturity ("mat"). However, Berger and Bouwman argue that it is preferable to classify them by category since for loans, the ability to securitize or sell is more important than their maturity.
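To make the weighting arithmetic concrete, here is a minimal Python sketch of the per-bank "cat fat" calculation. It is illustrative only (the function signature and category totals are our own simplification, not the authors' code), and a real implementation would first map every Call Report item into the liquid / semi-liquid / illiquid classes from Step 1; the -½ weight on liquid off-balance sheet items mirrors the on-balance sheet logic and is our assumption here.

```python
def cat_fat_liquidity_creation(illiquid_assets, semiliquid_assets, liquid_assets,
                               liquid_liabilities, semiliquid_liabilities,
                               illiquid_liabilities_and_equity,
                               illiquid_guarantees, liquid_offbalance):
    """Dollar liquidity creation for one bank under 'cat fat'-style weights:
    +1/2 on illiquid assets, liquid liabilities, and illiquid off-balance
    sheet guarantees; -1/2 on liquid assets, illiquid liabilities/equity,
    and liquid off-balance sheet items; 0 on semi-liquid items."""
    return (0.5 * (illiquid_assets + liquid_liabilities + illiquid_guarantees)
            + 0.0 * (semiliquid_assets + semiliquid_liabilities)
            - 0.5 * (liquid_assets + illiquid_liabilities_and_equity
                     + liquid_offbalance))

# $1 of transactions deposits (liquid liability) funding $1 of business
# loans (illiquid asset): 0.5 * 1 + 0.5 * 1 = $1 of liquidity created.
print(cat_fat_liquidity_creation(1, 0, 0, 1, 0, 0, 0, 0))  # -> 1.0
```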
3.2. Sample description

We include virtually all commercial and credit card banks in the U.S. in our study.18 For each bank, we obtain quarterly Call Report data from 1984:Q1 to 2008:Q1. We keep a bank if it: 1) has commercial real estate or commercial and industrial loans outstanding; 2) has deposits; 3) has an equity capital ratio of at least 1%; and 4) has gross total assets or GTA (total assets plus allowance for loan and lease losses and the allocated transfer risk reserve) exceeding $25 million. We end up with data on 18,134 distinct banks, yielding 907,159 bank-quarter observations over our sample period.

For each bank, we calculate the dollar amount of liquidity creation using the process described in Section 3.1. The amount of liquidity creation and all other financial values are put into real 2007:Q4 dollars using the implicit GDP price deflator. When we explore aggregate bank liquidity creation around financial crises, we focus on the real dollar amount of liquidity creation by the banking sector. To obtain this, we aggregate the liquidity created by all banks in each quarter and end up with a sample that contains 97 inflation-adjusted, quarterly liquidity creation amounts.

In contrast, when we examine how capital affects the competitive positions of banks, we focus on the amount of liquidity created by individual banks around each crisis. Given documented differences between large and small banks in terms of portfolio composition (e.g., Kashyap, Rajan, and Stein 2002, Berger, Miller, Petersen, Rajan, and Stein 2005) and the effect of capital on liquidity creation (Berger and Bouwman forthcoming), we split the sample into large banks (between 330 and 477 observations, depending on the crisis) and small banks (between 5556 and 6343 observations, depending on the crisis), and run all change in market share and profitability regressions separately for these two sets of banks. Large banks have gross total assets (GTA) exceeding $1 billion at the end of the quarter before a crisis and small banks have GTA up to $1 billion at the end of that quarter.19,20

18 Berger and Bouwman (forthcoming) include only commercial banks. We also include credit card banks to avoid an artificial $0.19 trillion drop in bank liquidity creation in the fourth quarter of 2006 when Citibank N.A. moved its credit-card lines to Citibank South Dakota N.A., a credit card bank.
19 As noted before, we combine Berger and Bouwman's large and medium bank categories into one "large bank" category. Recall that all financial values are expressed in real 2007:Q4 dollars.
20 GTA equals total assets plus the allowance for loan and lease losses and the allocated transfer risk reserve. Total assets on Call Reports deduct these two reserves, which are held to cover potential credit losses. We add these reserves back to measure the full value of the loans financed and the liquidity created by the bank on the asset side.
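As a rough illustration of this sample construction, the pandas sketch below builds a tiny synthetic bank-quarter panel, sums per-bank liquidity creation into one aggregate figure per quarter, and applies the $1 billion GTA size split. The column names and numbers are assumptions for illustration, not the authors' data.

```python
import pandas as pd

# Hypothetical bank-quarter panel; 'lc' is dollar liquidity creation from
# a weighting step like the sketch above, 'gta' is gross total assets.
panel = pd.DataFrame({
    "quarter": pd.PeriodIndex(["1984Q1", "1984Q1", "1984Q2", "1984Q2"], freq="Q"),
    "bank_id": [1, 2, 1, 2],
    "lc":  [2.0e9, 3.5e9, 2.1e9, 3.6e9],
    "gta": [8.0e8, 5.0e9, 8.2e8, 5.1e9],
})

# Aggregate liquidity creation of the banking sector: one number per quarter
# (97 quarterly observations over 1984:Q1-2008:Q1 in the paper).
aggregate_lc = panel.groupby("quarter")["lc"].sum()

# Large vs. small banks at the $1 billion GTA threshold; the paper uses GTA
# in the quarter before each crisis, simplified here to the current quarter.
panel["size_class"] = panel["gta"].gt(1e9).map({True: "large", False: "small"})
```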
4. The behavior of aggregate bank liquidity creation around financial crises

This section focuses on the first goal of the paper – examining the aggregate liquidity creation of banks across five financial crises in the U.S. over the past quarter century. The crises include the 1987 stock market crash, the credit crunch of the early 1990s, the Russian debt crisis plus Long-Term Capital Management (LTCM) bailout of 1998, the bursting of the dot.com bubble and the Sept. 11 terrorist attacks of the early 2000s, and the current subprime lending crisis. We first provide summary statistics and explain our empirical approach. We then discuss alternative measures of abnormal liquidity creation. Next, we describe the behavior of bank liquidity creation before, during, and after each crisis. Finally, we draw some general conclusions from these results.

4.1. Summary statistics and empirical approach

Figure 1 Panel A shows the dollar amount of liquidity created by the banking sector, calculated using the "cat fat" liquidity creation measure over our sample period. As shown, liquidity creation has increased substantially over time: it has more than quadrupled from $1.369 trillion in 1984:Q1 to $5.806 trillion in 2008:Q1 (in real 2007:Q4 dollars).

We want to examine whether liquidity creation by the banking sector is "high," "low," or at a "normal" level around financial crises. Since no theories exist that explain the intertemporal behavior of liquidity creation or generate numerical estimates of "normal" liquidity creation, we need a reasonable empirical approach. At first blush, it may seem that we could simply calculate the average amount of bank liquidity creation over the entire sample period and view amounts above this sample average as "high" and amounts below the average as "low." However, Figure 1 Panel A clearly shows that this approach would cause us to classify the entire second half of the sample period (1996:Q1 – 2008:Q1) as "high" and the entire first half of the sample period (1984:Q1 – 1995:Q4) as "low." We therefore do not use this approach.

The approach we take is aimed at calculating the "abnormal" amount of liquidity created by the banking sector based on a time trend. It focuses on whether liquidity creation lies above or below this time trend, and also deseasonalizes the data to ensure that we do not base our conclusions on mere seasonal effects. We detrend and deseasonalize the data by regressing the dollar amount of liquidity creation on a time index and three quarterly dummies. The residuals from this regression measure the "abnormal" dollar amount of liquidity creation in a particular quarter. That is, they measure how far (deseasonalized) liquidity creation lies above or below the trend line. If abnormal liquidity creation is greater than (smaller than) $0, the dollar amount of liquidity created by the banking sector lies above (below) the time trend. If abnormal liquidity creation is high (low) relative to the time trend and seasonal factors, we will interpret this as liquidity creation being "too high" ("too low"). Figure 1 Panel B shows abnormal liquidity creation over time. The amount of liquidity created by the banking sector was high (yet declining) in the mid-1980s, low in the mid-1990s, and high (and mostly rising) in the most recent years.
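A minimal sketch of this detrend-and-deseasonalize step, assuming a quarterly series like the one aggregated above: regress liquidity creation on a linear time index and three quarterly dummies, and keep the residuals as "abnormal" liquidity creation. The synthetic series below merely stands in for the paper's 97 quarterly observations.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic stand-in for aggregate liquidity creation, 1984:Q1-2008:Q1.
quarters = pd.period_range("1984Q1", "2008Q1", freq="Q")   # 97 quarters
rng = np.random.default_rng(1)
lc = 1.4e12 + 4.5e10 * np.arange(len(quarters)) + rng.normal(0, 5e10, len(quarters))
aggregate_lc = pd.Series(lc, index=quarters)

t = np.arange(len(aggregate_lc))                     # linear time trend
q = aggregate_lc.index.quarter                       # calendar quarter 1..4
season = np.column_stack([(q == k).astype(float) for k in (1, 2, 3)])
X = sm.add_constant(np.column_stack([t, season]))    # constant, trend, Q1-Q3 dummies

abnormal_lc = sm.OLS(aggregate_lc.values, X).fit().resid
# abnormal_lc > 0: liquidity creation above its trend/seasonal norm; < 0: below.
```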
4.2. Alternative measures of abnormal liquidity creation

We considered several alternative approaches to measuring abnormal liquidity creation. One possibility is to scale the dollar amount of liquidity creation by total population. The idea behind this approach is that a "normal" amount of liquidity creation may exist in per capita terms. The average amount of liquidity creation per capita over our sample period could potentially serve as the "normal" amount and deviations from this average would be viewed as abnormal. To calculate per capita liquidity creation we obtain annual U.S. population estimates from the U.S. Census Bureau. Figure 2 Panel A shows per capita liquidity creation over time. The picture reveals that per capita liquidity creation more than tripled from $5.8K in 1984:Q1 to $18.8K in 2008:Q1. Interestingly, the picture looks very similar to the one shown in Figure 1 Panel A, perhaps because the annual U.S. population growth rate is low. For reasons similar to those in our earlier analysis, we calculate abnormal per capita liquidity creation by detrending and deseasonalizing the data like we did in the previous section. Figure 2 Panel B shows abnormal per capita liquidity creation over time.

Another possibility is to scale the dollar amount of liquidity creation by GDP. Since liquidity creation by banks may causally affect GDP, this approach seems less appropriate. Nonetheless, we show the results for completeness. Figure 2 Panel C shows the dollar amount of liquidity creation divided by GDP. The picture reveals that bank liquidity creation has increased from 19.9% of GDP in 1984:Q1 to 40.4% of GDP in 2008:Q1. While liquidity creation more than quadrupled over the sample period, GDP doubled. Importantly, the picture looks similar to the one shown in Figure 1 Panel A. Again, for reasons similar to those in our earlier analysis, we detrend and deseasonalize the data to obtain abnormal liquidity creation divided by GDP. Figure 2 Panel D shows abnormal liquidity creation divided by GDP over time.

Since these alternative approaches yield results that are similar to those shown in Section 4.1, we focus our discussions on the abnormal amount of liquidity creation (rather than the abnormal amount of per capita liquidity creation or the abnormal amount of liquidity creation divided by GDP) around financial crises.

4.3. Abnormal bank liquidity creation before, during, and after five financial crises

We now examine how abnormal bank liquidity creation behaved before, during, and after five financial crises. In all cases, the pre-crisis and post-crisis periods are defined to be eight quarters long.21 The one exception is that we do not examine abnormal bank liquidity creation after the current subprime lending crisis, since this crisis was still ongoing at the end of the sample period. Figure 3 Panels A – E show the graphs of the abnormal amount of liquidity creation for the five crises. This subsection is a fact-finding effort and largely descriptive. In Section 4.5, we will combine the evidence gathered here and interpret it to draw some general conclusions.

21 As a result of our choice of two-year pre-crisis and post-crisis periods, the post-Russian debt crisis period overlaps with the bursting of the dot.com bubble, and the pre-dot.com bubble period overlaps with the Russian debt crisis. For these two crises, we redo our analyses using six-quarter pre-crisis and post-crisis periods and obtain results that are qualitatively similar to the ones documented here.

Financial crisis #1: Stock market crash (1987:Q4)

On Monday, October 19, 1987, the stock market crashed, with the S&P500 index falling about 20%. During the years before the crash, the level of the stock market had increased dramatically, causing some concern that the market had become overvalued.22 A few days before the crash, two events occurred that may have helped precipitate the crash: 1) legislation was enacted to eliminate certain tax benefits associated with financing mergers; and 2) information was released that the trade deficit was above expectations. Both events seemed to have added to the selling pressure, and a record trading volume on Oct. 19, in part caused by program trading, overwhelmed many systems.

22 E.g., "Raging bull, stock market's surge is puzzling investors: When will it end?" on page 1 of the Wall Street Journal, Jan. 19, 1987.

Figure 3 Panel A shows abnormal bank liquidity creation before, during, and after the stock market crash. Although this financial crisis seems to have originated in the stock market rather than the banking system, it is clear from the graph that abnormal liquidity creation by banks was high ($0.5 trillion above the time trend) two years before the crisis. It had already dropped substantially before the crisis and continued to drop until well after the crisis, but was still above the time trend even a year after the crisis.

Financial crisis #2: Credit crunch (1990:Q1 – 1992:Q4)

During the first three years of the 1990s, bank commercial and industrial lending declined in real terms, particularly for small banks and for small loans (see Berger, Kashyap, and Scalise 1995, Table 8, for details). The ascribed causes of the credit crunch include a fall in bank capital from the loan loss experiences of the late 1980s (e.g., Peek and Rosengren 1995), the increases in bank leverage requirements and implementation of Basel I risk-based capital standards during this time period (e.g.,
Berger and Udell 1994, Hancock, Laing, and Wilcox 1995, Thakor 1996), an increase in supervisory toughness evidenced in worse examination ratings for a given bank condition (e.g., Berger, Kyle, and Scalise 2001), and reduced loan demand because of macroeconomic and regional recessions (e.g., Bernanke and Lown 1991). To some extent, the research supports virtually all of these hypotheses.

Figure 3 Panel B shows how abnormal liquidity creation behaved before, during, and after the credit crunch. The graph shows that liquidity creation was above the time trend before the crisis, but declining. After a temporary increase, it dropped markedly during the crisis by roughly $0.6 trillion, and the decline even extended a bit beyond the crunch period. After having reached a noticeably low level in the post-crunch period, liquidity creation slowly started to bottom out. This evidence suggests that the banking sector created (slightly) positive abnormal liquidity before the crisis, but created significantly negative abnormal liquidity during and after the crisis, representing behavior by banks that may have further fueled the crisis.

Financial crisis #3: Russian debt crisis / LTCM bailout (1998:Q3 – 1998:Q4)

Since its inception in March 1994, hedge fund Long-Term Capital Management ("LTCM") followed an arbitrage strategy that was avowedly "market neutral," designed to make money regardless of whether prices were rising or falling. When Russia defaulted on its sovereign debt on August 17, 1998, investors fled from other government paper to the safe haven of U.S. treasuries. This flight to liquidity caused an unexpected widening of spreads on supposedly low-risk portfolios. By the end of August 1998, LTCM's capital had dropped to $2.3 billion, less than 50% of its December 1997 value, with assets standing at $126 billion. In the first three weeks of September, LTCM's capital dropped further to $600 million without shrinking the portfolio. Banks began to doubt its ability to meet margin calls. To prevent a potential systemic meltdown triggered by the collapse of the world's largest hedge fund, the Federal Reserve Bank of New York organized a $3.6 billion bail-out by LTCM's major creditors on September 23, 1998. In 1998:Q4, many large banks had to take substantial write-offs as a result of losses on their investments.

Figure 3 Panel C shows abnormal liquidity creation around the Russian debt crisis and LTCM bailout. The pattern shown in the graph is very different from the ones we have seen so far. Liquidity creation was abnormally negative before the crisis, but increasing. Liquidity creation increased further during the crisis, countercyclical behavior by banks that may have alleviated the crisis, and continued to grow after the crisis. This suggests that liquidity creation may have been too low entering the crisis and returned to normal levels a few quarters after the end of the crisis.

Financial crisis #4: Bursting of the dot.com bubble and Sept. 11 terrorist attack (2000:Q2 – 2002:Q3)

The dot.com bubble was a speculative stock price bubble that was built up during the mid to late 1990s. During this period, many internet-based companies, commonly referred to as "dot.coms," were founded. Rapidly increasing stock prices and widely available venture capital created an environment in which
Rapidly increasing stock prices and widely available venture capital created an environment in which 15 any of these companies seemed to focus largely on increasing market share. At the height of the boom, it seemed possible for dot. com’s to go public and raise substantial amounts of money even if they had never earned any profits, and in some cases had not even earned any revenues. On March 10, 2000, the Nasdaq composite ind ex peaked at more than double its value just a year before. After the bursting of the bubble, many dot. com’s ran out of capital and were acquired or filed for bankruptcy (examples of the latter include WorldCom and Pets. com). The U. S. economy started to slow down and business nvestments began falling. The September 11, 2001 terrorist attacks may have exacerbated the stock market downturn by adversely affecting investor sentiment. By 2002:Q3, the Nasdaq index had fallen by 78%, wiping out $5 trillion in market value of mostly technology firms. Figure 3 Panel D shows how abnormal liquidity creation behaved before, during, and after the bursting of the dot. com bubble and the Sept. 11 terrorist attacks. The graph shows that before the crisis period, liquidity creation moved from displaying a negative abnormal value to displaying a positive abnormal value at the time the bubble burst.During the crisis, liquidity creation declined somewhat and hovered around the time trend by t he time the crisis was over. After the crisis, liquidity creation slowly started to pick up again. Financial crisis #5: Subprime lending crisis (2007:Q3 – ? ) The subprime lending crisis has been characterized by turmoil in financial markets as banks have experienced difficulty in selling loans in the syndicated loan market and in securitizing loans. Banks also seem to be reluctant to provide credit: they appear to have cut back their lending to firms and individuals, and have also been reticent to lend to each other.Risk premia have increased as evidenced by a higher premium over treasuries for mortgages and other bank products. Some banks have experienced massive losses in capital. For example, Citicorp had to raise about $40 billion in equity to cover subprime lending and other losses. Massive losses at Countrywide resulted in a takeover by Bank of America. Bear Stearns suffered a fatal loss in confidence and was sold at a fire-sale price to J. P. Morgan Chase with the Fed eral Reserve guaranteeing $29 billion in potential losses. Washington Mutual, the sixth-largest bank, became the biggest bank failure in the U.S. financial history. J. P. Morgan Chase purchased the banking business while the rest of the organization filed for bankruptcy. The Federal Reserve intervened in some 16 unprecedented ways in the market, extending its safety-net privileges to investment banks. In addition to lowering the discount rate sharply, it also began holding mortgage-backed securities and lending directly to investment banks. Subsequently, IndyMac Bank was seized by the FDIC after it suffered substantive losses and depositors had started to run on the bank. This failure is expected to cost the FDIC $4 billion – $8 billion.The FDIC intends to sell the bank. Congress also recently passed legislation to provide Freddie Mac and Fannie Mae with unlimited credit lines and possible equity injections to prop up these troubled organizations, which are considered too big to fail. Figure 3 Panel E shows abnormal liquidity creation before and during the first part of the subprime lending crisis. 
The graph suggests that liquidity creation displayed a high positive abnormal value that was increasing before the crisis hit, with abnormal liquidity creation around $0. 0 trillion entering the crisis, decreasing substantially after the crisis hit. A striking fact about this crisis compared to the other crises is the relatively high build-up of positive abnormal liquidity creation prior to the crisis. 4. 4. Behavior of some liquidity creation components around the two banking crises It is of particular interest to examine the behavior of some selected components of liquidity creation around the banking crises. As discussed above (Section 4. 3), numerous papers have focused on the credit crunch, examining lending behavior.These studies generally find that mortgage and business lending started to decline significantly during the crisis. Here we contrast the cr edit crunch experience with the current subprime lending crisis, and expand the components of liquidity creation that are examined. Rather than focusing on mortgages and business loans, we examine the two liquidity creation components that include these items – semi-liquid assets (primarily mortgages) and illiquid assets (primarily business loans). In addition, we analyze two other components of liquidity creation.We examine the behavior of liquid assets to address whether a decrease (increase) in semi-liquid assets and / or illiquid assets tended to be accompanied by an increase (decrease) in liquid assets. We also analyze the behavior of illiquid off-balance sheet guarantees (primarily loan commitments) to address whether illiquid assets and illiquid off-balance sheet guarantees move in tandem around banking crises and whether changes in one are more pronounced than the other. Figure 4 Panels A and B show the abnormal amount of four liquidity creation components around 17 h e credit crunch and the subprime lending crisis, respectively. For ease of comparison, the components are not weighted by weights of +? (illiquid assets and illiquid off-balance sheet guarantees), 0 (semiliquid assets), and –? (liquid assets). The abnormal amounts are obtained by detrending and deseasonalizing each liquidity creation component. Figure 4 Panel A shows that abnormal semi-liquid assets decreased slightly during the credit crunch, while abnormal illiquid assets and especially abnormal illiquid guarantees dropped significantly and turned negative.This picture suggests that these components fell increasingly below the trendline. The dramatic drop in abnormal illiquid assets and abnormal illiquid off-balance sheet guarantees (which carry positive weights) helps explain the significant decrease in abnormal liquidity creation during the credit crunch shown in Figure 3 Panel B. Figure 4 Panel B shows that these four components of abnormal liquidity creation were above the trendline before and during the subprime lending crisis.Illiquid assets and especially off-balance sheet guarantees move further and further above the trendline before the crisis, which helps explain the dramatic buildup in abnormal liquidity creation before the subprime lending crisis shown in Figure 3 Panel E. All four components of abnormal liquidity creation continued to increase at the beginning of the crisis. 
After the first quarter of the crisis, illiquid off-balance sheet guarantees showed a significant decrease, which helps explain the decrease in abnormal liquidity creation in Figure 3 Panel E. On the balance sheet, during the final quarter of the sample period (the third quarter of the crisis), abnormal semi-liquid and illiquid assets declined, while abnormal liquid assets increased.

4.5. General conclusions from the results

What do we learn from the various graphs in the previous analyses that indicate intertemporal patterns of liquidity creation and selected liquidity creation components around five financial crises? First, across all the financial crises, there seems to have been a significant build-up or drop-off of abnormal liquidity creation before the crisis. This is consistent with the notion that crises may be preceded by either "too much" or "too little" liquidity creation, although at this stage we offer this as tentative food for thought rather than as a conclusion.

Second, there seem to be two main differences between banking crises and market-related crises. The banking crises, namely the credit crunch and the subprime lending crisis, were both preceded by positive abnormal liquidity creation by banks, while two out of the three market-related crises were preceded by negative abnormal liquidity creation. In addition, during the two banking crises, the crises themselves seem to have exerted a noticeable influence on the pattern of aggregate liquidity creation by banks. Just prior to the credit crunch, abnormal liquidity creation was positive and had started to trend upward, but reversed course and plunged quite substantially to become negative during and after the crisis. Just prior to the subprime lending crisis, aggregate liquidity creation was again abnormally positive and trending up, but began to decline during the crisis, although it remains abnormally high by historical standards. The other crises, which are less directly related to banks, did not seem to exhibit such noticeable impact.

Third, liquidity creation has both decreased during crises (e.g., the 1990-1992 credit crunch) and increased during crises (e.g., the 1998 Russian debt crisis / LTCM bailout). Thus, liquidity creation likely both exacerbated and ameliorated the effects of crises.

Fourth, off-balance sheet illiquid guarantees (primarily loan commitments) moved more than semi-liquid assets (primarily mortgages) and illiquid assets (primarily business loans) during banking crises.

Fifth, while liquidity creation is generally thought of as a financial intermediation service with positive economic value at the level of the individual bank and individual borrower (see Diamond and Rajan 2000, 2001), our analysis hints at the existence of a "dark side" to liquidity creation. Specifically, it may be more than coincidence that the subprime lending crisis was preceded by a very high level of positive abnormal aggregate liquidity creation by banks relative to historical levels. The notion that this may have contributed to the subprime lending crisis is consistent with the findings that banks adopted lax credit standards (see Dell'Ariccia, Igan, and Laeven 2008, Keys, Mukherjee, Seru, and Vig 2008), which in turn could have led to an increase in credit availability and off-balance sheet guarantees.
Thus, while Diamond and Rajan (2000, 2001) argue that financial fragility is needed to create liquidity, our analysis offers the intriguing possibility that the causality may be reversed as well: too much liquidity creation may lead to financial fragility.

5. The effect of capital on banks' competitive positions and profitability around financial crises

This section focuses on the second goal of the paper – examining how bank capital affects banks' competitive positions and profitability around financial crises. We first explain our methodology and provide summary statistics. We then present and discuss the empirical results. In an additional check, we examine whether the stock return performance of high- and low-capital listed banks is consistent with the competitive position and profitability results for large banks. In another check, we generate some "fake" crises to analyze whether our findings hold during "normal" times as well.

5.1. Empirical approach

To examine whether banks with high capital ratios improve their competitive positions and profitability during financial crises, and if so, whether they are able to hold on to this improved performance after these crises, we focus on the behavior of individual banks rather than that of the banking sector as a whole. Because our analysis of aggregate liquidity creation by banks shows substantial differences across crises, we do not pool the data from all the crises but instead analyze each crisis separately. Our findings below that the coefficients of interest differ substantially across crises tend to justify this separate treatment of the different crises. We use the following regression specification for each of the five crises:

ΔPERFi,j = α + β1 * EQRATi,j + B * Zi,j    (1)

where ΔPERFi,j is the change in bank i's performance around crisis j, EQRATi,j is the bank's average capital ratio before the crisis, and Zi,j includes a set of control variables averaged over the pre-crisis period. All of these variables are discussed in Section 5.2. Since we use a cross-sectional regression model, bank and year fixed effects are not included. In all regressions, t-statistics are based on robust standard errors.

Given documented differences between large and small banks in terms of portfolio composition (e.g., Kashyap, Rajan, and Stein 2002, Berger, Miller, Petersen, Rajan, and Stein 2005) and the effect of capital on liquidity creation (Berger and Bouwman forthcoming), we split the sample into large and small banks, and run all regressions separately for these two sets of banks. Large banks have gross total assets (GTA) exceeding $1 billion at the end of the quarter preceding the crisis and small banks have GTA up to $1 billion at the end of that quarter.

5.2. Variable descriptions and summary statistics

We use two measures of a bank's performance: competitive position and profitability. The bank's competitive position is measured as the bank's market share of overall liquidity creation, i.e., the dollar amount of liquidity created by the bank divided by the dollar amount of liquidity created by the industry. Our focus on the share of liquidity creation is a departure from the traditional focus on a bank's market share of deposits.
Liquidity creation is a more comprehensive measure of banking activities since it does not just consider one funding item but instead is based on all the bank's on-balance sheet and off-balance sheet activities. To establish whether banks improve their competitive positions during the crisis, we define the change in liquidity creation market share, ΔLCSHARE, as the bank's average market share during the crisis minus its average market share over the eight quarters before the crisis, normalized by its average pre-crisis market share. To examine whether these banks hold on to their improved performance after the crisis, we also measure each bank's average market share over the eight quarters after the crisis minus its average market share over the eight quarters before the crisis, again normalized by its average market share before the crisis.

The second performance measure is the bank's profitability, measured as the return on equity (ROE), i.e., net income divided by stockholders equity.23 To examine whether a bank improves its profitability during a crisis, we focus on the change in profitability, ΔROE, measured as the bank's average ROE during the crisis minus the bank's average ROE over the eight quarters before the crisis.24 To analyze whether the bank is able to hold on to improved profitability, we focus on the bank's average ROE over the eight quarters after the crisis minus its average ROE over the eight quarters before the crisis.

To mitigate the influence of outliers, ΔLCSHARE and ΔROE are winsorized at the 3% level. Furthermore, to ensure that average values are calculated based on a sufficient number of quarters, we require that at least half of a bank's pre-crisis / crisis / post-crisis observations are available for both performance measures around a crisis. Since the subprime lending crisis was still ongoing at the end of the sample period, we require that at least half of a bank's pre-subprime crisis observations and all three quarters of its subprime crisis observations are available.

23 We use ROE, the bank's net income divided by equity, rather than return on assets (ROA), net income divided by assets, since banks may have substantial off-balance sheet portfolios. Banks must allocate capital against every off-balance sheet activity they engage in. Hence, net income and equity both reflect the bank's on-balance sheet and off-balance sheet activities. In contrast, ROA divides net income earned based on on-balance sheet and off-balance sheet activities merely by the size of the on-balance sheet activities.
24 We do not divide by the bank's ROE before the crisis since ROE itself is already a scaled variable.

The key exogenous variable is EQRAT, the bank's capital ratio averaged over the eight quarters before the crisis. EQRAT is the ratio of equity capital to gross total assets, GTA.25 The control variables include: bank size, bank risk, bank holding company membership, local market competition, and proxies for the economic environment. Bank size is controlled for by including lnGTA, the log of GTA, in all regressions. In addition, we run regressions separately for large and small banks.
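A hedged sketch of how regression (1) might be estimated for one crisis and one size class, using the control variables just listed and heteroskedasticity-robust t-statistics. Variable names mirror the text (EQRAT, lnGTA, z-score, D-BHC, HERF), but the synthetic data, the formula interface, and the specific HC1 covariance choice are our own illustration, not the authors' code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic cross-section: one row per bank for a given crisis.
rng = np.random.default_rng(0)
n = 500
crisis_sample = pd.DataFrame({
    "eqrat":  rng.uniform(0.02, 0.20, n),   # pre-crisis equity/GTA ratio
    "lngta":  rng.normal(12.0, 2.0, n),     # log of gross total assets
    "zscore": rng.gamma(2.0, 10.0, n),      # distance to default
    "d_bhc":  rng.integers(0, 2, n),        # BHC membership dummy
    "herf":   rng.uniform(0.05, 0.40, n),   # local deposit Herfindahl index
})
# Synthetic outcome: change in liquidity creation market share (the paper
# winsorizes this at the 3% level; omitted here for brevity).
crisis_sample["d_lcshare"] = 0.5 * crisis_sample["eqrat"] + rng.normal(0, 0.1, n)

model = smf.ols("d_lcshare ~ eqrat + lngta + zscore + d_bhc + herf",
                data=crisis_sample).fit(cov_type="HC1")
print(model.summary())  # robust standard errors (HC1 is our choice here)
```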
It is measured as a bank’s return on assets plus the equity capital/GTA ratio divided by the standard deviation of the return on assets over the eight quarters before the crisis. To control for bank holding company status, we include D-BHC, a dummy variable that equals 1 if the bank was part of a bank holding company. Bank holding company membership m ay affect a bank’s competitive position because the holding company is required to act as a source of strength to all the banks it owns, and may also inject equity voluntarily when needed.In addition, other banks in the holding company provide cross-guarantees. Furthermore, Houston, James, and Marcus (1997) find that bank loan growth depends on BHC membership. We control for local market competition by including HERF, the bank-level HerfindahlHirschman index of deposit concentration for the markets in which the bank is p