Wednesday, October 30, 2019
How Should Organizational Information Systems Be Audited for Security Essay
How Should Organizational Information Systems Be Audited for Security - Essay Example (U. S. General Accounting Office; Mandol and Verma; Cert-In; Stanford University; Davis).

At present, businesses should take a number of steps to formulate or improve an IS security audit capability. For instance, organizations must clearly outline their business goals and aims. After that, the business should evaluate its own information security audit readiness. This kind of evaluation requires organizations to recognize a variety of matters such as reporting limitations, legal problems, the audit situation, security and safety vulnerabilities, the capabilities of automated tools, and associated costs. Additionally, it is essential for organizations to plan how to decide which information systems security audit projects should be performed, for instance both stand-alone information system security audit projects and those projects which require support from the information systems security audit capability. Thus, when the planning stage is successfully completed, businesses should be able to connect the aims and objectives selected in the initial phase to the tasks required for their completion. Throughout the process, businesses should not ignore the resources available on the Web for research and training (U. S. General Accounting Office; Mandol and Verma; Cert-In; Stanford University; Davis).

Moreover, making a decision regarding the organization's aims and objectives for developing or improving an information system's security audit capability will help it determine and understand the varieties of skills, tools and training required to carry out this process. In this scenario, it is essential for organizations to define objectives and aims early, even without initially deciding how and by whom the business aims and objectives will be met (for instance, whether the resources will be contractors, in-house staff, shared staff or some combination). In addition, the establishment of interim milestones will facilitate a staged accomplishment of the organization's desired policy. While building an information system security audit capability, administration should review the organization's information systems security audit readiness by keeping the applicable issues in mind. In this scenario, establishing a baseline by recognizing strengths and weaknesses will help an organization choose the best way to proceed (U. S. General Accounting Office; Mandol and Verma; Cert-In; Stanford University; Davis).

Moreover, the process of tackling information security risks varies and depends on the nature of the processing carried out by the business and the sensitivity of the data and information being processed. To judge these issues and risks fully, the auditor should thoroughly understand the business's computer operations and major applications. In this scenario, an important part of planning to create or improve a successful information systems security audit capability can encompass activities such as assessing the present staff's skills, knowledge and capabilities to decide what the audit capability is at present and what knowledge
Monday, October 28, 2019
Emo Culture Essay Example for Free
Emo Culture Essay

Like the social and fashion trends of eras long gone, emo is not simply about the way you dress; it is a lifestyle. It culminates in your clothing, shoes, hairstyle, attitude and, most importantly, musical selection. This section describes the emo lifestyle and attitudes.

People do tend to adopt at least the attitudes of the music they listen to most, even if they don't admit it. This is because a lot of people are not able to separate themselves from the ideas that are expressed. Music is different from other art-forms in that it penetrates the soul in a way something visual cannot. People seem to like to group together; it's in our nature, and emo is just another group or sub-culture. People join it because they might agree with some, most or all of what the group is generally about. Being emo is just another way that people are trying to express themselves, really the same as other street styles, just with a different soundtrack. In the end, each of the people who have chosen to follow the scene is their own person; they are just part of a scene that is tipped as being defiant and unacceptable, something most young people are drawn to.

What are Emos like?

Firstly, labelling someone as an emo based on their hairstyle is a poor way to interpret personal expression, just as calling someone a goth based on their preference for black clothing is. Whether or not a person listens to emo music, writes emo poetry, or adopts an alternative lifestyle is a personal decision that does not automatically have anything to do with the colour or cut of their hair. Emo styles are unique, individual looks that say a lot about the person's style, but the emotions behind them may never be understood by anyone else.

When referring to a person's personality and attitude, most definitions of emo include a number of the following terms: sensitive, shy, quiet, sad, introverted, glum, self-pitying, mysterious and angst-ridden. Depression and broken-heartedness are sometimes used to describe the emo personality. Emos feel society doesn't accept them, they are outcasts and nobody understands them! This is generalising, and it is important to note that those into the emo / scene culture can obviously also be the opposite of the personality traits listed above, as with anyone. At its core, emo is all about being upfront with your emotions. Hot Topic even issued a patch that read, "cheer up, emo kid!" These personality traits are often identified by a person's music and fashion (generalising here). For example, the emo band Hawthorne Heights makes multiple references to unrequited love and to emotional and relationship problems. Many of these traits are present in most teenagers and not just emos! The courting of misery and death is a long-established teenage tradition. When death is a long way off, you can afford to be more morbid about it. In particular, Goths and Emos are a rebellion against sporty, manly cultures. Frailness, which conveys a sense of vulnerability, has been associated with male emos in particular, but from what I know this isn't particularly valid.

Finally, touching on the term "scene" that has become popular since the emo subculture kicked off: scene kids, I believe, are more about the style and looking like an emo without the personality of it all. In other words, scene kids are the ones that dress emo, but only because it's a trend; or you could say scene is emo without the emotion. The term is subject to significant debate, like emo, though.
Saturday, October 26, 2019
Zara: Information Technology For Fast Fashion :: Problem, Solution, Case Study
Problem Statement: In 2003, Zara's CIO must decide whether to upgrade the retailer's IT infrastructure and capabilities. At the time of the case, the company relies on an out-of-date operating system for its store terminals and has no full-time network in place across stores. Despite these limitations, however, Zara's parent company, Inditex, has built an extraordinarily well-performing value chain that is by far the most responsive in the industry. Therefore the major problem for the company is to decide whether to upgrade the present system and, by doing so, risk the reliability it has with the current system, or to continue with the present DOS-based system, which will not be compatible with future changes or improvements.

Analysis & Recommendation: Zara's main strategy is the ability to respond very quickly to the demands of target customers, which calls for identifying customer trends in advance. The company has been able to identify the trends and meet the demand with the help of its autonomously organized structure and its effective value chain systems. The present system followed by Zara has been very effective and very easy to maintain, which as a result has persuaded the company to continue without any change in the present system so far. The problem that Zara faces right now is that the system it uses, P-O-S (point-of-sale terminals), runs on DOS, which Microsoft no longer supports, and any hardware change in the POS terminal will not be compatible with the current POS software. Although the sense of urgency for the change may not be that high, investing in IT infrastructure is a must, as MS-DOS is an obsolete technology and there is no contract or guarantee from the POS terminal vendor that it will continue supplying the same terminal without major hardware changes for any specific period of time; therefore change is unavoidable.

The other main issue that Zara faces is that the stores don't share inventory information electronically, and hence inventory management becomes highly difficult and manual. The decision-making process is based on the judgment of employees throughout the company instead of relying on a small set of decision makers; the majority of the decisions were made by store managers, and as a result they placed orders for the items rather than simply accepting and displaying what headquarters decided to send them.
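To make the inventory-sharing gap concrete, the sketch below shows one minimal way per-store stock could be combined into a shared, network-wide view before ordering decisions are made. The store names, item codes and aggregation approach are illustrative assumptions for this example, not details from the case.

# Hypothetical sketch: aggregating per-store stock into a shared inventory view.
# Store names and item codes are invented; Zara's actual systems differ.
from collections import defaultdict

def aggregate_inventory(store_reports):
    """Combine {store: {sku: units}} reports into network-wide totals per SKU."""
    totals = defaultdict(int)
    for stock in store_reports.values():
        for sku, units in stock.items():
            totals[sku] += units
    return dict(totals)

if __name__ == "__main__":
    reports = {
        "store_madrid": {"jacket-38": 4, "skirt-36": 0},
        "store_porto": {"jacket-38": 1, "skirt-36": 7},
    }
    # With a shared view, a manager can see that skirt-36 is available elsewhere
    # before placing a replenishment order.
    print(aggregate_inventory(reports))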
Thursday, October 24, 2019
Introduction to Computer Organization and Computer Evolution Essay
In describing computers, a distinction is often made between computer architecture and computer organization. Although it is difficult to give precise definitions for these terms, a consensus exists about the general areas covered by each.

Computer architecture refers to those attributes of a system visible to a programmer or, put another way, those attributes that have a direct impact on the logical execution of a program. Examples of architectural attributes include the instruction set, the number of bits used to represent various data types (e.g., numbers, characters), I/O mechanisms, and techniques for addressing memory.

Computer organization refers to the operational units and their interconnections that realize the architectural specifications. Examples of organizational attributes include those hardware details transparent to the programmer, such as control signals; interfaces between the computer and peripherals; and the memory technology used.

As an example, it is an architectural design issue whether a computer will have a multiply instruction. It is an organizational issue whether that instruction will be implemented by a special multiply unit or by a mechanism that makes repeated use of the add unit of the system. The organizational decision may be based on the anticipated frequency of use of the multiply instruction, the relative speed of the two approaches, and the cost and physical size of a special multiply unit.

Historically, and still today, the distinction between architecture and organization has been an important one. Many computer manufacturers offer a family of computer models, all with the same architecture but with differences in organization. Consequently, the different models in the family have different price and performance characteristics. Furthermore, a particular architecture may span many years and encompass a number of different computer models, its organization changing with changing technology. A prominent example of both these phenomena is the IBM System/370 architecture. This architecture was first introduced in 1970 and included a number of models. The customer with modest requirements could buy a cheaper, slower model and, if demand increased, later upgrade to a more expensive, faster model without having to abandon software that had already been developed. These newer models retained the same architecture so that the customer's software investment was protected. Remarkably, the System/370 architecture, with a few enhancements, has survived to this day as the architecture of IBM's mainframe product line.
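The multiply example above can be made concrete. The following sketch (a simplified illustration, not the design of any particular machine) shows the organizational choice of realizing multiplication through repeated use of an add unit; the architectural fact that a multiply operation exists is unchanged, while the way it is carried out differs.

def add(a, b):
    """Stand-in for the machine's add unit."""
    return a + b

def multiply_by_repeated_addition(a, b):
    """One possible organization of a multiply instruction: reuse the add unit."""
    result = 0
    for _ in range(abs(b)):
        result = add(result, a)
    return result if b >= 0 else -result

# A dedicated multiply unit and this loop realize the same architectural
# operation; they differ in speed, cost, and physical size of the hardware.
assert multiply_by_repeated_addition(6, 7) == 42
assert multiply_by_repeated_addition(6, -3) == -18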
II. Structure and Function

A computer is a complex system; contemporary computers contain millions of elementary electronic components. The key is to recognize the hierarchical nature of most complex systems, including the computer. A hierarchical system is a set of interrelated subsystems, each of the latter, in turn, hierarchical in structure until we reach some lowest level of elementary subsystem. The hierarchical nature of complex systems is essential to both their design and their description. The designer need only deal with a particular level of the system at a time. At each level, the system consists of a set of components and their interrelationships. The behaviour at each level depends only on a simplified, abstracted characterization of the system at the next lower level.

At each level, the designer is concerned with structure and function:
• Structure: The way in which the components are interrelated
• Function: The operation of each individual component as part of the structure

The computer system will be described from the top down. We begin with the major components of a computer, describing their structure and function, and proceed to successively lower layers of the hierarchy.

Function

Both the structure and functioning of a computer are, in essence, simple. Figure 1.1 depicts the basic functions that a computer can perform. In general terms, there are only four:
• Data processing: The computer, of course, must be able to process data. The data may take a wide variety of forms, and the range of processing requirements is broad. However, we shall see that there are only a few fundamental methods or types of data processing.
• Data storage: It is also essential that a computer store data. Even if the computer is processing data on the fly (i.e., data come in and get processed, and the results go out immediately), the computer must temporarily store at least those pieces of data that are being worked on at any given moment. Thus, there is at least a short-term data storage function. Equally important, the computer performs a long-term data storage function. Files of data are stored on the computer for subsequent retrieval and update.
• Data movement: The computer must be able to move data between itself and the outside world. The computer's operating environment consists of devices that serve as either sources or destinations of data. When data are received from or delivered to a device that is directly connected to the computer, the process is known as input-output (I/O), and the device is referred to as a peripheral. When data are moved over longer distances, to or from a remote device, the process is known as data communications.
• Control: Finally, there must be control of these three functions. Ultimately, this control is exercised by the individual(s) who provides the computer with instructions. Within the computer, a control unit manages the computer's resources and orchestrates the performance of its functional parts in response to those instructions.

FIGURE 1.1 A FUNCTIONAL VIEW OF THE COMPUTER

At this general level of discussion, the number of possible operations that can be performed is few. Figure 1.2 depicts the four possible types of operations. The computer can function as a data movement device (Figure 1.2a), simply transferring data from one peripheral or communications line to another. It can also function as a data storage device (Figure 1.2b), with data transferred from the external environment to computer storage (read) and vice versa (write). The final two diagrams show operations involving data processing, on data either in storage (Figure 1.2c) or en route between storage and the external environment (Figure 1.2d).
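As a rough illustration of the four basic functions above, the toy model below shows a device whose control part decides whether data is moved through, stored, or processed in place. It is purely illustrative, not a description of real hardware, and the doubling operation chosen for "processing" is an arbitrary assumption of the example.

# Toy model of the four basic computer functions; purely illustrative.
class ToyComputer:
    def __init__(self):
        self.storage = []                     # data storage

    def move(self, data):
        """Data movement: pass data straight through (Figure 1.2a)."""
        return data

    def store(self, data):
        """Data storage: keep data for later retrieval (Figure 1.2b)."""
        self.storage.append(data)

    def process_stored(self):
        """Data processing on data held in storage (Figure 1.2c)."""
        self.storage = [value * 2 for value in self.storage]

    def control(self, operation, data=None):
        """Control: decide which functional part acts, in response to an instruction."""
        if operation == "move":
            return self.move(data)
        if operation == "store":
            self.store(data)
        if operation == "process":
            self.process_stored()

computer = ToyComputer()
computer.control("store", 21)
computer.control("process")
print(computer.storage)    # [42]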
Structure

Figure 1.3 is the simplest possible depiction of a computer. The computer interacts in some fashion with its external environment. In general, all of its linkages to the external environment can be classified as peripheral devices or communication lines.

There are four main structural components (Figure 1.4):
• Central processing unit (CPU): Controls the operation of the computer and performs its data processing functions; often simply referred to as the processor
• Main memory: Stores data
• I/O: Moves data between the computer and its external environment
• System interconnection: Some mechanism that provides for communication among CPU, main memory, and I/O

FIGURE 1.3 THE COMPUTER
FIGURE 1.4 THE COMPUTER: TOP-LEVEL STRUCTURE

There may be one or more of each of the aforementioned components. Traditionally, there has been just a single CPU; in recent years, there has been increasing use of multiple processors in a single computer. The most interesting and in some ways the most complex component is the CPU; its structure is depicted in Figure 1.5. Its major structural components are:
• Control unit: Controls the operation of the CPU and hence the computer
• Arithmetic and logic unit (ALU): Performs the computer's data processing functions
• Registers: Provide storage internal to the CPU
• CPU interconnection: Some mechanism that provides for communication among the control unit, ALU, and registers

FIGURE 1.5 THE CENTRAL PROCESSING UNIT (CPU)

Finally, there are several approaches to the implementation of the control unit; one common approach is a microprogrammed implementation. In essence, a microprogrammed control unit operates by executing microinstructions that define the functionality of the control unit. The structure of the control unit can be depicted as in Figure 1.6.

FIGURE 1.6 THE CONTROL UNIT
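A compact way to see this top-level structure is to model it as data. The sketch below mirrors Figures 1.4 and 1.5: a computer made of a CPU, main memory, I/O and a system interconnection, with the CPU itself containing a control unit, an ALU and registers. It is an illustrative model only, not an emulator, and the register names and memory size are assumptions chosen for the example.

# Illustrative structural model of Figures 1.4 and 1.5; not a working emulator.
from dataclasses import dataclass, field

@dataclass
class CPU:
    control_unit: str = "sequences instruction fetch and execution"
    alu: str = "performs arithmetic and logic operations"
    registers: dict = field(default_factory=lambda: {"PC": 0, "ACC": 0})

@dataclass
class Computer:
    cpu: CPU = field(default_factory=CPU)
    main_memory: list = field(default_factory=lambda: [0] * 16)   # stores data and instructions
    io_devices: list = field(default_factory=lambda: ["keyboard", "display"])
    system_interconnection: str = "bus connecting CPU, memory and I/O"

machine = Computer()
print(machine.cpu.registers)    # {'PC': 0, 'ACC': 0}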
III. Importance of Computer Organization and Architecture

The computer lies at the heart of computing. Without it, most of the computing disciplines today would be a branch of theoretical mathematics. To be a professional in any field of computing today, one should not regard the computer as just a black box that executes programs by magic. All students of computing should acquire some understanding and appreciation of a computer system's functional components, their characteristics, their performance, and their interactions. There are practical implications as well. Students need to understand computer architecture in order to structure a program so that it runs more efficiently on a real machine. In selecting a system to use, they should be able to understand the tradeoff among various components, such as CPU clock speed vs. memory size. [Reported by the Joint Task Force on Computing Curricula of the IEEE (Institute of Electrical and Electronics Engineers) Computer Society and ACM (Association for Computing Machinery).]

IV. Computer Evolution

A brief history of computers is interesting and also serves the purpose of providing an overview of computer structure and function. A consideration of the need for balanced utilization of computer resources provides a context that is useful.

The First Generation: Vacuum Tubes

ENIAC: The ENIAC (Electronic Numerical Integrator And Computer), designed by and constructed under the supervision of John Mauchly and John Presper Eckert at the University of Pennsylvania, was the world's first general-purpose electronic digital computer. The project was a response to U.S. wartime needs during World War II. The Army's Ballistics Research Laboratory (BRL), an agency responsible for developing range and trajectory tables for new weapons, was having difficulty supplying these tables accurately and within a reasonable time frame.

Mauchly, a professor of electrical engineering at the University of Pennsylvania, and Eckert, one of his graduate students, proposed to build a general-purpose computer using vacuum tubes for the BRL's application. In 1943, the Army accepted this proposal, and work began on the ENIAC. The resulting machine was enormous, weighing 30 tons, occupying 1500 square feet of floor space and containing more than 18,000 vacuum tubes. When operating, it consumed 140 kilowatts of power. It was also substantially faster than any electromechanical computer, being capable of 5000 additions per second.

The ENIAC was a decimal rather than a binary machine. That is, numbers were represented in decimal form and arithmetic was performed in the decimal system. Its memory consisted of 20 "accumulators," each capable of holding a 10-digit decimal number. A ring of 10 vacuum tubes represented each digit. At any time, only one vacuum tube was in the ON state, representing one of the 10 digits. The major drawback of the ENIAC was that it had to be programmed manually by setting switches and plugging and unplugging cables. The ENIAC was completed in 1946, too late to be used in the war effort. Instead, its first task was to perform a series of complex calculations that were used to help determine the feasibility of the hydrogen bomb. The use of the ENIAC for a purpose other than that for which it was built demonstrated its general-purpose nature. The ENIAC continued to operate under BRL management until 1955, when it was disassembled.

The von Neumann Machine: The task of entering and altering programs for the ENIAC was extremely tedious. The programming process could be facilitated if the program could be represented in a form suitable for storing in memory alongside the data. Then, a computer could get its instructions by reading them from memory, and a program could be set or altered by setting the values of a portion of memory. This idea, known as the stored-program concept, is usually attributed to the ENIAC designers, most notably the mathematician John von Neumann, who was a consultant on the ENIAC project. Alan Turing developed the idea at about the same time. The first publication of the idea was in a 1945 proposal by von Neumann for a new computer, the EDVAC (Electronic Discrete Variable Automatic Computer). In 1946, von Neumann and his colleagues began the design of a new stored-program computer, referred to as the IAS computer, at the Princeton Institute for Advanced Studies. The IAS computer, although not completed until 1952, is the prototype of all subsequent general-purpose computers. Figure 1.7 shows the general structure of the IAS computer. It consists of:
• A main memory, which stores both data and instructions
• An arithmetic and logic unit (ALU) capable of operating on binary data
• A control unit, which interprets the instructions in memory and causes them to be executed
• Input and output (I/O) equipment operated by the control unit

FIGURE 1.7 STRUCTURE OF THE IAS COMPUTER
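The stored-program concept just described, with instructions kept in the same memory as data and fetched and executed one at a time, can be sketched in a few lines. The tiny instruction set below (LOAD, ADD, STORE, HALT) and its encoding are invented for illustration and do not correspond to the actual IAS instruction format.

# Minimal stored-program machine sketch; opcodes and encoding are invented.
LOAD, ADD, STORE, HALT = 0, 1, 2, 3

def run(memory):
    """Fetch, decode and execute instructions held in the same memory as data."""
    pc, acc = 0, 0                          # program counter and accumulator
    while True:
        opcode, operand = memory[pc]        # fetch
        pc += 1
        if opcode == LOAD:                  # decode and execute
            acc = memory[operand]
        elif opcode == ADD:
            acc += memory[operand]
        elif opcode == STORE:
            memory[operand] = acc
        elif opcode == HALT:
            return memory

# Cells 0-3 hold the program; cells 4-6 hold data. Altering cell contents changes
# the program without any rewiring, which is the point of the concept.
memory = [(LOAD, 4), (ADD, 5), (STORE, 6), (HALT, 0), 20, 22, 0]
print(run(memory)[6])    # 42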
Commercial Computers

The 1950s saw the birth of the computer industry, with two companies, Sperry and IBM, dominating the marketplace.

UNIVAC I: In 1947, Eckert and Mauchly formed the Eckert-Mauchly Computer Corporation to manufacture computers commercially. Their first successful machine was the UNIVAC I (Universal Automatic Computer), which was commissioned by the Bureau of the Census for the 1950 calculations.

The Eckert-Mauchly Computer Corporation became part of the UNIVAC division of the Sperry-Rand Corporation, which went on to build a series of successor machines. The UNIVAC I was the first successful commercial computer. It was intended, as the name implies, for both scientific and commercial applications. The first paper describing the system listed matrix algebraic computations, statistical problems, premium billings for a life insurance company, and logistical problems as a sample of the tasks it could perform.

UNIVAC II: The UNIVAC II, which had greater memory capacity and higher performance than the UNIVAC I, was delivered in the late 1950s and illustrates several trends that have remained characteristic of the computer industry. First, advances in technology allow companies to continue to build larger, more powerful computers. Second, each company tries to make its new machines upward compatible with the older machines. This means that programs written for the older machines can be executed on the new machine. This strategy is adopted in the hope of retaining the customer base; that is, when a customer decides to buy a newer machine, he or she is likely to get it from the same company to avoid losing the investment in programs. The UNIVAC division also began development of the 1100 series of computers, which was to be its major source of revenue. This series illustrates a distinction that existed at one time. In 1955, IBM, which stands for International Business Machines, introduced the companion 702 product, which had a number of hardware features that suited it to business applications. These were the first of a long series of 700/7000 computers that established IBM as the overwhelmingly dominant computer manufacturer.

The Second Generation: Transistors

The first major change in the electronic computer came with the replacement of the vacuum tube by the transistor. The transistor is smaller, cheaper, and dissipates less heat than a vacuum tube, but can be used in the same way as a vacuum tube to construct computers. Unlike the vacuum tube, which requires wires, metal plates, a glass capsule, and a vacuum, the transistor is a solid-state device, made from silicon. The transistor was invented at Bell Labs in 1947 and by the 1950s had launched an electronic revolution. National Cash Register (NCR) and, more successfully, Radio Corporation of America (RCA) were the front-runners with some small transistor machines. IBM followed shortly with the 7000 series. The second generation is noteworthy also for the appearance of the Digital Equipment Corporation (DEC). DEC was founded in 1957 and, in that year, delivered its first computer, the PDP-1 (Programmed Data Processor). This computer and this company began the minicomputer phenomenon that would become so prominent in the third generation.

The IBM 7094: From the introduction of the 700 series in 1952 to the introduction of the last member of the 7000 series in 1964, this IBM product line underwent an evolution that is typical of computer products. Successive members of the product line show increased performance, increased capacity, and/or lower cost. Table 1.1 illustrates this trend.

The Third Generation: Integrated Circuits

A single, self-contained transistor is called a discrete component. Throughout the 1950s and early 1960s, electronic equipment was composed largely of discrete components: transistors, resistors, capacitors, and so on.
Discrete components were manufactured separately, packaged in their own containers, and soldered or wired together onto masonite-like circuit boards, which were then installed in computers, oscilloscopes, and other electronic equipment. Early second-generation computers contained about 10,000 transistors. This figure grew to the hundreds of thousands, making the manufacture of newer, more powerful machines increasingly difficult. In 1958 came the achievement that revolutionized electronics and started the era of microelectronics: the invention of the integrated circuit.

Microelectronics: Microelectronics means, literally, "small electronics." Since the beginnings of digital electronics and the computer industry, there has been a persistent and consistent trend toward the reduction in size of digital electronic circuits. The basic elements of a digital computer, as we know, must perform storage, movement, processing, and control functions. Only two fundamental types of components are required: gates and memory cells. A gate is a device that implements a simple Boolean or logical function. Such devices are called gates because they control data flow in much the same way that canal gates do. The memory cell is a device that can store one bit of data; that is, the device can be in one of two stable states at any time. By interconnecting large numbers of these fundamental devices, we can construct a computer. We can relate this to our four basic functions as follows:
• Data storage: Provided by memory cells.
• Data processing: Provided by gates.
• Data movement: The paths between components are used to move data from memory to memory and from memory through gates to memory.
• Control: The paths between components can carry control signals. When the control signal is ON, the gate performs its function on the data inputs and produces a data output. Similarly, the memory cell will store the bit that is on its input lead when the WRITE control signal is ON and will place the bit that is in the cell on its output lead when the READ control signal is ON.

Thus, a computer consists of gates, memory cells, and interconnections among these elements.
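To make the gate and memory-cell distinction concrete, here is a toy software analogue (real gates and cells are electronic devices, not program objects): a NAND gate as a simple Boolean function, and a one-bit cell that latches its input only while the WRITE control signal is ON and drives its output only while the READ control signal is ON.

# Toy software analogue of the two fundamental components; purely illustrative.
def nand_gate(a, b):
    """A gate implements a simple Boolean function (here, NAND)."""
    return 0 if (a and b) else 1

class MemoryCell:
    """Stores one bit; responds to WRITE and READ control signals."""
    def __init__(self):
        self.bit = 0

    def clock(self, data_in, write, read):
        if write:               # latch the input bit while WRITE is ON
            self.bit = data_in
        if read:                # drive the output lead while READ is ON
            return self.bit
        return None

cell = MemoryCell()
cell.clock(data_in=nand_gate(1, 0), write=1, read=0)   # store the gate's output
print(cell.clock(data_in=0, write=0, read=1))           # 1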
The integrated circuit exploits the fact that such components as transistors, resistors, and conductors can be fabricated from a semiconductor such as silicon. It is merely an extension of the solid-state art to fabricate an entire circuit in a tiny piece of silicon rather than assemble discrete components made from separate pieces of silicon into the same circuit. Many transistors can be produced at the same time on a single wafer of silicon. Equally important, these transistors can be connected with a process of metallization to form circuits.

Figure 1.8 depicts the key concepts in an integrated circuit. A thin wafer of silicon is divided into a matrix of small areas, each a few millimetres square. The identical circuit pattern is fabricated in each area, and the wafer is broken up into chips. Each chip consists of many gates and/or memory cells plus a number of input and output attachment points. This chip is then packaged in housing that protects it and provides pins for attachment to devices beyond the chip. A number of these packages can then be interconnected on a printed circuit board to produce larger and more complex circuits. As time went on, it became possible to pack more and more components on the same chip. This growth in density is illustrated in Figure 1.9; it is one of the most remarkable technological trends ever recorded.

This figure reflects the famous Moore's law, which was propounded by Gordon Moore, cofounder of Intel, in 1965. Moore observed that the number of transistors that could be put on a single chip was doubling every year and correctly predicted that this pace would continue into the near future.

FIGURE 1.9 GROWTH IN CPU TRANSISTOR COUNT

The consequences of Moore's law are profound:
1. The cost of a chip has remained virtually unchanged during this period of rapid growth in density. This means that the cost of computer logic and memory circuitry has fallen at a dramatic rate.
2. Because logic and memory elements are placed closer together on more densely packed chips, the electrical path length is shortened, increasing operating speed.
3. The computer becomes smaller, making it more convenient to place in a variety of environments.
4. There is a reduction in power and cooling requirements.
5. The interconnections on the integrated circuit are much more reliable than solder connections. With more circuitry on each chip, there are fewer interchip connections.
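Moore's observation above compounds quickly. The few lines of arithmetic below illustrate the every-year doubling as stated here; the starting count of 64 transistors in 1965 is an assumption chosen purely for the example, not a historical figure.

# Worked illustration of exponential doubling; the starting count is assumed.
def transistors_after(years, start=64, doubling_period_years=1.0):
    """Project a transistor count under a fixed doubling period."""
    return int(start * 2 ** (years / doubling_period_years))

for years in (0, 5, 10):
    print(years, "years:", transistors_after(years))
# 0 years: 64
# 5 years: 2048
# 10 years: 65536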
IBM System/360: By 1964, IBM had a firm grip on the computer market with its 7000 series of machines. In that year, IBM announced the System/360, a new family of computer products. Although the announcement itself was no surprise, it contained some unpleasant news for current IBM customers: the 360 product line was incompatible with older IBM machines. Thus, the transition to the 360 would be difficult for the current customer base. This was a bold step by IBM, but one IBM felt was necessary to break out of some of the constraints of the 7000 architecture and to produce a system capable of evolving with the new integrated circuit technology. The 360 was the success of the decade and cemented IBM as the overwhelmingly dominant computer vendor, with a market share above 70%. The System/360 was the industry's first planned family of computers. The family covered a wide range of performance and cost. Table 1.2 indicates some of the key characteristics of the various models in 1965. The concept of a family of compatible computers was both novel and extremely successful. The characteristics of a family are as follows:
• Similar or identical instruction set: The program that executes on one machine will also execute on any other.
• Similar or identical operating system: The same basic operating system is available for all family members.
• Increasing speed: The rate of instruction execution increases in going from lower to higher family members.
• Increasing number of I/O ports: In going from lower to higher family members.
• Increasing memory size: In going from lower to higher family members.
• Increasing cost: In going from lower to higher family members.

DEC PDP-8: Another momentous first shipment occurred: the PDP-8 from DEC. At a time when the average computer required an air-conditioned room, the PDP-8 (dubbed a minicomputer by the industry) was small enough that it could be placed on top of a lab bench or be built into other equipment. It could not do everything the mainframe could, but at $16,000, it was cheap enough for each lab technician to have one. The low cost and small size of the PDP-8 enabled other manufacturers to purchase a PDP-8 and integrate it into a total system for resale. These other manufacturers came to be known as original equipment manufacturers (OEMs), and the OEM market became and remains a major segment of the computer marketplace. As DEC's official history puts it, the PDP-8 "established the concept of minicomputers, leading the way to a multibillion dollar industry."

Later Generations

Beyond the third generation there is less general agreement on defining generations of computers. Table 1.3 suggests that there have been a number of later generations, based on advances in integrated circuit technology.

Table 1.3 (column headings): Generation | Approximate Dates | Technology | Typical Speed (operations per second)

With the rapid pace of technology, the high rate of introduction of new products, and the importance of software and communications as well as hardware, the classification by generation becomes less clear and less meaningful. In this section, we mention two of the most important of these results.

Semiconductor Memory: The first application of integrated circuit technology to computers was construction of the processor (the control unit and the arithmetic and logic unit) out of integrated circuit chips. But it was also found that this same technology could be used to construct memories. In the 1950s and 1960s, most computer memory was constructed from tiny rings of ferromagnetic material, each about a sixteenth of an inch in diameter. These rings were strung up on grids of fine wires suspended on small screens inside the computer. Magnetized one way, a ring (called a core) represented a one; magnetized the other way, it stood for a zero. It was expensive, bulky, and used destructive readout. Then, in 1970, Fairchild produced the first relatively capacious semiconductor memory. This chip, about the size of a single core, could hold 256 bits of memory. It was non-destructive and much faster than core. It took only 70 billionths of a second to read a bit. However, the cost per bit was higher than that of core. In 1974, a seminal event occurred: the price per bit of semiconductor memory dropped below the price per bit of core memory. Following this, there has been a continuing and rapid decline in memory cost accompanied by a corresponding increase in physical memory density. Since 1970, semiconductor memory has been through 11 generations: 1K, 4K, 16K, 64K, 256K, 1M, 4M, 16M, 64M, 256M, and, as of this writing, 1G bits on a single chip. Each generation has provided four times the storage density of the previous generation, accompanied by declining cost per bit and declining access time.

Microprocessors: Just as the density of elements on memory chips has continued to rise, so has the density of elements on processor chips. As time went on, more and more elements were placed on each chip, so that fewer and fewer chips were needed to construct a single computer processor. A breakthrough was achieved in 1971, when Intel developed its 4004. The 4004 was the first chip to contain all of the components of a CPU on a single chip: the microprocessor was born. The 4004 can add two 4-bit numbers and can multiply only by repeated addition. By today's standards, the 4004 is hopelessly primitive, but it marked the beginning of a continuing evolution of microprocessor capability and power.
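Since the section closes with the 4004's ability to add two 4-bit numbers, here is a small sketch of what 4-bit addition means in practice: results are kept to four bits, with anything beyond reported as a carry. This is an illustration of the general idea, not a model of the 4004's actual circuitry.

def add_4bit(a, b):
    """Add two 4-bit values (0-15); return the 4-bit result and a carry flag."""
    assert 0 <= a <= 0xF and 0 <= b <= 0xF
    total = a + b
    return total & 0xF, total > 0xF     # keep the low 4 bits, flag any overflow

print(add_4bit(9, 5))    # (14, False)
print(add_4bit(12, 7))   # (3, True)  -> 19 wraps to 3 with a carry out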
Wednesday, October 23, 2019
Analyse and comment on the success Essay
Analyse and comment on the success of the title sequence of Baz Luhrmann's 1997 film adaptation of 'Romeo & Juliet'

The 1997 adaptation of Shakespeare's Romeo and Juliet by Baz Luhrmann was attempting to reach out to a younger audience by modernising the old play with new ideas, even though the old text was kept. Set in modern times with modern things that a young audience could relate to, Luhrmann successfully hauled Shakespeare's text from 16th-century Verona, Italy to late 20th-century Miami, USA. The purpose of this essay is to review, analyse and comment on the use of Luhrmann's background to help him in making the movie, the success of the film, but most importantly the cinematic success of the title scene.

The location was specifically chosen to represent modern times. America was the most modern country available. Although Luhrmann wanted to shoot the film in Miami, it was seen by the mayor of Miami as unrealistic to put the city on hold while they shot the movie; Mexico's capital, Mexico City, was used instead. It had everything the crew needed; it had a typical city milieu; it was perfect. As the movie was originally going to be set in Miami, the characters had to at least look like they were from Miami. The Montague household wore very casual Hawaiian shirts, which were not buttoned up but hung loosely on the wearer. The Capulets wore very serious, cool clothes, mostly dark colours like black and grey. The choices of clothes were made to symbolise gangsters and mob mentality. 'From ancient grudge break to new mutiny' was portrayed by Luhrmann as two rival gangs.

Casting was very much a big thing in Luhrmann's version. He had to pick actors that young people could relate to. It was hard finding such actors, as the young and popular ones had problems reading Shakespeare's diverse and complicated old English text. Luhrmann knew that Leonardo DiCaprio would be perfect for Romeo, as he was a heartthrob worldwide and would bring in the young girls who adored him.

Throughout the movie we are shown images of power. These images are more abundant in the opening title scene. Images of guns, violence and police are shown to create an atmosphere of chaos and anarchy. The guns are used, again, to create a modernised version of the old play. The guns represent the swords used by the people in the sixteenth century, and they are referred to as swords by the actors: 'put up thy swords.'

Religion is one of the main themes in the play, and Luhrmann uses many powerful images to show this in the opening title scene. Two statues of Jesus are filmed on either side of the city. The statues are opposite each other with their faces facing inwards, as if looking over the people of 'Verona' and keeping guard. Also, the Christian crucifix is used to replace some of the Ts in the scripture which occasionally flashes between the montage of images of police, violent riots and arrests; the scripture repeats the important parts of the sonnet that opens the play.

Cinematography is used to great effect in the opening scene. Zooming and panning left and right all help to create the effect of disorder and chaos. Before the title of the movie is actually shown there is a montage of images; each clip lasts only a split second. Each clip is taken from the movie to show the audience that it isn't an old-fashioned, soppy love story, but a violent, tragic blockbuster.
Using the scenes of gunfights and violence, the montage really creates an adrenalin rush. The music and the backing track for the title scene really go with the visual picture on the screen. The fast tempo gets the blood pumping and again it helps to create chaos. It has real power as it builds up towards the end of the title scene; during the montage of images it speeds up and the power of it envelops you. Then finally we see the title of the movie, and the music stops a few seconds after the title falls into view, and the audience is left in silence.

In conclusion, considering all the areas of the title scene, Baz Luhrmann has successfully given birth to a fantastic opening to Romeo and Juliet. The title scene underlines key aspects of the sonnet which opens the play, to help the people in the audience who don't understand Shakespeare and even the ones who do. He introduces the principal characters, again to stop the audience from getting confused. The use of exciting images, such as the gun and the images of police and violence, makes certain people in the audience stay and not walk out because they may believe it to be boring; many teenagers would believe Shakespeare to be boring. The opening scene had a very strong effect on me personally. It really made me want to see the movie again, even though I had already seen it about three times. The scene gripped me, and not many title scenes have done that to me. Baz Luhrmann's version of Romeo and Juliet was a complete success.
Tuesday, October 22, 2019
Comparing Thomas Hobbes and John Locke - History Essay
Comparing Thomas Hobbes and John Locke - History Essay

Thomas Hobbes and John Locke were two of the great political theorists of their time. Both created great philosophical texts that help to describe the role of government in man's life, as well as their views of man's state of nature. Even though both men have opposite views on many of their political arguments, the fact that they are able to structure their separate ideologies on the state of man in nature is the bond that connects them. Both men look toward the creation of civil order in order to protect not only the security of the individual, but also the security of the state.

For Hobbes, the state of nature is a very bleak, dreary place. He believed that people in this state were not guided by reason, but instead were guided by our innate primal, animalistic instincts. Hobbes believed that moral concepts such as the ideas of good and evil did not exist in the state of nature, and that man could use any force necessary in order to protect his life and the goods around him. Hobbes called this condition "War," which meant "every man against every man." Hobbes also described the state of nature as having none of the benefits that people in modern society take for granted: "no commerce, no agriculture, no account of time, no arts, no letters, and no society." Men in this state live with an overbearing sense of fear and grief, always on the defense in order to protect themselves and their possessions.

Hobbes relates man's wanting to escape from the state of nature and war by looking towards peace, which allows man to dissolve his incessant feeling of fear. In order to obtain peace, Hobbes looks to man using reason, which enables man to respond to what Hobbes calls "the Laws of Nature." It is through these laws that man can seek peace and exercise his natural right to all things, providing that others will do the same. Hobbes labeled this mutual transferring of rights between men a "contract." Hobbes believed that there still must be some common power in effect in order to enforce the laws, because it was Hobbes' fear that humans' hunger for power would always be a threat to the contract.

Out of the various forms of government, Hobbes preferred the idea of an absolute monarch to rule over the people. Hobbes concluded that there must be some sovereign authority, created by the people as part of the social contract, that would be endowed with the individual powers and wills of all, and would be authorized to punish anyone who broke the rules. This absolute sovereign, dubbed "Leviathan," was to be so effective because it helped to create a continuous circle that reinforced the social contract. The sovereign operated through fear; the threat of punishment helped to reinforce the mandates that the laws of nature provided, thereby ensuring the continued operation of the social contract that was in place. It was through this creation of an absolute ruler that the idea of the "Commonwealth" was created. People who lived under the rule of the sovereign in the commonwealth essentially gave up all of their own personal rights to govern themselves to the sovereign. The "people" in the commonwealth are able to retain their right to self-preservation by endowing the sovereign with all of their other rights.
It is through this transfer of power, and entering into the contract with the sovereign in the commonwealth, that Hobbes states how man is able to get out of the state of nature and into society.

John Locke also believed in many of the same ideas as Hobbes, such as the social contract and the state of nature; however, the positions he took on them were sometimes polar opposites. In Locke's view of the state of nature, while there were no civil societies yet formed, people basically were able to live in peace, because the natural laws that governed them were an innate quality which everyone had. Locke stated that in the state of nature all people were equal and had executive power of the natural laws. Whereas Hobbes believed the state of "war" was a natural part of the state of nature, Locke differed, saying that the two were not the same. Locke believed that the state of nature involved people living together, using reason to govern their lives without the need for a common superior, or leader. The state of "war" occurred when people tried to force things on others, and it was Locke's belief that when this occurred, people had the right to wage war because force without right was an adequate basis for the state of war.

In order to transition from the state of nature into a civil society, Locke believed that people would naturally want to give up their natural freedom in order to assure protection for their "lives, liberties, and property". Locke believed that the best form of government for a civil society would be one run by the majority of people with common views, and that the individual, when entering into the society, would submit himself to the will of the majority and follow the rules set forth by it. In transitioning from the state of nature to a civil society, Locke stated that the state of nature differed from a civil society because it lacked "an established, settled, known law; a known, and indifferent judge; and power to back and support the sentence".

In order to complete this transition into a civilized society, people had to relinquish their natural rights. These rights included the right to do what they wanted within the bounds of the laws of nature, and the power to punish the crimes committed against natural law. Both rights are given up in order to put oneself under the protection of the executive power of the civil society. In the end, the civil society would provide "a law, a judge, and executive working to no other end, but the peace, safety, and public good of the people." Many of Locke's ideals were considered to be very progressive at the time of their creation, and were implemented in the forming of the United States Constitution. Many of the ideas that were put into the creation of the Constitution were based on Locke's principles of equality and government working to the advantage of the people.

After entering into a civil society, Locke stated that the government of the commonwealth, using the element of a majority, should have a single legislative body that was used for the creation of laws. Locke suggests many types of governments, such as democracy or oligarchy, but he never states that one is better than the other. This again is another difference in the views between Locke and Hobbes.
While Hobbes favored a single person as the lawmaker, or absolute monarch, Locke stated that the power to create law should rest with a majority legislative body and that the law created by it should be absolute. No other body could create laws of its own, and every member of society and the commonwealth must abide by the laws created by the legislative majority. While the legislature is an absolute governing body, it does in fact have limits as well. Locke states that the legislative body must govern by fixed laws that apply equally to everyone, and that the laws it designs are to be made solely for the good of the people; lastly, the legislative body cannot increase taxes on property owners without the people's consent.

John Locke's and Thomas Hobbes' ideas about common-law governments help to explain, at least as a philosophical ideal, the evolution of man from the animal age to the enlightened 17th century in which they resided. While I believe the critical difference between their views is the amount of power they each placed in the idea of a sovereign power, they also held many other differing ideals, such as the state of nature in which people resided, and their ideas of how people living in the commonwealth should relinquish their rights. However, one crucial element of commonality should be noted between Locke and Hobbes. Even though many of their ideals differed, their end result was the same: the common good of the people. Though they both may differ on how this plan works, they are able to base, at the crux of each of their arguments, the essential need for reason in man's life, and how we as a race are able to better ourselves through the tools of reason and government.
Monday, October 21, 2019
recession in india
Recession in India

The global recession has brought an enormous amount of grief and anxiety to workers all over the world. It has severely affected the lifestyles and living conditions of people worldwide. Businesses closing down, great retrenchment and a staggering percentage of unemployment mirror how recession affects our modern world. People are preoccupied with which jobs won't be directly affected by recession and how to stay afloat amidst this time of ordeal. According to the latest employment projections from the United States Department of Labor, good tidings are on the horizon for job seekers. Here are the five stable jobs expected to experience an employment frenzy through 2018.

1. Accountants and Auditors
They provide vital services to companies and individuals who want to maintain solid financial footing by analyzing and communicating financial information, ensuring public records are kept, and preparing taxes.
Recession resistance: Accountants and auditors held 1.3 million jobs in 2008, and that number is expected to increase by 279,400 over the next decade into 2018.
Education: A bachelor's degree in accounting is the most widely sought-after qualification by employers. For upper-level positions, some employers might prefer a master's degree in accounting or business administration.
Average yearly salary: $65,840

2. Medical Assistants
Providing needed assistance in the offices of physicians, podiatrists, and chiropractors, medical assistants handle administrative, clinical, or other specialized tasks.
Recession resistance: The U.S. Department of Labor forecasts the number of medical assistants will grow 34 percent from 2008-2018. Reasons: medical advancements and an aging U.S. population.
Education: Medical assisting certificate and associate's degree programs provide academic and clinical training in various areas and can usually be completed in one to two years.
Average yearly salary: $29,060

3. Registered Nurses
RNs treat patients, give advice about medical conditions, instruct families on how to deal with health issues, and provide valuable emotional support.
Recession resistance: RNs are the largest health care occupation, with 2.6 million jobs, and that number is expected to increase by 22 percent through 2018. Reasons: increasingly complex medical treatments and the rising number of aging Americans needing long-term care.
Education: A bachelor's degree, an associate's degree, and a diploma from an approved nursing program are the three most common educational avenues to a career as an RN. You'll advance further and faster with a more advanced degree.
Average yearly salary: $65,130

4. Computer Software Engineers and Programmers
They make computers tick by creating, testing, and evaluating software applications and systems. Engineers might even design the latest hot-selling computer game or develop a new operating system.
Recession resistance: In 2008, computer software engineers and programmers held about 1.3 million jobs. That figure is expected to jump 21 percent by 2018. Reasons: concerns over information security and increased needs for new software.
Education: Bachelor's degrees in computer programming and applications, networking, or information systems are among the most sought after by employers. An associate's degree or certificate might suffice for others.
Average yearly salary: $73,470

5.
Management Analysts
Sometimes called management consultants, analysts serve private industry by evaluating and recommending ways to improve an organization's efficiency and productivity or to increase profits.
Recession resistance: Competition for management analyst jobs is fierce, but firms that might hire consultants specializing in environmental ("green") issues are expected to help the number of analyst jobs grow by 24 percent into the year 2018.
Education: Educational requirements in this field might vary for entry-level positions. A master's degree in business administration or a related field, such as e-business or e-commerce, is considered useful. However, because analysts handle a wide range of projects, a bachelor's degree in fields such as human resources, information technology, or marketing and sales could open doors.
Average yearly salary: $82,92

Latest Trend in Recruitment: Temporary Staffing in Indian Companies

The HR fraternity in India is undergoing sea changes with upcoming trends like e-recruitment, outsourcing of HR functions, and the like. Now the next big thing, temporary staffing, is gaining acceptance across industries. A few months back the job market was overflowing with people who were labeled as leftovers who could not find a permanent job for themselves. But that is passé now. Companies are recruiting employees on a temporary basis, mainly for a particular project, paying them off and then letting them go as soon as the project is over.

What is Temping?

Temping is the process of hiring temporary workers, or temps as they are called, for a shorter duration of time for a particular project; they remain in the company only as long as the project lasts. The temps work for one client company while being on the rolls of a third party. A temp is a contract worker who is hired for a short time, typically until a project ends. The contract ranges from a period of 2 months to 15 months. These temps are made available by employee leasing firms like TeamLease. Such companies provide a wide range of temporary staffing solutions, including temporary-to-permanent services, wherein the company hires an employee on a trial basis and absorbs him into the company on the basis of his performance, and long-term contracts, where temps are hired for a longer period of time which may last up to two years. Non-core functions like sales, front office, customer support, finance, back-end operations and administration demand more temps. The reason seems to be quite obvious: companies focus on their core functions to sustain the cut-throat competition, while they outsource their non-core functions. In India, almost 80 million people are working on a temporary basis; however, a meager 0.5 per cent of them are employed in the organized sector. Currently there are about 1,20,000 to 1,30,000 temps working with over 500 companies, including ICICI Lombard, Bharti, Reliance Infocomm, HP, Wipro BPO, Transworks and so on. If we go by sector, studies show that temps are predominant in the IT sector. However, other sectors like banking, FMCG, retail and consumer durables are also showing their interest in hiring temps. So how often do these temporary workers turn into permanent employees? Though earlier the chance of being absorbed by the company was almost negligible, the trend is gaining pace as the demand for a skilled workforce is increasing. The conversion rate has grown to 20 to 30 percent from four percent.
Why Temping?
Temping started off with MNCs hiring contract workers. It comes with a packet of benefits for organizations as well as for employees. Organizations enjoy the benefit of workforce flexibility, ease of recruitment and quick replacements. Temping also saves training costs, as leasing companies direct skilled and experienced workers to the companies. Moreover, non-productive employees can be let go without many complications. By outsourcing non-core functions, the company can focus deeply on its core functions. Companies also get more work done from temporary workers and escape paying them perks and incentives.

From the employee's point of view, temping helps one acquire different skills and upgrade basic skills by working in different setups. Employees acquire multiple skills to remain employable in competitive job markets. Temping even offers tempting career opportunities to housewives, retired personnel, people with defense backgrounds, freelancers and freshers. Temps who work for big brands can also highlight this in their resumes, giving them an advantage over others.

There are some flip sides to temping too. Job insecurity always acts as a demotivator for candidates. Temps hardly get any perks and incentives like permanent employees do. The chances of becoming permanent with the client company are also low, so the possibility of achieving a stable career is limited. Underperformers are always at risk, as they can be sacked at any time, and that too without notice. Moreover, too much job hopping acts as a red flag on one's resume.

Sustaining the Trend
Though job security is still essential for many in India, an increasing number of young people are opting for temporary jobs. The market for such jobs will grow exponentially in the coming years. Almost every sector, be it capital intensive or labor intensive, is showing keen interest in temps. Moreover, candidates who have a hunger for multiple skills are increasingly taking up these jobs. Permanent job assurance is now passé, as downsizing can happen at any time. Temping will prove to be a viable option in such cases. Industry watchers believe that this new HR trend is here to stay.

The Future of Temporary Staffing
Temporary staffing is expected to grow exponentially in the country in the near future. "It is the quality and ease of availability of manpower that would define the role employee leasing organisations stand to play, not only in non-core functions but also certain core business areas of organisations," points out Reddy, adding that it is imperative for outsourcing partners to move from "only" employee leasing to complete end-to-end "activity management." It is also necessary for outsourcing partners to be equipped with vertical and functional specialisations, with key differentiators customised to the Indian employment scenario. In a recruitment market where the concept of full-time employment is increasingly becoming a thing of the past, temporary staffing is emerging as the viable option.
Advantages of Temporary Staffing
- The opportunity for organisations to focus on core areas
- Flexibility of employment
- Ease of recruitment and replacement
- Long-term cost advantages
- Benefits of scale

Future of Recruitment
India Inc is likely to witness a 10-15 per cent increase in hiring in 2010-11, led by the telecom sector, which is forecast to provide a whopping one lakh-plus jobs, global consultancy Ernst & Young has said. The Indian job market seems to be striking the right chord with the country's working population, as more and more vacancies are being created and filled across sectors. "On a conservative stand, the percentage increase in hiring in the new fiscal can be between 10-15 per cent," Ernst & Young Partner and National Head (People & Organisation) N S Rajan told PTI. The telecom growth story would continue in the fiscal, and hiring activity in this sector is likely to be in excess of 1,00,000 jobs, Rajan said. Other sectors likely to lead hiring in the new fiscal include pharmaceuticals, FMCG and education, as they are facing a talent crunch at present.

Ernst & Young, however, believes that despite the euphoria over the rising number of jobs, companies are likely to approach hiring with caution due to the hard lessons learnt in the past. Although most companies are doing away with the hiring freezes imposed during the economic downturn, they are likely to hire strategically and look at long-term talent needs rather than near-term staffing requirements. Moreover, Ernst & Young believes that while hiring would continue mostly to meet the replacement demand created as a result of the erstwhile hiring freeze, there are likely to be mixed trends in the level of hiring activity across sectors. Though hiring has picked up in the economy across sectors like pharmaceutical, chemical, auto, insurance, education, retail and IT, it is unlikely that the bullish hiring trends of 2007 will be restored within the next one year, Rajan said. In sectors like auto/auto-components, banking, financial services and insurance (BFSI), and real estate, hiring is on the rise primarily to fill vacancies resulting from significant downsizing in the past and to meet future expansion plans. Interestingly, most companies are expecting higher attrition levels over the next few months on account of jobs coming back into the economy, resulting in increments being used as a tool to retain talent.

How the IT Industry Suffered Due to Recession
The final tally of jobs lost due to recession in the US is out. Computerworld has reported that the US tech industry lost 250,000 jobs last year, nearly 4% of its total workforce. Tech manufacturing was worst hit and lost 8.1%, or 112,600 jobs. Software services, which was least hit, lost 1.2%, or 21,000 jobs. Overall, technology did better than other sectors of the US economy, which registered an overall unemployment rate of 9.3% last year. The report says hiring is back in the US with the improving economy. California, Texas, New York, Florida and Virginia are the top five states for finding jobs in the US. Though the Indian IT industry also saw significant layoffs, there is no convincing data on the number of jobs lost due to recession. Most Indian firms, including the big players, chose to fire their employees stealthily on performance grounds.