Intel Science Talent Search

Intel Science Talent Search (Intel STS) is a research-based science competition in the United States, primarily for high school students. It has been referred to as "the nation's oldest and most prestigious"[1] science competition, and the Westinghouse/Intel awards have been referred to as the "Baby Nobels."[2] In his speech at the dinner honoring the 1991 winners, President George H. W. Bush called the competition the "Super Bowl of science."[3]

The Intel STS is administered by the Society for Science & the Public, which began the competition in 1942 with Westinghouse; for many years, the competition was known as the "Westinghouse Science Talent Search." In 1998, Intel became the sponsor after it outbid Siemens, which had acquired Westinghouse's power generation unit. (Siemens subsequently sponsored its own competition.) Over the years, more than $3.8 million in scholarships has been awarded through the program.

Nearly all of the entrants work with mentors, as high school students typically lack the means to carry out research projects entirely on their own. The mentors are usually professional researchers,[4][citation needed] and the entrants' work is ordinarily performed over two years in those researchers' laboratories.[citation needed] However, the research papers must be entirely the entrants' own writing, and the teenage finalists' papers are regarded as "college-level, professional quality."[5][citation needed] The selection process is highly competitive; besides the research paper, letters of recommendation, essays, test scores, extracurricular activities, and high school transcripts may be factored into the selection of finalists and winners.

Each year, approximately 1,600 papers are submitted. The top 300 applicants are announced in mid-January, with each semifinalist and their school receiving $1,000. In late January, the 40 finalists (the scholarship winners) are informed. In March, the finalists are flown to Washington, D.C., where they are interviewed for the top ten spots, whose scholarships range from $20,000 up to $100,000 for the first-place winner. By tradition, at least one of the interviewers is a Nobel Laureate, and the interviewers have included Glenn T. Seaborg (Nobel Laureate with Edwin M. McMillan in Chemistry, 1951) and Joseph Taylor (Nobel Laureate in Physics, 1993).[citation needed] In addition, all finalists receive $5,000 scholarships and an Intel-based computer.

Some Intel STS finalists and winners have gone on to receive higher honors in mathematics, science, and technology: among them, six have received Nobel Prizes[citation needed]; two have earned the Fields Medal; three have been awarded the National Medal of Science; ten have won the MacArthur Fellowship; 56 have been named Sloan Research Fellows; 30 have been elected to the National Academy of Sciences; and five have been elected to the National Academy of Engineering.

Intel, x86 processors, and the IBM PC

Despite the ultimate importance of the microprocessor, the 4004 and its successors, the 8008 and the 8080, were never major revenue contributors at Intel. When the next processor, the 8086 (and its variant, the 8088), was completed in 1978, Intel embarked on a major marketing and sales campaign, nicknamed "Operation Crush", intended to win as many customers for the chip as possible. One design win was the newly created IBM PC division, though the importance of this was not fully realized at the time.

IBM introduced its personal computer in 1981, and it was rapidly successful. In 1982, Intel created the 80286 microprocessor, which, two years later, was used in the IBM PC/AT. Compaq, the first IBM PC "clone" manufacturer, produced a desktop system based on the faster 80286 processor in 1985, and in 1986 quickly followed with the first 80386-based system, beating IBM to market, establishing a competitive market for PC-compatible systems, and setting up Intel as a key component supplier.

In 1975, the company had started a project to develop a highly advanced 32-bit microprocessor, finally released in 1981 as the Intel iAPX 432. The project was too ambitious: the processor never met its performance objectives, and it failed in the marketplace. Intel extended the x86 architecture to 32 bits instead.

Intel GMA

The Intel Graphics Media Accelerator, or GMA, is Intel's current line of integrated graphics processors built into various motherboard chipsets.

These integrated graphics products allow a computer to be built without a separate graphics card, which can reduce cost, power consumption, and noise. They are commonly found on low-priced notebook and desktop computers, as well as business computers that do not need high levels of graphics capability; integrated graphics are found in 90% of all PCs sold.[1] They rely on the computer's main memory for storage, which imposes a performance penalty, as both the CPU and GPU must access memory over the same bus.

The GMA line of GPUs replaces the earlier "Intel Extreme Graphics" and the Intel740 line, the latter of which was a discrete unit sold in the form of AGP and PCI cards. Later, Intel integrated the i740 core into the Intel 810 chipset.

The original architecture of GMA systems supported only a few functions in hardware and relied on the host CPU to handle at least some of the graphics pipeline, further decreasing performance. However, with the introduction of Intel's 4th generation of GMA architecture (GMA X3000) in 2006, many of these functions are built into the hardware, providing an increase in performance. The 4th generation of GMA combines fixed-function capabilities with a threaded array of programmable execution units, benefiting both graphics and video performance. Many of the advantages of the new architecture come from the ability to switch flexibly between executing graphics-related and video-related tasks as needed. While GMA performance has been widely criticized in the past as being too slow for computer games, the latest GMA generation should ease many of those concerns for the casual gamer.

Despite similarities, Intel's main series of GMA IGPs is not based on the PowerVR technology Intel licensed from Imagination Technologies. Intel used the low-power PowerVR MBX designs in chipsets supporting their XScale platform, and since the sale of XScale in 2006 has licensed the PowerVR SGX and used it in the GMA 500 IGP for use with their Atom platform.

Intel has begun working on a new series of discrete (non-integrated) graphics hardware products, under the codename Larrabee.


Network classification

Connection method:

Computer networks can also be classified according to the hardware and software technology used to interconnect the individual devices in the network, such as optical fiber, Ethernet, wireless LAN, HomePNA, power line communication, or G.hn. Ethernet uses physical wiring to connect devices. Frequently deployed devices include hubs, switches, bridges, and routers.

Wireless LAN technology is designed to connect devices without wiring. These devices use radio waves or infrared signals as a transmission medium.

ITU-T G.hn technology uses existing home wiring (coaxial cable, phone lines and power lines) to create a high-speed (up to 1 Gigabit/s) local area network.
Wired Technologies:

Twisted-Pair Wire – This is the most widely used medium for telecommunication. Twisted-pair wires are ordinary telephone wires, consisting of two insulated copper wires twisted into pairs, and are used for both voice and data transmission. Twisting the two wires together helps to reduce crosstalk and electromagnetic induction. Transmission speeds range from 2 million to 100 million bits per second.

Coaxial Cable – These cables are widely used for cable television systems, office buildings, and other worksites for local area networks. The cables consist of copper or aluminum wire wrapped with an insulating layer, typically of a flexible material with a high dielectric constant, all of which is surrounded by a conductive layer. The layers of insulation help minimize interference and distortion. Transmission speeds range from 200 million to more than 500 million bits per second.

Fiber Optics – These cables consist of one or more thin filaments of glass fiber wrapped in a protective layer. They transmit light, which can travel over long distances at higher bandwidths. Fiber-optic cables are not affected by electromagnetic radiation, and transmission speeds can be as high as trillions of bits per second. The speed of fiber optics is hundreds of times faster than coaxial cable and thousands of times faster than twisted-pair wire; a worked comparison of these speeds follows below.
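As a rough illustration of the speed ranges quoted above, the following sketch compares how long a 1 GB file would take to transfer over each medium at its upper-end rate. The figures come from this section; the script itself is only an illustrative calculation and does not model real protocol or media overhead.

```python
# Illustrative transfer-time comparison using the upper-end speeds
# quoted in this section (real links add protocol and media overhead).
media_bits_per_second = {
    "twisted-pair wire": 100e6,  # up to 100 million bits per second
    "coaxial cable": 500e6,      # more than 500 million bits per second
    "fiber optics": 1e12,        # up to trillions of bits per second
}

file_size_bits = 8e9  # a 1 GB file is 8 billion bits

for medium, bps in media_bits_per_second.items():
    seconds = file_size_bits / bps
    print(f"{medium}: {seconds:.3f} seconds per gigabyte")
```

At these rates, the same file that takes 80 seconds over twisted pair takes 16 seconds over coaxial cable and under a hundredth of a second over fiber.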

Motherboard

A motherboard is the central printed circuit board (PCB) in many modern computers, and holds many of the crucial components of the system, while providing connectors for other peripherals. The motherboard is sometimes alternatively known as the main board, system board, or, on Apple computers, the logic board.[1] It is also sometimes casually shortened to mobo.

Prior to the advent of the microprocessor, a computer was usually built in a card-cage case or mainframe with components connected by a backplane consisting of a set of slots themselves connected with wires; in very old designs the wires were discrete connections between card connector pins, but printed circuit boards soon became standard practice. The central processing unit, memory, and peripherals were housed on individual printed circuit boards which plugged into the backplane.

During the late 1980s and 1990s, it became economical to move an increasing number of peripheral functions onto the motherboard (see below). In the late 1980s, motherboards began to include single ICs (called Super I/O chips) capable of supporting a set of low-speed peripherals: keyboard, mouse, floppy disk drive, serial ports, and parallel ports. As of the late 1990s, many personal computer motherboards supported a full range of audio, video, storage, and networking functions without the need for any expansion cards at all; higher-end systems for 3D gaming and computer graphics typically retained only the graphics card as a separate component.

The early pioneers of motherboard manufacturing were Micronics, Mylex, AMI, DTK, Hauppauge, Orchid Technology, Elitegroup, DFI, and a number of Taiwan-based manufacturers.

Popular personal computers such as the Apple II and IBM PC had published schematic diagrams and other documentation, which permitted rapid reverse engineering and third-party replacement motherboards. Although usually intended for building new computers compatible with those machines, many motherboards offered additional performance or other features and were used to upgrade the manufacturer's original equipment.

The term mainboard is archaically applied to devices with a single board and no additional expansions or capability; in modern terms this would include embedded systems and the controlling boards in televisions, washing machines, and similar appliances. A motherboard specifically refers to a printed circuit board whose capabilities can be extended with the addition of "daughterboards".


History of computing hardware

The history of computing hardware is the record of the constant drive to make computer hardware faster, cheaper, and capable of storing more data.

Before the development of the general-purpose computer, most calculations were done by humans. Tools to help humans calculate are generally called calculators. Calculators continue to develop, but computers add the critical element of conditional response, allowing the automation of numerical calculation and, more generally, of many symbol-manipulation tasks. Computer technology has undergone profound changes every decade since the 1940s.

Computing hardware has become a platform for uses other than computation, such as automation, communication, control, entertainment, and education. Each field in turn has imposed its own requirements on the hardware, which has evolved in response to those requirements.

Aside from written numerals, the first aids to computation were purely mechanical devices that required the operator to set up the initial values of an elementary arithmetic operation, then propel the device through manual manipulations to obtain the result. An example would be a slide rule where numbers are represented by points on a logarithmic scale and computation is performed by setting a cursor and aligning sliding scales. Numbers could be represented in a continuous "analog" form, where a length or other physical property was proportional to the number. Or, numbers could be represented in the form of digits, automatically manipulated by a mechanism. Although this approach required more complex mechanisms, it made for greater precision of results.
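As a concrete example of the analog approach, a slide rule multiplies by adding lengths on logarithmic scales, since log a + log b = log(ab). The short Python sketch below mimics that principle; the function name is purely illustrative.

```python
import math

def slide_rule_multiply(a: float, b: float) -> float:
    """Mimic a slide rule: each factor becomes a length on a logarithmic
    scale, the lengths are added (sliding one scale along the other),
    and the product is read back off the combined length."""
    combined_length = math.log10(a) + math.log10(b)
    return 10 ** combined_length

print(slide_rule_multiply(2.0, 3.0))  # ~6.0, to floating-point precision
```

A physical slide rule performs the same addition mechanically, which is why its precision is limited by how finely the scales can be set and read.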

Both analog and digital mechanical techniques continued to be developed, producing many practical computing machines. Electrical methods rapidly improved the speed and precision of calculating machines, at first by providing motive power for mechanical calculating devices, and later directly as the medium for representation of numbers. Numbers could be represented by voltages or currents and manipulated by linear electronic amplifiers. Or, numbers could be represented as discrete binary or decimal digits, and electrically controlled switches and combinatorial circuits could perform mathematical operations.

The invention of electronic amplifiers made calculating machines much faster than their mechanical or electromechanical predecessors. Vacuum tube amplifiers gave way to discrete transistors, and then rapidly to monolithic integrated circuits. By defeating the "tyranny of numbers", integrated circuits made high-speed, low-cost digital computers a widespread commodity.

This article covers major developments in the history of computing hardware, and attempts to put them in context. For a detailed timeline of events, see the computing timeline article. The history of computing article treats methods intended for pen and paper, with or without the aid of tables. Since all computers rely on digital storage, and tend to be limited by the size and speed of memory, the history of computer data storage is tied to the development of computers.

Intel 80386

The Intel 80386, also known as the i386, or just 386,[1] was a 32-bit microprocessor introduced by Intel in 1985. The first versions had 275,000 transistors and were used as the central processing unit (CPU) of many personal computers and workstations. As the original implementation of the 32-bit extensions to the 8086 architecture, the 80386 instruction set, programming model, and binary encodings are still the common denominator for all 32-bit x86 processors. This is termed the x86, IA-32, or i386 architecture, depending on context.

The 80386 could correctly execute most code intended for earlier 16-bit x86 processors such as the 80286; following the same tradition, modern 64-bit x86 processors are able to run most programs written for older chips, all the way back to the original 16-bit 8086 of 1978. Over the years, successively newer implementations of the same architecture have become several hundreds of times faster than the original 80386 (and thousands of times faster than the 8086). A 33 MHz 80386 was reportedly measured to operate at about 11.4 MIPS.[2]
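As a back-of-the-envelope check on that figure, the quoted clock rate and MIPS rating together imply an average of roughly three clock cycles per instruction; the small Python sketch below works through the arithmetic (illustrative only).

```python
# Average cycles per instruction implied by the figures quoted above:
# a 33 MHz clock and a measured throughput of about 11.4 MIPS.
clock_hz = 33e6                   # 33 MHz clock
instructions_per_second = 11.4e6  # 11.4 million instructions per second

cycles_per_instruction = clock_hz / instructions_per_second
print(f"about {cycles_per_instruction:.1f} cycles per instruction")  # ~2.9
```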

The 80386 was launched in October 1985, and full-function chips were first delivered in 1986.[vague] Mainboards for 80386-based computer systems were at first expensive to buy, but prices came down as the 80386 reached mainstream adoption. The first personal computer to make use of the 80386 was designed and manufactured by Compaq.[3]

In May 2006, Intel announced that production of the 80386 would cease at the end of September 2007.[4] Although it has long been obsolete as a personal computer CPU, Intel and others had continued to manufacture the chip for embedded systems. Embedded systems that use an 80386 or one of its derivatives are widely used in aerospace technology.