BIL106E Introduction to Scientific & Engineering Computing Hüseyin TOROS, Ph.D. Istanbul Technical University Faculty of Aeronautics and Astronautics Dept.

1 BIL106E Introduction to Scientific & Engineering Computing
Hüseyin TOROS, Ph.D., Istanbul Technical University, Faculty of Aeronautics and Astronautics, Dept. of Meteorological Engineering
Useful pages: FOLDOC (Free On-Line Dictionary of Computing). F compiler: read this first; full installer (3.4 MB): win95nt.exe. For more information, see the course syllabus (syllabus_F).
07/26/09 Introduction to Scientific & Engineering Computing 1

2 History of Computers
The first computers were people! That is, electronic computers (and the earlier mechanical computers) were given this name because they performed the work that had previously been assigned to people. "Computer" was originally a job title: it was used to describe those human beings (predominantly women) whose job it was to perform the repetitive calculations required to compute such things as navigational tables, tide charts, and planetary positions for astronomical almanacs. Imagine you had a job where hour after hour, day after day, you were to do nothing but compute multiplications. Boredom would quickly set in, leading to carelessness, leading to mistakes. And even on your best days you wouldn't be producing answers very fast. Therefore, inventors have been searching for hundreds of years for a way to mechanize (that is, find a mechanism that can perform) this task.

3 The abacus was an early aid for mathematical computations. Its only value is that it aids the memory of the human performing the calculation. A skilled abacus operator can work on addition and subtraction problems at the speed of a person equipped with a hand calculator (multiplication and division are slower). The abacus is often wrongly attributed to China. In fact, the oldest surviving abacus was used in 300 B.C. by the Babylonians.

4 ELECTRONIC NUMERICAL INTEGRATOR AND COMPUTER
The first large-scale electronic digital computer, designed and constructed at the Moore School of Electrical Engineering of the University of Pennsylvania. Since the 1920s, the faculty had worked with Aberdeen Proving Ground's Ballistics Research Laboratory (BRL).

5 INSPIRATION AND PERSPIRATION UNITE
1943: Mauchly and Eckert prepare a proposal for the US Army to build an Electronic Numerical Integrator that could calculate a trajectory in 1 second.
May 31, 1943: Construction of ENIAC starts.
1944: Early thoughts on stored-program computers by members of the ENIAC team.
July 1944: Two accumulators working.

6 ACCUMULATOR (28 VACUUM TUBES)

7 ENIAC AT MOORE SCHOOL, UNIVERSITY OF PENNSYLVANIA


9 EARLY THOUGHTS ABOUT STORED-PROGRAM COMPUTING
January 1944: The Moore School team thinks of better ways to do things; leverages delay-line memories from war research.
September 1944: John von Neumann visits, after Goldstine's meeting at the Aberdeen train station.
October 1944: The Army extends the ENIAC contract to include research on the EDVAC and the stored-program concept.
Spring 1945: ENIAC working well.
June 1945: First Draft of a Report on the EDVAC (Electronic Discrete Variable Automatic Computer).


11 Freddy Williams and Tom Kilburn developed an electrostatic memory. Prototype operational June 21, 1948: the first machine to execute a stored program. Memory: 32 words of 32 bits each. Storage: a single Williams tube (CRT). Fully operational: October 1949. Ferranti Mark I delivered in February 1951.

12 EDSAC
Maurice Wilkes, University Mathematical Laboratory, Cambridge University; attended the Moore School Lectures. The Electronic Delay Storage Automatic Calculator (EDSAC) was operational in May 1949. The J. Lyons Company and the LEO (Lyons Electronic Office), operational fall 1951.


14 NATIONAL PHYSICAL LABORATORY
Alan Turing: Automatic Computing Engine (ACE). Basic design by spring 1946. Harry Huskey joins the project. Pilot ACE working May 10, 1950. English Electric: DEUCE, 1954. Full version of ACE at NPL, 1958.


16 MAINFRAME COMPUTERS

17 REMINGTON RAND UNIVAC
43 UNIVACs were delivered to government and industry. Memory: mercury delay lines, 1000 words of 12 alphanumeric characters. Secondary storage: metal oxide tape. Access time: 222 microseconds (average). Instruction set: 45 operation codes. Accumulators: 4. Clock: 2.25 MHz.

18 1951 UNIVAC. Typical 1968 prices, excluding maintenance & support!

19 IBM 701 (DEFENSE CALCULATOR)
Addition time: 60 microseconds. Multiplication: 456 microseconds. Memory: 2048 (36-bit) words using Williams tubes. Secondary memory: magnetic drum, 8192 words; magnetic tape, plastic. Delivered December 1952 to IBM World Headquarters (total of 19 installed).

20 SECOND GENERATION ( )
1958: Philco introduces the TRANSAC S-2000, the first transistorized commercial machine. IBM 7070, 7074 (1960), 7072 (1961). 1959: IBM 7090, 7040 (1961), 7094 (1962). 1959: IBM 1401, 1410 (1960), 1440 (1962). FORTRAN, ALGOL, and COBOL are the first standardized programming languages.

21 THIRD GENERATION ( )
April 1964: IBM announces the System/360; solid logic technology (integrated circuits); a family of "compatible" computers. 1964: Control Data delivers the CDC 6600. Nanoseconds. Telecommunications. BASIC: Beginners All-purpose Symbolic Instruction Code.

22 FOURTH GENERATION (1971- )
Large-scale integrated circuits (MSI, LSI). Nanoseconds and picoseconds. Databases (large). Structured languages (Pascal). Structured techniques. Business packages.




26 PDP-11 (1970)

27 MICROCOMPUTERS: INTEL
Noyce, Moore, and Andrew Grove leave Fairchild and found Intel in 1968 to focus on random-access memory (RAM) chips. Question: if you can put transistors, capacitors, etc. on a chip, why couldn't you put a central processor on a chip? Ted Hoff designs the Intel 4004, the first microprocessor, in 1969, based on Digital's PDP-8.

28 Ed Roberts founds Micro Instrumentation Telemetry Systems (MITS) in 1968. Popular Electronics puts the MITS Altair on the cover in January 1975 [Intel 8080]. Les Solomon's 12-year-old daughter, Lauren, was a lover of Star Trek. He asked her what the name of the computer on the Enterprise was. She said, "'Computer', but why don't you call it Altair, because that is where they are going tonight!"


30 INTEL PROCESSORS
[Table of Intel microprocessors, from the 4-bit 4004 through the Itanium, with columns for year, clock speed, word length, transistor count, and MIPS. Last row: Intel Itanium Processor, 2000, 1 GHz, 64-bit, 15,000,000 transistors, 1,200 MIPS.]

31 Computer Processing Speed
Computer processing speed depends on a variety of factors. Three of the most important are:
Word length (the number of bits that can be processed at one time by the microprocessor)
Cycle speed (how fast individual events are processed, measured in megahertz)
Data bus width (determines how much data can be transferred between the CPU and memory)
Other factors include:
RAM (amount of available random access memory)
Disk access speed (speed at which data can be read from the hard disk)
Code efficiency (how efficiently the computer code has been designed)

32 What is a computer? Input, processing, output, storage. The computer is an automatic device that performs calculations, makes decisions, and has the capacity for storing and instantly recalling vast amounts of information.

33 What is a computer system?
Computer system: a collection of related components that are designed to work together. A system includes hardware and software.

34 What is a computer? Hardware: processor, memory, I/O units (input/output units).

35 How does a computer work? It executes very simple instructions. It executes them incredibly fast. It must be programmed: it is the software, i.e., the programs, that characterize what a computer actually does.

36 Computer Structure
Major components of a computing system: input devices, output devices, CPU (Central Processing Unit: control unit + arithmetic-logic unit), main memory, external memory.

37 Computer Structure
Registers are a set of special high-speed memory locations within the CPU. Access speed within a register is thousands of times faster than access speed in RAM.
MEMORY MEASUREMENT
The memory unit of a computer consists of two-state devices, so it is natural to use a binary scheme (using only the two binary digits {bits}, 0 and 1, to represent information in a computer). 1 byte = 8 bits. Memory is commonly measured in bytes, and a block of 2^10 = 1024 bytes = 1 K. 1 MB = 1024 K = 2^20 = 1,048,576 bytes, or 2^23 = 8,388,608 bits.
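These powers of two can be checked with a short program; the following is an illustrative sketch (the program name and variable names are mine, not from the slides):

```fortran
! Check the memory-unit arithmetic from the slide:
! 1 K = 2**10 bytes, 1 MB = 2**20 bytes = 2**23 bits.
program memory_units
  implicit none
  integer :: kb_bytes, mb_bytes, mb_bits

  kb_bytes = 2**10          ! 1024
  mb_bytes = 2**20          ! 1,048,576
  mb_bits  = 8 * mb_bytes   ! 2**23 = 8,388,608

  print *, "1 K  =", kb_bytes, "bytes"
  print *, "1 MB =", mb_bytes, "bytes =", mb_bits, "bits"
end program memory_units
```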

38 What is a computer program?
The computer program characterizes what a computer actually does. A program (independently of the language in which it is written) is made up of two fundamental parts:
A representation of the information (data) relative to the domain of interest.
A description of how to manipulate that representation so as to realize the desired functionality: operations.
To write a program, both aspects have to be addressed.

39 Program
A list of instructions that are grouped together to accomplish a task or tasks. The instructions, called machine code or assembly code, consist of things like reading and writing memory, arithmetic operations, and comparisons.

40 Program
Every program must be translated into a machine language that the computer can understand. This translation is performed by compilers, interpreters, and assemblers. When you buy software, you normally buy an executable version of a program. This means that the program is already in machine language: it has already been compiled and assembled and is ready to execute.


42 Program
While easily understood by computers, machine languages are almost impossible for humans to use because they consist entirely of numbers. Programmers, therefore, use either a high-level programming language or an assembly language. An assembly language contains the same instructions as a machine language, but the instructions and variables have names instead of being just numbers.
Programs written in high-level languages are translated into assembly language or machine language by a compiler. Assembly language programs are translated into machine language by a program called an assembler.
Every CPU has its own unique machine language. Programs must be rewritten or recompiled, therefore, to run on different types of computers.

43 Compiler
A program that translates source code into object code. The compiler derives its name from the way it works: it looks at the entire piece of source code and collects and reorganizes the instructions. Thus, a compiler differs from an interpreter, which analyzes and executes each line of source code in succession, without looking at the entire program. The advantage of interpreters is that they can execute a program immediately; compilers require some time before an executable program emerges. However, programs produced by compilers run much faster than the same programs executed by an interpreter.

44 Compiler
Every high-level programming language (except strictly interpretive languages) comes with a compiler. In effect, the compiler is the language, because it defines which instructions are acceptable. Because compilers translate source code into object code, which is unique for each type of computer, many compilers are available for the same language. For example, there is a FORTRAN compiler for PCs and another for Apple Macintosh computers. In addition, the compiler industry is quite competitive, so there are actually many compilers for each language on each type of computer. More than a dozen companies develop and sell C compilers for the PC.
Steps of execution of a Fortran program: Source program (high-level language) → Compiler → Object program (machine language). Compilation errors are reported by the compiler; run-time errors appear only when the program executes.
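To make the two error classes concrete, here is an illustrative sketch of my own (not from the slides): the program below passes compilation, yet fails only when it runs.

```fortran
! This program contains no compile-time errors: the syntax is valid,
! so the compiler produces an object program from it. A statement
! referring to an undeclared name would instead be rejected at
! compile time (thanks to implicit none), before any executable exists.
program runtime_error_demo
  implicit none
  integer :: n
  n = 0
  ! A run-time error: division by zero cannot, in general, be detected
  ! by the compiler, because the value of n is only known at run time.
  print *, 10 / n
end program runtime_error_demo
```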

45 Interpreter
A program that executes instructions written in a high-level language. There are two ways to run programs written in a high-level language: the most common is to compile the program; the other method is to pass the program through an interpreter.
An interpreter translates high-level instructions into an intermediate form, which it then executes. In contrast, a compiler translates high-level instructions directly into machine language. Compiled programs generally run faster than interpreted programs. The advantage of an interpreter, however, is that it does not need to go through the compilation stage, during which machine instructions are generated; this process can be time-consuming if the program is long. The interpreter can instead execute high-level programs immediately. For this reason, interpreters are sometimes used during the development of a program, when a programmer wants to add small sections at a time and test them quickly. In addition, interpreters are often used in education because they allow students to program interactively.
Both interpreters and compilers are available for most high-level languages. However, BASIC and LISP are especially designed to be executed by an interpreter. In addition, page description languages, such as PostScript, use an interpreter. Every PostScript printer, for example, has a built-in interpreter that executes PostScript instructions.

46 Assembly Language
A programming language that is once removed from a computer's machine language. Machine languages consist entirely of numbers and are almost impossible for humans to read and write. Assembly languages have the same structure and set of commands as machine languages, but they enable a programmer to use names instead of numbers. Each type of CPU has its own machine language and assembly language, so an assembly language program written for one type of CPU won't run on another. In the early days of programming, all programs were written in assembly language. Now, most programs are written in a high-level language such as FORTRAN or C. Programmers still use assembly language when speed is essential or when they need to perform an operation that isn't possible in a high-level language.

47 Why do we use a programming language?
The main reason for learning a programming language is to use the computer to solve scientific and engineering problems.

48 Taught in three versions: Fortran (F), C, Matlab.

49 Programming language
Basic skills for scientific/engineering problem solving using computers:
Data structures and algorithms
Programming skills in a (standard) language
Skills for integrating the computing chain: Analyze → Program → Run → Visualize

50 Engineering simulation of natural/artificial systems
Build a conceptual, quantitative model (most of the time, write down the appropriate equations).
Formulate a solution to these equations using numerical methods: data structures + algorithms.
Program these data structures and algorithms in a language.
Run the program and analyze its output using visualization techniques.

51 Scientific & Engineering problems

52 Solving a problem: Algorithms and Flowcharts
Flowcharts are often used to graphically represent algorithms. In mathematics, computing, linguistics, and related subjects, an algorithm is a finite sequence of instructions: an explicit, step-by-step procedure for solving a problem, often used for calculation and data processing. It is formally a type of effective method in which a list of well-defined instructions for completing a task will, when given an initial state, proceed through a well-defined series of successive states, eventually terminating in an end-state. The transition from one state to the next is not necessarily deterministic; some algorithms, known as probabilistic algorithms, incorporate randomness.


54 Write out the problem statement. Include information on what you are to solve, and consider why you need to solve the problem.

55 A simple flowchart representing a process for dealing with a broken lamp.

56 A simple flowchart for computing factorial N (N!)

57 The Poplar and the Pumpkin
A pumpkin sprout appeared beside a great poplar tree. As spring progressed, the plant began to climb, wrapping itself around the poplar. With the rain and the sun it grew at a tremendous pace and soon reached almost the same height as the poplar. One day it could not help asking the poplar:
- How many months did it take you to get this tall, tree?
- Ten years, said the poplar.
- Ten years? laughed the pumpkin, shaking its flowers. Look, I have reached your height in barely two months!
- True, said the poplar.
Days followed days, and when the first winds of autumn began, the pumpkin started to feel the cold, then to drop its leaves, and, as the cold deepened, to sink downward. Anxiously it asked the poplar:
- What is happening to me, tree?
- You are dying, said the poplar.
- Why?
- Because you tried to reach in two months the place it took me ten years to reach.
A point reached without work and effort does not count as success. What is easily won is easily lost. Every undertaking demands sweat and labor.



60 ! Compute N! by repeated multiplication (the flowchart of slide 56)
program check
  implicit none
  integer :: M, N, F
  print *, "Please enter a whole number N (N>1)"
  read *, N
  M = 1
  F = 1
  do
    if (M == N) then
      print *, "M=", M, " N=", N, " F=", F
      exit
    else
      M = M + 1
      F = F * M
      cycle
    end if
  end do
end program check


62 Programming and Problem Solving
The program-development process consists of at least five steps:
1) Problem analysis and specification. The first stage in solving the problem is to analyze the problem and formulate a precise specification of it.
2) Data organization and algorithm design. Determine how to organize and store the data in the problem. Develop procedures to process the data and produce the required output; these procedures are called algorithms.
3) Program coding. Coding is the process of implementing data objects and algorithms in some programming language. A simple program begins with a PROGRAM statement and ends with an END PROGRAM statement.
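As a minimal sketch of that structure (the program name hello and the message are illustrative, not from the slides):

```fortran
! The smallest useful Fortran program: a PROGRAM statement,
! one executable statement, and an END PROGRAM statement.
program hello
  implicit none
  print *, "Hello from BIL106E"
end program hello
```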

63 Programming and Problem Solving
4) Execution and testing. This is the step that checks that the algorithm and program are correct: compile (produce an object file) {compile-time errors} + run {run-time errors}. IMPORTANT: logic errors that arise in the design of the algorithm or in the coding of the program are very hard to find.
5) Program maintenance. In real-world applications, programs need to be modified to improve their performance.

64 Some things grow smaller as they are shared; KNOWLEDGE and LOVE grow larger as they are shared.

65 Algorithms and Flowcharts
The word "algorithm" comes from Ebu Abdullah Muhammed bin Musa el Harezmi (al-Khwarizmi), a Persian scholar. In the 9th century, al-Khwarizmi made an enormous contribution to mathematics by putting his work on algebra into a book. This book, "Hisab el-cebir ve el-mukabala", has the distinction of being the world's first book of algebra and, at the same time, the first book of algorithms.
An algorithm is the process of solving a problem step by step, within definite rules and logic, and writing the solution down. Algorithms must be expressed in finite and precise terms; otherwise, if algorithms are written with unbounded and/or imprecise expressions, unwanted errors such as infinite loops and deadlocks will arise.
diyagramlari/ Author: Mehmet PEKGENÇ

66 Why Is an Algorithm Necessary?
Algorithms have in fact always been part of our lives. Most people carry out some of their daily tasks by way of an algorithm without being aware of it. For example, under normal conditions a certain sequence is followed in tasks such as brewing tea, cooking, going to work, or driving a car. In short, every task we undertake comes with a set of rules about what must and/or must not be done. That is why all programmers try to build programs without skipping these rules and without errors; to succeed, they must set aside a few hours or days to work out these steps. If a program is written without first constructing an algorithm, it is nearly impossible to find an error afterwards among the hundreds and/or thousands of lines of code. But as long as we have a definite sequence of steps at hand, our program will be sound no matter which programming language we write it in. As far as I have seen, many computer programming students in particular ignore this point; they believe that the more programming languages we know, the better programs we can write. It should not be forgotten that if we cannot produce a mathematical or logical solution to a problem, knowing a programming language means nothing. Besides algorithms, programmers also use flowcharts.

67 Why Is a Flowchart Necessary?
Flowcharts are used instead of text so that the algorithm can be understood more easily. Another important benefit is that every algorithm then speaks a language everyone understands; otherwise, an algorithm will not be intelligible to someone who does not know the language (Turkish, English, etc.) in which it was written. For this reason, all algorithms need to be made independent of the language they were written in by using internationally standard symbols, and this is why flowcharts entered our lives. To make the point clearer: writing DUR (STOP), BEKLE (WAIT), or GEÇ (GO) on traffic lights instead of colors and/or shapes would be meaningful to us but would mean nothing to foreigners. Using color codes such as RED and GREEN instead of text has made these signals intelligible all over the world. Likewise, symbolizing your algorithms with flowcharts will increase the international intelligibility of your project.

68 Properties of an Algorithm
Effectiveness: One property we must pay attention to when building an algorithm is effectiveness. To achieve it, the algorithm should contain no repeated operations; it should also, where needed, be usable inside other algorithms.
Definiteness: The values used must always be precise expressions.
Finiteness: Every algorithm has an end. No matter how many operations or loops it contains, the algorithm must terminate at an appropriate step.
Input/Output: Every algorithm must have input and output data. Care must be taken that the output data are correct, because the output of one algorithm may be used as the input of another.
Performance: Performance is determined by factors such as how many times the algorithm repeats, the operations it performs, and its running time. If the algorithm being built has performance requirements, these properties must be watched; if such problems exist, the algorithm should be re-examined and corrected.

69 An Example Algorithm...
As an example for those just starting out, let us build an algorithm and a flowchart that compute the arithmetic mean of the integers from 1 to 100. First we need the sum of the numbers from 1 to 100; dividing that sum by 100 then gives the mean. I can almost hear you asking what the formula n*(n+1)/2 is for. :) Of course we could use Gauss's method to find the sum, but since what we are trying to show is how to build an algorithm, we take the long way. In the light of all this, our algorithm is as follows:
1. Start
2. Sayac = 1 : Toplam = 0 : Aritmetik_Orta = 0
3. If Sayac > 100, go to 6
4. Toplam = Toplam + Sayac
5. Sayac = Sayac + 1 : go to 3
6. Aritmetik_Orta = Toplam / 100
7. Write the value of Aritmetik_Orta
8. Stop
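The same steps can be written directly in Fortran; this is my own sketch of the slide's algorithm, keeping its variable names (Sayac = counter, Toplam = sum, Aritmetik_Orta = arithmetic mean):

```fortran
! Arithmetic mean of the integers 1..100, following the
! step-by-step algorithm on the slide.
program ortalama
  implicit none
  integer :: Sayac, Toplam
  real    :: Aritmetik_Orta

  Sayac = 1
  Toplam = 0
  do
    if (Sayac > 100) exit    ! step 3: all integers summed
    Toplam = Toplam + Sayac  ! step 4: accumulate the sum
    Sayac = Sayac + 1        ! step 5: move to the next integer
  end do
  Aritmetik_Orta = Toplam / 100.0
  print *, "Aritmetik_Orta =", Aritmetik_Orta  ! 50.5
end program ortalama
```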

70 Also, to make it more understandable, the flowchart obtained when we draw this algorithm is as follows. [flowchart figure]

71 Remember that, besides being independent of programming languages, an algorithm is also independent of operating systems. From the beginning of your programming career to the end of your life, the one thing that will persist and never change is, in fact, the algorithm. So I advise those who are new to programming, or about to start, to get along well with algorithms. I look forward to your comments and suggestions on the Özgürlükİçin forums.
diyagramlari/ Author: Mehmet PEKGENÇ

"BIL106E Introduction to Scientific & Engineering Computing Hüseyin TOROS, Ph.D. Istanbul Technical University Faculty of Aeronautics and Astronautics Dept." indir ppt

Benzer bir sunumlar

Google Reklamları