Proceedings of Papers, Volume 2




XLII INTERNATIONAL SCIENTIFIC CONFERENCE ON INFORMATION, COMMUNICATION AND ENERGY SYSTEMS AND TECHNOLOGIES - ICEST 2007

Proceedings of Papers, Volume 2

Editor: Prof. Dr. Cvetko Mitrovski
Technical Editor: Pargovski Jove
Published by: Faculty of Technical Sciences - Bitola
Printed by: Mikena - Bitola
Total print run:
ISBN:

CIP - Cataloguing in Publication
National and University Library "St. Kliment Ohridski", Bitola

INTERNATIONAL Scientific Conference on Information, Communication and Energy Systems and Technologies (42 ; 2007 ; Bitola)
ICEST 2007: proceedings of papers / XLII International Scientific Conference on Information, Communication and Energy Systems and Technologies ; [editor Cvetko Mitrovski]. - Bitola : Faculty of Technical Sciences, 2007. - illustrated. Footnotes to the text. - Bibliography with the papers. - Indexes.
ISBN
1. Mitrovski, Cvetko
a) Electric power systems - Monitoring - Analyses - Proceedings
b) Telecommunication engineering - Signals - Analyses - Proceedings
c) Automatic systems - Intelligent systems - Proceedings
d) Computer systems - Database management systems - Internet technologies - Proceedings
e) Industrial electronics - Control systems - Converters - Proceedings
COBISS.MK - ID 86

TABLE OF CONTENTS, VOLUME 2

MEDICAL SYSTEMS

MED.S.1  BCI Mental Tasks Patterns Determination
         P. K. Manoilov, University of Rousse, Bulgaria
MED.S.2  An Investigation on Signals in Magnetocardiography
         D. Tz. Dimitrov, Technical University of Sofia, Bulgaria
MED.S.3  A Stimulation of Neural Tissue by Pulse Magnetic Signals
         D. Tz. Dimitrov, Technical University of Sofia, Bulgaria
MED.S.4  Eye-Blinking Artefacts Duration Analysis
         P. K. Manoilov, University of Rousse, Bulgaria
MED.S.5  The Web Side System for Registration and Processing Medical Data of Urological Department Patients
         J. Makal, J. Bilkiewicz and A. Nazarkiewicz*, Technical University (BTU), Bialystok, Poland; *Provincial Integrated Hospital, Białystok, Poland
MED.S.6  Laboratory Stand in Web Browser for Measurements on Distance
         J. Makal, A. Idzkowski and A. Krasowski, Technical University (BTU), Bialystok, Poland
MED.S.7  Electronic Identification and Patient Parameters Monitoring
         S. Ranđić, A. Peulić, A. Dostanić and M. Acović, Technical Faculty, Sv. Save, Čačak, Serbia

SIGNAL PROCESSING II

SP II.1  Comparative Analysis of Basic Self-Organizing Map and Neocognitron for Handwritten Character Recognition
         I. R. Draganov and A. A. Popova, Technical University of Sofia, Bulgaria
SP II.2  Comparative Analysis of Integral Calculus Algorithms in Magnetic Signals Evaluation
         V. Dimitrova, Technical University of Sofia, Bulgaria
SP II.3  Investigation of Maximally Flat Fractional Delay All-pass Digital Filters
         K. S. Ivanova and G. K. Stoyanov, Technical University of Sofia, Bulgaria
SP II.4  A Unification of Determined and Probabilistic Methods in Pattern Recognition
         G. Kunev, G. Varbanov and C. Nenov, Technical University of Varna, Bulgaria
SP II.5  Complex Input Signal Quantization Noise Analysis for Orthogonal Second-Order IIR Digital Filter Sections
         Z. Nikolova, Technical University of Sofia, Bulgaria
SP II.6  Speech Overlap Detection Algorithms Simulation
         S. Pleshkova-Bekiarska and D. Damyanov, Technical University of Sofia, Bulgaria
SP II.7  FPGA Implementation of the 2D-DCT/IDCT for the Motion Picture Compression
         R. J. R. Struharik and I. Mezei, Faculty of Technical Sciences, Novi Sad, Serbia
SP II.8  Minutiae-based Algorithm for Automatic Fingerprint Identification
         E. H. Mulalić, S. S. Cvetković and S. V. Nikolić, University of Niš, Serbia

SP II.9  Performances of The Exponential Sinusoidal Audio Model
         Z. N. Milivojević, P. Rajković and S. M. Milivojević*, University of Niš, Serbia; *Technical Faculty, Cacak, Serbia
SP II.10 Non-uniform Thresholds for Removal of Signal-Dependent Noise in Wavelet Domain
         M. Kostov, C. Mitrovski and M. Bogdanov*, University of Bitola, Macedonia; *University of Skopje, Macedonia

COMPUTER SYSTEMS AND INTERNET TECHNOLOGIES IV

CSIT IV.1  GUI to Web Transcoding
           Ts. Filev, J. Pankov and I. Pankov, Technical University of Sofia, Bulgaria
CSIT IV.2  Classification of Classifiers
           G. Gluhchev, M. Savov and O. Boumbarov*, Bulgarian Academy of Sciences; *Technical University of Sofia, Bulgaria
CSIT IV.3  Methods of Graphic Representation of Curves in CAD Systems in Knitting Industry
           E. Iv. Zaharieva-Stoyanova, Technical University of Gabrovo, Bulgaria
CSIT IV.4  Cognitive Model Extension for HCI
           N. D. Đorđević and D. D. Rančić, University of Niš, Serbia
CSIT IV.5  Performance Analysis of a Suboptimal Multiuser Detection Algorithm
           I. G. Iliev and M. Nedelchev, Technical University of Sofia, Bulgaria
CSIT IV.6  Realization of Train Rescheduling Software System
           S. Mladenović and S. Vesković, University of Belgrade, Serbia
CSIT IV.7  Throughput Analysis of EDCA in Multimedia Environment
           B. Ilievski, P. Latkoski and B. Popovski, University of Skopje, Macedonia
CSIT IV.8  Software Tools and Technologies in Steganography
           J. Mirčevski, B. Djokić, M. Srećković* and N. Popović**, Informatička škola Educon, Beograd, Serbia; *Elektrotehnički fakultet, Beograd, Serbia; **Ministarstvo Inostranih Poslova, Beograd, Serbia

CONTROL SYSTEMS & ROBOTICS

CS&R.1  Cascade Synchronization of Chaotic Systems on the Basis of Linear-Nonlinear Decomposition
        D. P. Chantov, Technical University of Gabrovo, Bulgaria
CS&R.2  Sensorless Vector Control of Induction Motors
        E. Y. Marinov, K. D. Lutskanov and Z. S. Zhekov, Technical University of Varna, Bulgaria
CS&R.3  Genetic Algorithms Applied in Parameter Optimization of Cascade Connected Systems
        B. Danković, D. Antić, Z. Jovanović and M. Milojković, University of Niš, Serbia
CS&R.4  Building 3D Environment Models for Mobile Robots Using Time-Of-Flight (TOF) Laser Scanner
        S. Koceski and N. Koceska, University of L'Aquila, Italy
CS&R.5  High-Performance Velocity Servo-System Design Using Active Disturbance Estimator
        B. Veselić and Č. Milosavljević*, University of Niš, Serbia; *Electrical Engineering Faculty, Istočno Sarajevo, Bosnia and Herzegovina
CS&R.6  Optimal Control Using Neural Networks
        D. Toshkova and P. Petrov, Technical University of Varna, Bulgaria

CS&R.7  Some Discretizing Problems in Control Theory
        M. B. Naumović, University of Niš, Serbia

POWER TRANSMISSION AND DISTRIBUTION SYSTEMS I

PTDS I.1  An Algorithm for Coupled Electric and Thermal Fields in Insulation of the Large Power Cables
          I. T. Cârstea and D. P. Cârstea*, Faculty of Automatics, Computers and Electronics, Craiova, Romania; *Industrial Group of Romanian Railways
PTDS I.2  Control of the Electrical Field in the Connectors for High-Voltage Cables
          I. T. Cârstea and D. P. Cârstea*, Faculty of Automatics, Computers and Electronics, Craiova, Romania; *Industrial Group of Romanian Railways
PTDS I.3  Untransposed HV Transmission Line Influence on the Degree of Unbalance in Power Systems
          Lj. D. Trpezanovski and M. B. Atanasovski, University of Bitola, Macedonia
PTDS I.4  Calculation of GIS kV Insulating Bushing Applying Hybrid BEM-FEM Method
          H. Zildžo and H. Matoruga, Electrical Engineering Faculty, Sarajevo, Bosnia and Herzegovina
PTDS I.5  Estimation of the Air Power Line Parameters Under the Influence of Lightning Overvoltages
          M. G. Todorova and M. P. Vasileva, Technical University of Varna, Bulgaria
PTDS I.6  Calculation Model and Analyses of Grounding of the Fence on Medium Voltage Stations
          N. Acevski and M. Spirovski, University of Bitola, Macedonia

EDUCATION QUALITY I

EQ I.1  Meaning Making Through e-learning
        B. Gradinarova and Y. Gorvits*, Technical University of Varna, Bulgaria; *ORACLE, Moscow, Russia
EQ I.2  Software Engineering e-learning Mathematical Software
        B. Fetaji, S. Osmani* and M. Fetaji, Faculty of Communication Sciences and Technologies, Tetovo, Macedonia; *IT center-SEEU, Tetovo, Macedonia
EQ I.3  Combining Virtual Learning Environment and Integrated Development Environment to Enhance e-learning
        M. Fetaji, S. Loskovska* and B. Fetaji, Faculty of Communication Sciences and Technologies, Tetovo, Macedonia; *University of Skopje, Macedonia
EQ I.4  Software Engineering e-learning Information Retrieval Courseware
        B. Fetaji and M. Fetaji, Faculty of Communication Sciences and Technologies, Tetovo, Macedonia
EQ I.5  The Problems in Distant Learning
        V. Aleksieva, Technical University of Varna, Bulgaria
EQ I.6  The Roles of Colours in the Multimedia Presentation Building
        P. Spalevic, B. Miloševic*, K. Kuk** and G. Dimic**, Faculty of Technical Sciences, Kosovska Mitrovica, Serbia; *University of Niš, Serbia; **High School of Electrical Engineering, Belgrade, Serbia

ELECTRONIC COMPONENTS, SYSTEMS AND TECHNOLOGIES I

ECST I.1  Synthesis of DCS in Copper Metallurgy
          D. R. Milivojević, V. Tasić, M. Pavlov and V. Despotović, Copper Institute, Bor, Serbia

ECST I.2  Removal of Power-line Interference from ECG in Case of Non-multiple Even Sampling
          G. S. Mihov and I. A. Dotsinsky*, Technical University of Sofia, Bulgaria; *Bulgarian Academy of Sciences
ECST I.3  Synthesizing Sine Wave Signals Based on Direct Digital Synthesis Using Field Programmable Gate Arrays
          H. Z. Karailiev and V. V. Rankovska, Technical University of Gabrovo, Bulgaria
ECST I.4  Negative Impedance Converter Improves Capacitance Converter
          V. D. Draganov, Z. D. Stanchev and I. T. Tanchev, Technical University of Varna, Bulgaria
ECST I.5  Fuel Cells and Fuel Cell Power Supply Systems: an Overview
          Z. S. Mladenovski*, G. L. Arsov and J. Kosev, University of Skopje, Macedonia; *COSMOFON A.D. Skopje, Macedonia
ECST I.6  Developing and Using Communication Driver for Serial Communication Between PCs and Industrial PLCs
          Z. M. Milić, P. B. Nikolić, D. Krstić* and M. Lj. Sokolović*, Tigar MH, Pirot, Serbia; *University of Niš, Serbia
ECST I.7  Spice Model of Magnetic Sensitive MOSFET
          N. Janković, T. Pesić and D. Pantić, University of Niš, Serbia
ECST I.8  Reduced Data Sample Transmission Implementation to PIC Microcontroller
          M. I. Petkovski and C. D. Mitrovski, University of Bitola, Macedonia

POWER TRANSMISSION AND DISTRIBUTION SYSTEMS II & ELECTRICAL MACHINES

PTDS&EM.1  Fast High Voltage Signals Generator for Low Emittance Electron Gun
           M. Paraliev, Paul Scherrer Institut, Villigen, Switzerland
PTDS&EM.2  Effect of Perforation in High Power Bolted Busbar Connections
           R. T. Tzeneva and P. D. Dineff, Technical University of Sofia, Bulgaria
PTDS&EM.3  The Influence of the Supply Voltage Unbalance on the Squirrel Cage Induction Motor Operation
           G. I. Ganev and G. T. Todorov, Technical University of Sofia, Bulgaria
PTDS&EM.4  Monitoring of the Electric Energy Quality in the Electricity Supply
           Tz. B. Tzanev, S. G. Tzvetkova and V. G. Kolev, Technical University of Sofia, Bulgaria
PTDS&EM.5  Algorithm for Efficiency Optimization of the Induction Motor Based on Loss Model and Torque Reserve Control
           B. Blanuša, P. Matić, Ž. Ivanović and S. N. Vukosavić*, Faculty of Electrical Engineering, Banja Luka, Bosnia and Herzegovina; *Faculty of Electrical Engineering, Beograd, Serbia
PTDS&EM.6  Computation of Electromagnetic Forces and Torques on Overline Magnetic Separator
           M. I. Popnikolova Radevska and B. S. Arapinoski, University of Bitola, Macedonia

EDUCATION QUALITY II

EQ II.1  Studying on Frequency Modulation in MATLAB Environment
         V. M. Georgieva, Technical University of Sofia, Bulgaria
EQ II.2  An Approach of Application Development for the Virtual Laboratory Access
         J. Djordjević-Kozarov, M. Jović* and D. Janković, University of Niš, Serbia; *University of Lugano, Switzerland
EQ II.3  The Use of Virtual Reality Environments for Training Purposes in Care Settings
         E. Maier, M. Dontschewa and G. Kempter, UCT-Research, FHV, Dornbirn, Austria

EQ II.4  How to Give a Good Scientific Presentation
         S. S. Cvetković and S. V. Nikolić, University of Niš, Serbia
EQ II.5  Reevaluation and Replacement of Terms in the Sampling Theory
         P. Tz. Petrov, Microengineering, Sofia, Bulgaria
EQ II.6  SWOT Analysis of Method for Automatic Vectorization of Digital Photos Into 3D Model
         Z. G. Kotevski and I. I. Nedelkovski, University of Bitola, Macedonia

ELECTRONIC COMPONENTS, SYSTEMS AND TECHNOLOGIES II

ECST II.1  Monitoring System of Pulsation Processes in a Milking Machine
           A. T. Aleksandrov and N. D. Draganov, Technical University of Gabrovo, Bulgaria
ECST II.2  A Method for Improvement Stability of a CMOS Voltage Controlled Ring Oscillators
           G. Jovanović and M. Stojčev, University of Niš, Serbia
ECST II.3  Load Characteristics under Optimal Trajectory Control of Series Resonant DC/DC Converters Operating above Resonant Frequency
           N. D. Bankov and Ts. Gr. Grigorova*, University of Food Technologies, Plovdiv, Bulgaria; *Technical University of Sofia, Branch Plovdiv, Bulgaria
ECST II.4  Modeling of the Optimal Trajectory Control System of Resonant DC/DC Converters Operating Above Resonant Frequency
           N. D. Bankov and Ts. Gr. Grigorova*, University of Food Technologies, Plovdiv, Bulgaria; *Technical University of Sofia, Branch Plovdiv, Bulgaria
ECST II.5  Comparison of Temperature Dependent Noise Models of Microwave FETs
           Z. D. Marinković, V. V. Marković and O. R. Pronić, University of Niš, Serbia
ECST II.6  Power Losses and Applications of Nanocrystalline Magnetic Materials
           V. C. Valchev, G. T. Nikolov, A. Van den Bossche* and D. D. Yudov**, Technical University of Varna, Bulgaria; *EELAB EESA Firw8 UGENT, Belgium; **Bourgas Free University, Bourgas, Bulgaria
ECST II.7  Multilevel Electronic Transformer
           D. D. Yudov, A. Iv. Dimitrov, V. C. Valchev* and D. M. Kovatchev*, Bourgas Free University, Bourgas, Bulgaria; *Technical University of Varna, Bulgaria
ECST II.8  An Approach to Effectiveness Increasing of SPICE Macromodels
           E. D. Gadjeva, B. I. Mihova and V. G. Manchev, Technical University of Sofia, Bulgaria

POSTER SESSIONS

PO I - SIGNAL PROCESSING

PO I.1  Image Filtering and Scaling Algorithms
        S. G. Mihov and G. S. Zapryanov, Technical University of Sofia, Bulgaria
PO I.2  System for Spectral Investigation of Signals
        S. V. Kolev, I. V. Ivanova and Y. S. Velchev, Technical University of Sofia, Bulgaria
PO I.3  Design and Implementation of First Order Sigma-Delta Modulator
        S. D. Terzieva, G. T. Tsenov, P. I. Yakimov and V. M. Mladenov, Technical University of Sofia, Bulgaria
PO I.4  Recovering Optical Image Transferred Through Atmospheric Turbulence
        K. L. Dimitrov, Technical University of Sofia, Bulgaria

PO I.5  CATV Systems - Volterra Kernels Identification
        O. B. Panagiev and K. L. Dimitrov, Technical University of Sofia, Bulgaria
PO I.6  Comments on the Method Over Sampling and Averaging for Additional Bits of Resolution
        P. Tzv. Petrov, Micro Engineering-Sofia, Bulgaria

PO II - ELECTRONIC COMPONENTS, SYSTEMS AND TECHNOLOGIES

PO II.1  PSPICE Simulation of Optoelectronic Pulse Circuits
         E. N. Koleva and I. S. Kolev, Technical University of Gabrovo, Bulgaria
PO II.2  Temperature Profile of the Impulse Discharge
         I. P. Iliev and S. G. Gocheva-Ilieva*, Technical University of Sofia, Branch Plovdiv, Bulgaria; *University of Plovdiv, Bulgaria
PO II.3  Simulation Investigation of Frequency Sensitive Digital Phase Detectors
         A. H. Yordanov and G. S. Mihov, Technical University of Sofia, Bulgaria
PO II.4  Voltage-Scaling D/A Converters Analysis and Practical Design Considerations
         D. P. Dimitrov, Melexis-Bulgaria Ltd, Sofia, Bulgaria
PO II.5  Mechanical Properties of Thin Electrical Films
         S. Letskovska and P. Rahnev, Burgas Free University, Burgas, Bulgaria
PO II.6  An Improved IGBT Behavioral PSpice Macromodel
         Ts. Gr. Grigorova and K. K. Asparuhova, Technical University of Sofia, Bulgaria
PO II.7  A Stochastic Model of Gamma-Ray Irradiation Effects on Threshold Voltage of MOS Transistors
         M. T. Odalović and D. M. Petković, Faculty of Science and Mathematics, Kosovska Mitrovica, Serbia
PO II.8  Some Geometrical Considerations Connected with the Planetary Movement of the Substrate During Sputtering
         S. Pavlov and D. Parashkevov, Assen Zlatarov University, Bourgas, Bulgaria
PO II.9  Program Calculation of Thin Film Thickness in Case of Parallel Arranged Target Substrate
         D. D. Parashkevov, Assen Zlatarov University, Bourgas, Bulgaria

PO III - ELECTRONIC COMPONENTS, SYSTEMS AND TECHNOLOGIES & INDUSTRIAL ELECTRONICS

PO III.1  Automatic Quality Classifiers of Food Products Rate of the Speed Problems
          A. S. Georgiev, L. F. Kostadinova and R. N. Gabrova, University of Food Technologies, Plovdiv, Bulgaria
PO III.2  Computer-Aided Engineering with the Help of OrCAD
          D. M. Kovatchev, E. Dimitrova and V. C. Valchev, Technical University of Varna, Bulgaria
PO III.3  Magnetron Dielectric Barrier Air Discharge at Low Frequency
          P. D. Dineff and D. N. Gospodinova, Technical University of Sofia, Bulgaria
PO III.4  Application of the CFD Method for Heat Transfer Simulation
          A. V. Andonova and N. M. Kafadarova, Technical University of Sofia, Bulgaria
PO III.5  Laser Modeling In Q-switch Regime
          I. Veselinovic, M. Sreckovic and B. Veselinovic*, University of Belgrade, Serbia; *Vinča Institute of Nuclear Sciences, Belgrade
PO III.6  Modeling of Quantum Generators and Amplifiers on Semiconductor Materials
          B. Veselinović, M. Srećković*, I. Veselinović* and M. Vlajnić**, Vinča Institute of Nuclear Sciences, Belgrade, Serbia; *University of Belgrade, Serbia; **YUBC System A.D., Belgrade, Serbia

PO III.7  Analysis of Electrical and Thermal Characteristics of Thermal Cutoffs
          A. Prijić, Z. Prijić, B. Pešić, D. Pantić and S. Ristić, University of Niš, Serbia
PO III.8  Macromodel for CMOS Analogue Switches Temperature Effects Sensing
          I. M. Pandiev, Technical University of Sofia, Bulgaria
PO III.9  Microstructure and Optical Properties of ITO Thin Films Investigated for Heat Mirrors in Solar Collectors
          G. H. Dobrikov, M. M. Rassovska, S. I. Boiadjiev, K. A. Gesheva*, P. S. Sharlandjiev** and A. Koserkova**, Technical University of Sofia, Bulgaria; *Bulgarian Academy of Sciences; **Acad. G. Bonchev, Sofia, Bulgaria

PO IV - COMPUTER SYSTEMS AND INTERNET TECHNOLOGIES

PO IV.1  Automatic Weight Estimation Method for Multiple SVMs in Software Sensors Systems
         P. Ts. Andreeva and S. I. Vassileva, Bulgarian Academy of Sciences, Sofia, Bulgaria
PO IV.2  Improving Fairness for Pedestrian Users of CDMA-HDR Networks
         V. P. Hristov, South West University, Blagoevgrad, Bulgaria
PO IV.3  Web Application of Traveling Salesman Problem using Genetic Algorithms
         M. N. Karova, J. P. Petkova and S. P. Penev, Technical University of Varna, Bulgaria
PO IV.4  A Performance Study of Run-time Systems for Distributed Time Warp Simulation
         H. G. Valchanov, N. S. Ruskova and T. I. Ruskov, Technical University of Varna, Bulgaria
PO IV.5  Genetic Algorithms in Solving Multiobjective Optimization Problems
         H. I. Toshev and Ch. D. Korsemov, Bulgarian Academy of Sciences, Sofia, Bulgaria
PO IV.6  Applying Tabu-Search Heuristic for Software Clustering Problem
         V. T. Bozhikova and M. Ts. Stoeva, Technical University of Varna, Bulgaria
PO IV.7  A New Approach to Symbol Description of the Spatial Location of Extended Objects in Spatial Databases
         M. Stoeva and V. Bozhikova, Technical University of Varna, Bulgaria
PO IV.8  A Web-based Course for Development, Implementation and Administration of a Secure Portal within OracleAS Portal
         V. R. Antonova and P. T. Antonov, Technical University of Varna, Bulgaria
PO IV.9  Models of e-business in Transportation
         D. D. Marković, B. V. Stanivuković and M. J. Dobrodolac, University of Belgrade, Serbia

PO V - CONTROL SYSTEMS & ROBOTICS

PO V.1  Application of Kalman Filtering Technique to Increase the Probability of Faults Detection in Test Equipment
        A. D. Tanev and A. Vl. Andonov, Higher School of Transport, Sofia, Bulgaria
PO V.2  Situational Control of RADAR
        I. E. Korobko, Technical University of Sofia, Bulgaria
PO V.3  Control Systems with Frequency Converters Save Energy in Food Industry
        L. F. Kostadinova, R. N. Gabrova and A. S. Georgiev, University of Food Technologies, Plovdiv, Bulgaria
PO V.4  Multi-channel Wireless ECG System
        Y. S. Velchev, Technical University of Sofia, Bulgaria
PO V.5  Heart Rate Measurement System
        Y. S. Velchev, Technical University of Sofia, Bulgaria

PO V.6  Robust Estimators Based on the Two-Stage Method for Closed-Loop Identification
        N. R. Atanasov, Technical University of Varna, Bulgaria
PO V.7  Scalable Modular Control Architecture For Walking Machines
        M. S. Milushev, N. V. Krantov and V. Zerbe*, Technical University of Sofia, Bulgaria; *Technical University Ilmenau, Germany
PO V.8  Evaluation of Fluidic Muscles for a Walking Machine Driver
        M. S. Milushev, D. I. Diakov, V. K. Georgieva and N. T. Pavlovic*, Technical University of Sofia, Bulgaria; *University of Niš, Serbia

SESSION MED.S: Medical Systems


BCI Mental Tasks Patterns Determination

Plamen K. Manoilov

Abstract - A brain-computer interface (BCI) is an assistive device which translates the user's wishes into device commands. A BCI is the only possibility for completely paralysed (locked-in) people to interact with their environment. When the subject performs different mental tasks, his brain issues an EEG signal with different patterns. BCIs based on the pattern recognition approach classify the brain activity into different mental tasks and in this way provide multichannel control. This paper describes the search for unique characteristics of 5 groups of mental tasks.

Keywords - BCI, EEG analysis, mental task, pattern recognition, power spectrum

I. INTRODUCTION

Present-day knowledge about the functioning of the brain is not sufficient to guess the subject's thoughts by analyzing his EEG. BCI prototypes determine the intent of the user from a variety of different electrophysiological signals. These are translated in real time into commands that operate a computer display or other device [5]. Successful operation requires that the user encodes commands in these signals and that the BCI derives the commands from the signals. In some BCIs the patterns are a result of the subject's mental load [1]. The subject performs different mental tasks, resulting in changes of the power of different frequencies in different scalp zones. The changes of the power are larger when the user is more concentrated. The comparison is made with the power spectrum of the subject's EEG during the performance of the so-called baseline task, which is defined as mentally doing nothing. When the subject does nothing, the power of his alpha brain activity has maximal values. If the mental tasks are chosen properly, the power spectrum changes for each of them are different or are issued by different areas of the brain.
This gives the possibility, with the help of a classifier, to bind each particular task to its power spectrum alteration (pattern). Devices of that type are known as pattern-recognition-based BCIs. The pattern for each task is formed by the channels (electrodes) and frequencies where the most distinguishable changes occur.

Plamen K. Manoilov is with the Communication Technique and Technologies Department, RU "A. Kanchev", 8 Studentska Str., Rousse, Bulgaria.

II. EXPERIMENT DESCRIPTION

The study continues the work on a project for creating a BCI, started at Delft University of Technology, Delft, The Netherlands, in 2004, supervised by professor drs. dr. Leon Rothkrantz, head of the Man-Machine Interaction research group, Faculty of Electrical Engineering, Mathematics and Computer Science. During the experiments a database with 4 sessions of EEG data, recorded from two subjects (male), was prepared for use, together with a tool for statistical analysis (R, MATLAB). The second stage was processing the EEG from the database and finding (if possible) a specific unique characteristic for every mental task. After classifying the tasks, some of them, with clearer and better-expressed characteristics, could be chosen for use in BCI control. The brain activity of the α-range (8-13 Hz) was studied. All EEGs were recorded without any biofeedback. In this study only five different groups of tasks are examined: Imaginary figure rotation (task 8), Hyperventilation (task 9), Visual presentation of... (tasks 3X), Auditive presentation of... (tasks 4X), and Visual and auditive presentation of... (tasks 5X). Every two-digit task includes 4 subtasks: presentation of a yellow triangle (task X0), of a green dot (task X2), of a red cross (task X4) and of blue lines (task X6). The figures are presented to the subject visually on the computer screen and auditively from the loudspeakers. Every task is performed multiple times per session.
Tasks follow each other in a pseudo-random order to avoid familiarization of the subject. The experiment schedule has planned intervals between the tasks, where the subject is allowed to blink. A TruScan 32 EEG was used as the data acquisition system in the experiments. It includes an EEG cap with silver chloride electrodes, placed according to the international 10-20 system, an EEG amplifier and an EEG adapter. The needed low resistance of the electrode-skin contacts was achieved by the use of an electrotechnical gel and was controlled during the recording process. The EEG signal was filtered and sampled at 256 Hz. The Firebird DBMS was used for storage and Matlab as the data processing application; the connection between them is made via the ODBC protocol.
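The protocol described above (pseudo-random task ordering with planned blink pauses between tasks) can be sketched as follows. This is an illustrative reconstruction, not the authors' software; `build_schedule`, the task labels and the pause length are assumed names and values.

```python
import random

def build_schedule(tasks, repetitions, blink_pause_s=2.0, rng=None):
    """Repeat every task `repetitions` times, shuffle the order so the
    subject cannot anticipate the next task, and place a pause (in which
    the subject is allowed to blink) after each task."""
    rng = rng or random.Random()
    order = [t for t in tasks for _ in range(repetitions)]
    rng.shuffle(order)
    timeline = []
    for task in order:
        timeline.append(("task", task))
        timeline.append(("blink_pause", blink_pause_s))
    return timeline

# Hypothetical session: 5 task groups, 4 repetitions each
timeline = build_schedule(["8", "9", "30", "40", "50"], repetitions=4,
                          rng=random.Random(0))
```

Because the blink pauses are at known positions in such a schedule, the segments recorded during them can be discarded automatically, which is what makes the artefact removal described later easy to automate.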

III. EEG ANALYSIS

The goal of the preliminary selection is to determine the frequencies and channels where the power spectrum changes most during the performance of the mental tasks, according to (1). The comparison is done with the power spectrum of the baseline task:

$$D(k) = P_{av}^{B}(k) - P_{av}^{R}(k), \qquad (1)$$

where $D$ is the power spectra difference and $P_{av}^{B}$ is the average power spectrum of the base (baseline) task, calculated according to (2):

$$P_{av}^{B}(k) = \frac{1}{M^{B}} \sum_{m=1}^{M^{B}} P^{B}(k, m) \qquad (2)$$

$P_{av}^{R}$ is the average power spectrum of the running task (every task except the baseline), according to (3):

$$P_{av}^{R}(k) = \frac{1}{M^{R}} \sum_{m=1}^{M^{R}} P^{R}(k, m) \qquad (3)$$

$P^{B}(k, m)$ and $P^{R}(k, m)$ are the power spectra of the base and the running tasks according to (4):

$$P(k, t) = G_{D}(k, t)\, G_{D}^{*}(k, t) \qquad (4)$$

where

$$G_{D}(f, t) = \int_{-\infty}^{+\infty} x(t_{1})\, g_{D}(t_{1} - t)\, e^{-i 2 \pi f t_{1}}\, dt_{1} \qquad (5)$$

is the Gabor transform. The time step is half of the segment length. $M^{B}$ and $M^{R}$ are the numbers of EEG segments for the base and the running task; $M^{B} \neq M^{R}$ is possible, with $M = 2l/sl$, where $l$ is the length of the analyzed section and $sl$ the segment length.

As the absolute powers differ along the channels, the relative difference between the powers is calculated, (6):

$$D(k)[\%] = \frac{P_{av}^{B}(k) - P_{av}^{R}(k)}{P_{av}^{B}(k)} \cdot 100 \qquad (6)$$

To assess the influence of the electrooculographic (EOG) artefacts, and more precisely of the subject's eye blinks, on the chosen frequency range of 8-13 Hz, their power spectrum was examined [2]. It was decided to cut out the parts of the EEG containing blinks; the existing database is large enough, so no important information is lost. Selecting segments clean of EOG artefacts was easy to automate because of the experiment schedule. The duration of the interval polluted by a blink is user-dependent [3, 4]: for the first subject it varies from approximately .8-. s before to .9-.5 s after the moment of the blink's maximal amplitude; for the second subject the values are respectively .-. s before and .9-.5 s after. This was used to cut out the polluted sections properly.
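The computation in Eqs. (1)-(6) can be sketched in a few lines: average the Gabor-windowed power spectra over half-overlapping segments for the baseline and a running task, then form the relative difference D(k) in percent. This is a minimal illustration, not the authors' code; the Gaussian window width and the synthetic test signals below are assumptions.

```python
import numpy as np

def avg_power_spectrum(eeg, seg_len):
    """P_av(k): segments advance by half the segment length; each segment is
    multiplied by a Gaussian (Gabor) window before the FFT, and the power
    P(k, m) = G(k, m) * conj(G(k, m)) is averaged over the M segments."""
    step = seg_len // 2                      # time step = half segment length
    n = np.arange(seg_len)
    window = np.exp(-0.5 * ((n - seg_len / 2) / (seg_len / 6)) ** 2)
    specs = []
    for start in range(0, len(eeg) - seg_len + 1, step):
        g = np.fft.rfft(eeg[start:start + seg_len] * window)
        specs.append((g * np.conj(g)).real)  # P(k, m) = G G*
    return np.mean(specs, axis=0)

def relative_difference(base, running, seg_len):
    """D(k) in percent: relative drop of the running-task power spectrum
    with respect to the baseline-task power spectrum, Eq. (6)."""
    p_b = avg_power_spectrum(base, seg_len)
    p_r = avg_power_spectrum(running, seg_len)
    return 100.0 * (p_b - p_r) / p_b

# Synthetic check: alpha activity at 10 Hz, halved in amplitude (blocked)
fs, seg = 256, 256
t = np.arange(4 * fs) / fs
base = np.sin(2 * np.pi * 10 * t)      # strong alpha during the baseline task
running = 0.5 * base                   # alpha partially blocked by mental effort
d = relative_difference(base, running, seg)
# at the 10 Hz bin the power drops by 75 % (amplitude halved, power quartered)
```

In the paper the same quantity is evaluated per channel; here one channel suffices to show the mechanics, and the pattern for a task would be the set of (channel, frequency) pairs where D(k) is largest.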
The alpha rhythm amplitude is higher when the subject is in a state of physical rest and relative mental inactivity; it is blocked partially or completely by any mental effort. The mu-rhythm is blocked partially or completely by a movement or a thought about a movement.

Fig. 1. Graphs of the absolute (μV²/Hz) and the relative (%) difference between the power spectra of the baseline task and task 8.

Fig. 2. Graphs of the absolute (μV²/Hz) and the relative (%) difference between the power spectra of the baseline task and task 9.

Imaginary rotation, task 8, vs. baseline task

Graphs of the power difference for task 8, Imaginary figure rotation, are shown in Fig. 1. From the upper graph one can see that the absolute difference between the powers of the baseline task and task 8 rises from the frontal to the occipital parts of the scalp. Characteristic channels with noticeable variations are P3, T4, T5, T6, O1, O2. The lower graph in Fig. 1 shows, however, that the relative power difference is almost equal. The results are summarized in Table I. The characteristics of the two subjects differ: for the second subject the channels sensitive to the α-rhythm are moved to the parietal part of the scalp, and the μ-rhythm appears more in the right part of the scalp.

Hyperventilation, task 9, vs. baseline task

The result from the analysis of the task 9 power difference, Fig. 2, is quite different. The variance between the powers of the baseline task and task 9, hyperventilation, is negative. This result comes after a relatively long ventilation of the lungs. It is impossible to meet this state in normal human life, especially in a locked-in person, unless it is artificially provoked. The brain activity is clearly expressed in the central and parietal parts of the scalp at the lowest frequencies of the range; alterations above 5% can be noticed. There are differences in the spatial distribution between the two subjects.

Visual presentation, tasks 3X, vs. baseline task

The four presented objects have a similar influence on the subjects. The decrease of the power is better expressed for tasks 30 and 34 (visual presentation of a yellow triangle and of a red cross). The presentation of a green dot gives the worst results: about .5 times less than the relative difference for the first subject. The second subject does not show a noticeable difference in brain patterns for the different presented objects; his sensitive frequencies are 1-2 Hz higher than for the first subject.

Fig. 3. Graphs of the absolute (μV²/Hz) and the relative (%) difference between the power spectra of the baseline task and task 30.

Fig. 4. Graphs of the absolute (μV²/Hz) and the relative (%) difference between the power spectra of the baseline task and task 34.

Audio presentation, tasks 4X, vs. baseline task

Graphs for task 40 are shown in Fig. 5. The audio presentation stimulates slighter reactions in both subjects.

Fig. 5. Graphs of the absolute (μV²/Hz) and the relative (%) difference between the power spectra of the baseline task and task 40.

17 BCI Mental Tasks Patterns Determination It does not have different characteristics for the different presented objects. An activity in the temporal lobe is noticed in T5 and T6 for subject and in T6 for subject. Audio-visual presentation, tasks 5X, vs baseline task, The audio-visual presentation combines visual- and audiopresentations features. An activity in the temporal and the occipital part of the scalp is noticed. expressed patterns. Selection of the proper figure and color for every subject is necessary to achieve the best results. 5. From the three group of tasks 3X, 4X, 5X, the Visual presentation of is the most useful for using in BCI. The characteristics could be achieved by selfconcentration. No outside assistance is needed. On the next stage of the work the stability of the characteristics during the performance of each mental task will be studied. After the time interval of the best expressed pattern is determined the final mental task selection could be done. TABLE I MENTAL TASKS CHARACTERISTICS SUMMARY Fig. 6. Graphs of the absolute, μv /Hz, and the relative, %, difference between the power spectra of baseline task,, and task 5 IV. CONCLUSIONS The following conclusions could be made:. Unlike the other tasks, task 8, Imaginary figure rotation, alters μ-rhythm and α- rhythm in frontal placed electrodes. It has an unique characteristic. User should use to achieve this state for a short time.. In comparison to the other tasks, the Hyperventilation, task 9, is not an ordinary one. To achieve the hyperventilation state the subject have to bread deeply a long time. The task is not useful for a trivial control. The well expressed and quite different characteristics of task 9 could be used to switch on/off the BCI. As this state does not exist in the normal life, no errors are possible. 3. Power spectra changes in a result of mental tasks performance are individual for each subject. 
Control of the proper frequencies for every user should be foreseen in the BCI.

4. The presentation of different geometrical figures and colors does not result in different patterns, but the patterns are more or less marked for each subject. Signal colors, which tease the subject, give more clearly expressed patterns.

Task | Subj. | Rhythm - Chann : Frequency [Hz]
Imaginary figure rotation, task 8:
  α - F8, P3, T4, T5, T6, O, O : 8, 9 ; µ - C3, P3 : 9
  α - F8, P3, Pz, P4, T6 : 9- ; µ - C3, Cz, C4, P3, P4 : 9-
Hyperventilation, task 9:
  α - Fp, C3, Cz, P3, Pz, P4, T3 : 8
  α - Fp, F3, Fz, F4, C3, Cz, P3, Pz, P4, T6 : 8, 9
Visual presentation, tasks 3, 3, 34, 36:
  α - P3, T5, T6, O, O : 8, 9
  α - O, O : 9
Audio presentation, tasks 4, 4, 44, 46:
  α - T5, T6 : 8
  α - T6 : 9
Audio-visual presentation, tasks 5, 5, 54, 56:
  α - O, O, T4, T6 : 8-
  α - O, O, T4, T6 : 8-13

REFERENCES
[1] Babiloni F., F. Cincotti, L. Lazzarini, J. Millan, J. Mourino, M. Varsta, J. Heikkonen, L. Bianchi, M. G. Marciani, "Linear classification of low-resolution EEG patterns produced by imagined hand movements", IEEE Transactions on Rehabilitation Engineering, vol. 8, pp.
[2] Manoilov P. K., "EEG Eye-Blinking Artefacts Power Spectrum Analysis", Proceedings of the International Conference on Computer Systems and Technologies CompSysTech 6, V. Tarnovo, Bulgaria, 5-6 June 6, pp. IIIA.3- IIIA.3-5.
[3] Manoilov P. K., "Electroencephalogram electrooculographic artefacts analysis", Proceedings of the National Conference with International Participation ELECTRONICS 6, - June 6, pp.
[4] Manoilov P. K., M. P. Iliev, "EOG artefacts duration analysis", Proceedings of the Fifteenth International Scientific and Applied Science Conference Electronics 6, September 6, Sozopol, Bulgaria, pp.
[5] Wolpaw J. R., "Brain-computer interface technology: A review of the first international meeting", IEEE Transactions on Rehabilitation Engineering, vol. 8, pp.

An Investigation on Signals in Magnetocardiography

Dimiter Tz. Dimitrov

Abstract. The main purpose of this paper is to discuss the lead systems currently being applied in detecting the equivalent magnetic dipole of the heart, and to discuss briefly the relationship between signals in the cases of ECG and MCG.

Keywords: magnetocardiography, electrocardiography

I. INTRODUCTION

It is well known that in electrocardiography the mapping of the distribution of the electric potential on the surface of the thorax has been applied since the first detection of the human electrocardiogram. It is similar in magnetocardiography. Though the magnetic field is a vector quantity and therefore has three components at each location in space, the mapping method has usually been applied for registering only one component (the x-component) of the magnetic field around the thorax. The mapping has usually been done on a certain grid. In lead field theory, it may be shown that lead systems used in mapping often introduce a distortion of the signal that necessarily originates from the inhomogeneities of the volume conductor. (The situation is the same as in mapping the electric potential field.) Some of these magnetic measurements may also be realized with a similar sensitivity distribution by use of electric measurements with a higher signal-to-noise ratio and with easier application (Fig. 1).

II. METHODS OF MAGNETOCARDIOGRAPHY

In addition to the analysis of the parameters of the MCG signals, recorded either by determining the equivalent magnetic dipole or by the mapping method, several other techniques have also been applied. Of these, the localization of cardiac sources is briefly discussed here. The localization of cardiac electric sources is a highly desired objective, since it may enable the localization of cardiac abnormalities, including those of abnormal conduction pathways. These may cause dangerous arrhythmias or contribute to a reduction in cardiac performance.
Abnormal conduction pathways, for example, conduct electric activity from the atrial muscle directly to the ventricular muscle, bypassing the AV junction. This is called the Wolff-Parkinson-White (WPW) syndrome. If a retrograde conduction pathway also exists from the ventricular mass back to the atrial mass, this reentry path may result in tachycardia. If the symptoms due to this abnormal conduction do not respond to drugs, then the tissue forming the abnormal pathway must be removed surgically, hence requiring prior localization. In clinical practice the conduction pathways are at present localized invasively with a catheter in an electrophysiological study, which may last several hours. This time may be shortened by first making an initial noninvasive localization of the equivalent source of the conduction pathway from the electric potentials on the surface of the thorax.

Dimiter Tz. Dimitrov is with the Faculty of Communication Technique and Technologies, Technical University of Sofia, Bulgaria, Sofia, 8, Kliment Ohridsky.

Fig. 1. The similarity between the lead fields of certain electric and magnetic leads is illustrated. If the magnetic field is measured in such an orientation (in the x direction in this example) and location that the symmetry axis is located far from the region of the heart, the magnetic lead field in the heart's region is similar to the electric lead field of a lead which is oriented normal to the symmetry axis of the magnetic lead. This similarity may also be verified from the similarity of the corresponding detected signals.

In magnetocardiographic localization the goal is to introduce an alternative to the electric localization using magnetic methods. Utilization of this complementary technique may improve the overall localization accuracy.
The magnetocardiographic localization is usually made by mapping the x-component of the cardiac magnetic field at 3-4 locations on the anterior surface of the thorax, with consecutive measurements using a single-channel magnetometer or simultaneously using a multichannel magnetometer. The dipole model is the most obvious source model to use for the localization methods. The accuracy of the magnetocardiographic localization depends to a great extent on the accuracy of the volume conductor model applied. The accuracy of the magnetocardiographic localization of the origin of an abnormal conduction pathway is of the order of

cm. Because magnetocardiographic localization has been shown to have greater complexity and costs compared with the electric method, the magnetic method does not at present compete with the electric method in clinical practice.

III. METHODS FOR DETECTING THE MAGNETIC HEART VECTOR

Each component of such a lead system detects one orthogonal component of the magnetic dipole moment of a volume source. Three such orthogonal components form the complete lead system. A natural method to realize such a lead system is to make either unipolar or bipolar measurements on the coordinate axes (Fig. 3). It is possible to assume that the heart is a spherical conducting region between the insulating lungs. For the XYZ and ABC lead systems it would be enough to assume cylindrical symmetry for each component, which leads to a spherically symmetric volume conductor for the three orthogonal measurements. The y and z components of the unipositional lead system require, however, an assumption of a conducting spherical heart region inside the insulating lungs. This assumption forces the lead fields to flow tangentially within the heart region. This is called the self-centering effect.

The magnetic dipole moment m of a volume current distribution J in an infinite, homogeneous volume conductor with respect to an arbitrary origin can be defined as

m = (1/2) ∫v r × J dv,    (1)

where m is the magnetic dipole moment, J is the density of the volume current distribution, r is the radius vector of an arbitrary current contour, and v is the volume of calculation.

The lead system that detects this magnetic dipole moment has three orthogonal components. Each component produces, when energized with the reciprocal current, a linear, homogeneous reciprocal magnetic field B_ML over the source region. These reciprocal magnetic fields induce lead fields J_LM in which the lead current is directed tangentially, and its density is proportional to the distance from the symmetry axis (Fig. 2).
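The dipole-moment integral can be checked numerically by discretizing it over voxels. The sketch below is an illustration, not part of the paper's method; it assumes the conventional magnetostatic factor of 1/2 and a uniform voxel volume, and recovers the textbook moment m = I·π·a² for a current loop of radius a carrying current I.

```python
import numpy as np

def magnetic_dipole_moment(points, J, dv):
    """Discretization of m = (1/2) * integral of (r x J) dv.
    points : (N, 3) voxel-centre coordinates r [m]
    J      : (N, 3) current density at each voxel [A/m^2]
    dv     : voxel volume [m^3] (uniform, an assumption of this sketch)"""
    return 0.5 * np.sum(np.cross(points, J), axis=0) * dv

# Model a circular loop of radius a carrying current I as a thin
# volume current of cross-section A, split into N segments.
a, I, A, N = 0.05, 2.0, 1e-6, 1000
theta = 2 * np.pi * np.arange(N) / N
points = np.column_stack([a * np.cos(theta), a * np.sin(theta), np.zeros(N)])
J = (I / A) * np.column_stack([-np.sin(theta), np.cos(theta), np.zeros(N)])
dv = A * (2 * np.pi * a / N)          # volume of one segment
m = magnetic_dipole_moment(points, J, dv)
# m[2] approaches the analytic loop moment I * pi * a^2
```

The x and y components vanish by symmetry, while the z component converges to the analytic value as the segmentation is refined.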
Fig. 2A. One component of the reciprocal magnetic field B_ML

Fig. 3. A natural method to measure the magnetic dipole moment of a source located at the origin is to measure the (x, y, z) components of the magnetic field on the corresponding coordinate axes.

IV. COMPARISON BETWEEN MCG AND ECG

It can be noted that the bioelectric activity in the heart is responsible for the generation of a source current density, namely J(x, y, z, t). As stated before, both the electric and the magnetic field are generated by this same source which, in turn, responds to the electrophysiological phenomenon of depolarization and repolarization of cardiac muscle cells. A logical question arises as to whether any new information might be provided by the magnetic field measurement that is not available from the electric potential field measurement. While it appears, on certain theoretical grounds, that the electric and magnetic fields are not fully independent, other reasons exist for the use of magnetocardiography. These may be divided into theoretical and technical features. The former are based on the universal properties of biomagnetic fields, and the latter on the technical features of the instrumentation. There are some differences between the plots of the potential curves V_MCG and V_ECG in the cases of MCG and ECG (Fig. 4).

Fig. 2B. One component of the lead field J_LM of an ideal lead

burns this is a crucial advantage.) Second, the SQUID (Superconducting QUantum Interference Device) magnetometer is readily capable of measuring DC signals. These are associated with the S-T segment shift in myocardial infarction. Such signals can be obtained electrically only with great difficulty. Although the clinical value has yet to be demonstrated, it should be noted that, because of the difficulty in performing electrical measurements, there have been few investigations of DC potentials.

Fig. 4. Simultaneous plots of the experimental potential curves during the QRS complex in the cases of MCG (solid curve) and ECG (dashed curve).

A. Theoretical Advantages of MCG

First, the nature of the lead fields of electric and magnetic leads is quite different. Specifically, the ideal magnetic lead is sensitive only to tangential components of activation sources and therefore should be particularly responsive to abnormalities in activation (since normal activation sources are primarily radial). Furthermore, the tangential components are attenuated in the ECG because of the Brody effect. Another factor is that the signal-to-noise ratios of the electrical and magnetic recordings are affected by different factors, so there could be a practical advantage in using one over the other despite their similarities in content.

Second, the magnetic permeability of the tissue is that of free space. Therefore the sensitivity of the MCG is not affected by the high electric resistivity of lung tissue. This makes it possible to record with the MCG, from the posterior side of the thorax, the electric activity of the posterior side of the heart. That is difficult to do with surface ECG electrodes, but is possible with an esophageal electrode, which is, however, inconvenient for the patient. Another important application of this feature is the recording of the fetal MCG.
During a certain phase of pregnancy the fetal ECG is very difficult to record because of the insulating fat layer of the fetus.

B. Technical Advantages of MCG

First, a possibly important distinction is that the magnetic detector is not in contact with the subject. For mass screening, there is an advantage in not requiring skin preparation and attachment of electrodes. (In the case of patients with skin

V. CONCLUSION

It is clear that the application of MCG signals in medical diagnostics has many advantages:

a/ The ECG measures the electric potential field, which is a scalar field; therefore one measurement at each measurement location is enough. The MCG measures the magnetic field, which is a vector field; therefore MCG measurements should provide a vector description, that is, three orthogonal measurements at each measurement location, to get all the available information.

b/ In MCG we are interested in the electric activation of the whole cardiac muscle, not only of its anterior surface. Therefore, to compensate for the proximity effect, MCG measurements should be made symmetrically both on the anterior and on the posterior side of the thorax. Actually, the posterior measurement of the MCG increases the information especially on the posterior side of the heart, where the sensitivity of all ECG leads is low due to the insulating effect of the lungs. (As noted earlier, in the measurement of the MEG we are mainly interested in the electric activation of the surface of the brain, the cortex; therefore a unipolar measurement is more relevant in measuring the MEG.)

c/ On the basis of the existing literature on the MCG, nonsymmetric unipositional measurement seems to give the same diagnostic performance as the mapping of the x-component of the magnetic field on the anterior side of the thorax.
A combination of electric and magnetic measurements (i.e., ECG and MCG) gives a better diagnostic performance than either method alone with the same number of diagnostic parameters, because the number of independent measurements doubles.

REFERENCES
[1] Karp P., "Cardiomagnetism", in Biomagnetism, Proc. Third Internat. Workshop on Biomagnetism, Berlin, May 98, pp. 9-58.
[2] Hristov V., V. Vatchkov, "Web based system for microscope observation with structural analyzer EPIQUANT", Engineering Science Magazine, 6, No., pp. 7-4.
[3] Hristov V., "The ZigBee wireless networks: a realization", Proc. of the Conference FMNS 5, Blagoevgrad, 9- June 5, vol., pp.


A Stimulation of Neural Tissue by Pulse Magnetic Signals

Dimiter Tz. Dimitrov

Abstract. In this paper a theoretical and experimental investigation of the stimulation of neural tissue by pulse magnetic signals is described. The experimental investigation has been done using an appropriate circuit. A comparison between direct electrical stimulation of neural tissue and stimulation by pulse magnetic signals is made, with the respective conclusions and recommendations. An optimisation of the parameters of the pulse magnetic signals used for stimulation is done as well.

Keywords: magnetic stimulation, neural tissue, pulse magnetic signals

I. INTRODUCTION

It is well known that the origin of the biomagnetic field is the electric activity of biological tissue. This bioelectric activity produces an electric current in the volume conductor which induces the biomagnetic field. This correlation between the bioelectric and biomagnetic phenomena is, of course, not limited to the generation of the bioelectric and biomagnetic fields by the same bioelectric sources. The correlation also arises in the stimulation of biological tissue. Magnetic stimulation is a method for stimulating excitable tissue with an electric current induced by an external time-varying magnetic field. It is important to note here that, as in the electric and magnetic detection of the bioelectric activity of excitable tissues, both the electric and the magnetic stimulation methods excite the membrane with electric current. The former does so directly, while the latter does it with the electric current which is induced within the volume conductor by the time-varying applied magnetic field. The reason for using a time-varying magnetic field to induce the stimulating current is, on the one hand, the different distribution of the stimulating current and, on the other hand, the fact that the magnetic field penetrates unattenuated through such regions as the electrically insulating skull.
This makes it possible to avoid a high density of stimulating current at the scalp when stimulating the central nervous system and thus avoid pain sensation. Also, no physical contact between the stimulating coil and the target tissue is required, unlike with electric stimulation.

Dimiter Tz. Dimitrov is with the Faculty of Communication Technique and Technologies, Technical University of Sofia, Bulgaria, Sofia, 8, Kliment Ohridsky.

II. THE DESIGN OF STIMULATOR COILS

A magnetic stimulator includes a coil that is placed on the surface of the skin. To induce a current into the underlying tissue, a strong and rapidly changing magnetic field must be generated by the coil. In practice, this is generated by first charging a large capacitor to a high voltage and then discharging it with a thyristor switch through a coil. The principle of a magnetic stimulator is illustrated in Fig. 1. The magnitude of the induced electromotive force (emf) e is proportional to the rate of change of the current, dI/dt, and to the inductance of the coil, L. The term dI/dt depends on the speed with which the capacitors are discharged; the latter is increased by use of a fast solid-state switch (i.e., a fast thyristor) and minimal wiring length. The inductance L is determined by the geometry and constitutive property of the medium. The principal factors for the coil system are the shape of the coil, the number of turns on the coil, and the permeability of the core. For typical coils used in physiological magnetic stimulation, the inductance may be calculated from the following equations.

III. CURRENT DISTRIBUTION IN MAGNETIC STIMULATION

The magnetic permeability of biological tissue is approximately that of a vacuum. Therefore the tissue does not have any noticeable effect on the magnetic field itself. The rapidly changing field of the magnetic impulse induces an electric current in the tissue, which produces the stimulation.
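The capacitor-discharge principle described above can be illustrated with the closed-form current of a series RLC circuit. This is a sketch only: the supply voltage, coil inductance, and loop resistance below are assumed illustrative values, not the paper's, and a lightly damped (underdamped) circuit is presumed.

```python
import numpy as np

def discharge_current(t, V0, R, L, C):
    """Current of a series RLC discharge (capacitor C charged to V0,
    switched through a coil L with total loop resistance R) in the
    underdamped case: i(t) = (V0 / (w L)) * exp(-a t) * sin(w t)."""
    a = R / (2.0 * L)                    # damping rate [1/s]
    w = np.sqrt(1.0 / (L * C) - a * a)   # damped angular frequency [rad/s]
    return (V0 / (w * L)) * np.exp(-a * t) * np.sin(w * t)

# Assumed illustrative values: a 476 uF bank (the capacitance quoted later
# in Section IV), hypothetical V0 = 1 kV, L = 10 uH, R = 10 mOhm.
t = np.linspace(0.0, 2e-3, 20001)
i = discharge_current(t, 1000.0, 10e-3, 10e-6, 476e-6)
```

The initial slope of the current is V0/L, since the full capacitor voltage initially appears across the coil, and the oscillation decays with the time constant 2L/R.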
Owing to the reciprocity theorem, the current density distribution of a magnetic stimulator is the same as the sensitivity distribution of a magnetic detector having a similar construction. It is necessary to note that in lead field theory the reciprocal energization equals the application of stimulating energy. The distribution of the current density in magnetic stimulation may be calculated using the method introduced by Malmivuo (1976) and later applied to the MEG (Malmivuo, 98). Two cases, the application of a single coil and of a quadrupolar coil configuration, are described below.

Fig. 1. The principle of the magnetic stimulator

Fig. 2. Isointensity lines and half-intensity volume for a stimulation coil with 5 mm radius. The distance of the coil plane from the scalp is mm.

A. Single Coil

The current distribution of a single coil, producing a dipolar field, is presented in Fig. 2, which illustrates the isointensity lines and the half-intensity volume for a coil with a 5 mm radius. The concepts of isointensity line and half-intensity volume are reciprocal to the isosensitivity line and half-sensitivity volume. Because of cylindrical symmetry, the isointensity lines coincide with the magnetic field lines.

B. Quadrupolar Coil Configuration

The coils can be equipped with cores of highly permeable material. One advantage of this arrangement is that the magnetic field produced is better focused at the desired location. Constructing the permeable core in the form of the letter V results in the establishment of a quadrupolar magnetic field source. With a quadrupolar magnetic field, the stimulating electric current field in the tissue has a linear instead of a circular form. In some applications the result is more effective stimulation. On the other hand, a quadrupolar field decreases as a function of distance faster than that of a dipolar coil. Therefore, the dipolar coil is more effective in stimulating objects located deeper within the tissue.

IV. STIMULUS PULSE

The experimental stimulator (Fig. 1) has a capacitor bank with a capacitance of 476 μF. This was charged to 9-6 V and then discharged by a thyristor through the stimulating coil. The result was a magnetic field pulse of .-. T, 5 mm away from the coil. The length of the magnetic field pulse was of the order of 5-3 μs. The energy W which is required to stimulate tissue is proportional to the square of the corresponding magnetic induction B. According to Faraday's induction law, this magnetic field is in turn approximately proportional to the product of the electric field magnitude E and the pulse duration t:
W ∝ B² ∝ (E t)²,    (1)

where W is the energy required to stimulate the tissue, B is the magnetic induction, E is the electric field intensity, and t is the pulse duration.

The effectiveness of the stimulator with respect to energy transfer is proportional to the square root of the magnetic energy stored in the coil when the current in the coil reaches its maximum value. A simple model of a nerve fiber is to regard each node as a leaky capacitor that has to be charged. Measurements with electrical stimulation indicate that the time constant of this leaky capacitor is of the order of 5-3 μs. Therefore, for effective stimulation, the current pulse into the node should be shorter than this. A short pulse in the coil requires less energy, but obviously there is a lower limit too.

V. ACTIVATION OF EXCITABLE TISSUE BY TIME-VARYING MAGNETIC SIGNALS

The actual stimulation of excitable tissue by a time-varying magnetic field results from the flow of induced current across membranes. Without such flow a depolarization is not produced and excitation cannot result. Unfortunately, one cannot examine this question in a general sense but rather must look at specific geometries and structures. To date this has been done only for a single nerve fiber in a uniform conducting medium with a stimulating coil whose plane is parallel to the fiber. In the model examined by Roth and Basser, the nerve is assumed to be unmyelinated, infinite in extent, and lying in a uniform unbounded conducting medium, with the membrane described by the Hodgkin-Huxley equations.
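The leaky-capacitor picture of a node used in Section IV can be made quantitative with the classical strength-duration relation for a rectangular current pulse into an RC membrane. All values below (normalized threshold and resistance, an assumed time constant) are illustrative, not the paper's measurements.

```python
import numpy as np

TAU = 150e-6   # assumed membrane time constant [s]
R_M = 1.0      # normalized membrane resistance
V_TH = 1.0     # normalized threshold depolarization

def threshold_current(pulse_width):
    """Rectangular-pulse threshold for a leaky-capacitor (RC) node:
    the depolarization V(t_p) = I * R_M * (1 - exp(-t_p / TAU))
    must just reach V_TH at the end of the pulse."""
    return V_TH / (R_M * (1.0 - np.exp(-pulse_width / TAU)))

def pulse_energy(pulse_width):
    """Relative stimulus energy ~ I^2 * t_p at threshold (unit load)."""
    i = threshold_current(pulse_width)
    return i * i * pulse_width
```

Very short pulses need high currents, so the energy rises again, while very long pulses waste energy after the node is charged; the threshold energy therefore has a minimum at a pulse width comparable to the membrane time constant, consistent with the remark that a short pulse needs less energy but that a lower limit exists.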
The transmembrane voltage Vm is shown to satisfy equation (2):

λ² ∂²Vm/∂x² - τ ∂Vm/∂t - Vm = λ² ∂Ex/∂x,    (2)

where Vm is the transmembrane voltage, λ is the membrane space constant, τ is the membrane time constant, x is the axial coordinate (orientation) of the fiber, and Ex is the x-component of the magnetically induced electric field (proportional to the x-component of the induced current density).

It is interesting that it is the axial derivative of this field that is the driving force for the induced voltage. For a uniform system in which end effects can be ignored, excitation will arise near the site of the maximum changing current and not of the maximum current itself. In the experimental investigation the coil lies in the xy plane with its center at x =, y =, while the fiber is parallel to the x axis

at y = ,5 cm and z = , cm. They consider a coil with a radius of .5 cm wound from 3 turns of wire of . mm radius. The coil, located at a distance of . cm from the fiber, is a constituent of an RLC circuit, and the time variation is that resulting from a voltage step input. Assuming C = μF and R = 3 Ω, an overdamped current waveform results. From the resulting stimulation it is found that excitation results at x (or . cm, depending on the direction of the magnetic field), which corresponds to the position of maximum ∂Ex/∂x. The threshold applied voltage for excitation is determined to be 3 V. (This results in a peak coil current of around A.) These design conditions could be readily realized. Stimulators with short risetimes (<6 μs) need only half the stored energy of those with longer risetimes (>8 μs). The use of a variable field risetime also enables the membrane time constant to be measured, and this may contain useful diagnostic information.

REFERENCES
[1] D. Dimitrov, M. Dontschewa, "Computer modelling of magnetic field", 4. Intern. Konferenz Computer Aided Engineering Education, Krakow, 1995.
[2] D. Dimitrov, M. Dontschewa, M. Nikolova, "Computer simulation of 3D-Signals of Nemeks Apparat in Physioterapie", 7. Multimedia Fachtagung, Dortmund, 1997.
[3] M. Dontschewa, H.-P. Schade, "Leistungsfähiges low-cost multimediales Präsentationssystem. Bestandteile, Voraussetzungen", 39. IWK, Band, S. 384, Ilmenau, 1994.
[4] Barker A. T., Freeston I. L., Garnham C. W. (99), "Measurement of cortical and peripheral neural membrane time constant in man using magnetic nerve stimulation", J. Physiol. (Lond.) 43: 66.
[5] Barker A. T., Freeston I. L., Jalinous R., Merton P. A., Morton H. B. (1985), "Magnetic stimulation of the human brain", J. Physiol. (Lond.) 369: 3P.
[6] Dimitrov D., Medical Information Systems, handbook, Technical University of Sofia, 5.

VI. CONCLUSION

The following conclusions can be drawn from the theoretical and experimental investigations described above:

1. Magnetic stimulation can be applied to nervous stimulation either centrally or peripherally.

2. The main benefit of magnetic stimulation is that the stimulating current density is not concentrated at the skin, as in electric stimulation, but is more equally distributed within the tissue. This is especially true in transcranial magnetic stimulation of the brain, where the high electric resistivity of the skull does not have any effect on the distribution of the stimulating current.

3. Another benefit of the magnetic stimulation method is that the stimulator does not have direct skin contact. This is a benefit in the sterile environment of an operating theater.

4. It may be predicted that magnetic stimulation can be applied particularly to the stimulation of cortical areas, because in electric stimulation it is difficult to produce concentrated stimulating current density distributions in the cortical region and to avoid high current densities on the scalp.
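As a numerical illustration of the Roth-Basser cable model discussed in Section V, the sketch below integrates the passive cable equation lam² d²Vm/dx² - tau dVm/dt - Vm = lam² dEx/dx with an explicit finite-difference scheme. The space and time constants, grid, and drive amplitude are assumed values for illustration only; end effects are ignored by zeroing the second difference at the boundaries.

```python
import numpy as np

def simulate_cable(Ex_grad, dx, dt, steps, lam=1e-3, tau=150e-6):
    """Explicit Euler integration of the magnetically driven cable equation
        lam^2 d2Vm/dx2 - tau dVm/dt - Vm = lam^2 dEx/dx.
    Ex_grad : dEx/dx sampled on the spatial grid [V/m^2]
    lam, tau: assumed space [m] and time [s] constants.
    Stability requires dt * lam**2 / (tau * dx**2) <= 1/2."""
    Vm = np.zeros(len(Ex_grad))
    for _ in range(steps):
        d2V = np.zeros_like(Vm)
        d2V[1:-1] = (Vm[2:] - 2.0 * Vm[1:-1] + Vm[:-2]) / dx**2
        Vm = Vm + dt * (lam**2 * d2V - Vm - lam**2 * Ex_grad) / tau
    return Vm

# A uniform negative gradient dEx/dx depolarizes the fiber toward the
# steady state Vm = -lam^2 * dEx/dx (1 mV for the assumed values below).
g = np.full(101, -1000.0)                     # assumed drive [V/m^2]
Vm = simulate_cable(g, dx=1e-4, dt=5e-7, steps=1200)
```

The driving term is the axial gradient of the induced field (the activating function), in line with the observation in Section V that excitation arises where the induced current changes fastest along the fiber, not where it is largest.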

Eye-Blinking Artefacts Duration Analysis

Plamen Manoilov

Abstract. Artefacts impede the analysis of the electroencephalogram (EEG) signal and should be handled properly. The most common and characteristic kinds of artefacts are the electrooculographic (EOG) ones, especially the subject's eye blinks. In this paper an analysis of the duration of the EEG sections polluted by eye-blinking artefacts is described, in connection with using the EEG for a brain-computer interface (BCI) working with the α- and μ-rhythm (range 8-13 Hz) brain potentials.

Keywords: BCI, blink artefact, EEG analysis, EOG, power spectrum

I. INTRODUCTION

A direct brain-computer interface (BCI) is an assistive device that accepts commands directly from the human brain without requiring any physical movement. The ultimate goal of such an interface is to provide effective communication without using the normal neuromuscular output pathways of the brain, but by accepting commands directly encoded in the neurophysiological signals. A BCI should be able to detect the user's wishes and commands while the user remains silent and immobilized. For people who are locked in after having lost all voluntary muscle control due to advanced amyotrophic lateral sclerosis, brainstem stroke or muscular dystrophy, a BCI may be their only means of communication with the environment. Obviously, brain-computer communication is vital for people with such severe motor disabilities to increase their quality of life. A BCI may be just as useful for people without any disability. In the Alternative Control Technology (ACT) program of the US Air Force Research Laboratory [] EEG is used to achieve hands-free control by US military pilots. To be as effective as possible, an ideal BCI should allow the user to determine when a command is to be initiated, provide multiple independently controllable channels, and support high information transfer rates.
It is unlikely that an ideal BCI will be available in the near future, but a simple, reliable interface providing single-switch control would also be beneficial for locked-in patients. The majority of research on human brain-computer communication has been performed using electroencephalographic (EEG) recordings [, 5, 6], which are well studied, easily available, and noninvasive. The less widely used electrocorticogram (ECoG) [4] is only available if subjects require electrode implantation on the cortical surface for clinical treatment or evaluation, and research access could be scheduled around clinical activities. Compared with the EEG, ECoG recordings are less vulnerable to artefacts and have superior spatial resolution, giving the ECoG the potential to allow brain-computer communication with greater functionality, although a surgical risk always exists. When designing a BCI system, one can choose from a variety of features that may be useful for classifying the brain activity recorded during mental task performance. The EEG is measured, sampled, and then used for communication. Depending on the BCI, particular preprocessing and feature extraction methods are applied to the EEG samples of -.5 s length. It is then possible to detect the task-specific EEG signals or patterns from the EEG samples with a certain level of accuracy. A classifier, which could be a Statistical Model Neural Network (SMNN), Hidden Markov Models (HMM) or a variation of Linear Discriminant Analysis (LDA), then classifies these features. EOG stands for electro-oculographic artefacts, which appear in the EEG as a result of the subject's eye movements and blinks. Eye-blink artefacts are easy to distinguish. In the time domain they show an enormously high amplitude relative to the rest of the EEG signal and are supposed to have an influence on the control.

Plamen K. Manoilov is with the Communication Technique and Technologies Department, RU A. Kanchev, 8 Studentska Str., Rousse, Bulgaria.

II.
PROBLEM STATEMENT AND STUDY DESCRIPTION

This study was done during work on a project for creating a BCI, started at Delft University of Technology, The Netherlands, in 4. Professor drs. dr. Leon Rothkrantz, head of the Man-Machine Interaction research group, Faculty of Electrical Engineering, Mathematics and Computer Science, supervised the project. During the experiments the subjects performed different mental tasks, among them mental rotation, motor imagery, mathematical calculations, visual presentations, etc., issuing different patterns in the mu (μ) and alpha (α) rhythmic brain activity frequency ranges, which after successful classification could be used for building a BCI. As a result of the experiments, a database containing 4 sessions of EEG data, around minutes each, recorded from two subjects (male, 5 and 3), was prepared for use, together with tools for statistical analysis (R, MATLAB). The second stage was processing the EEG from the database and

finding (if possible) a specific pattern for every mental task. After classifying the tasks, some of them, with clearer and better expressed patterns, could be chosen for BCI control. One of the questions to be solved was how to deal with the EOG artefacts caused by the subject's eye blinks. Sources exist [] where the researchers process the data containing eye blinks. On the other hand, other sources exist [6, 7] where it is stated that eye blinks could lead to errors in BCI research and work. The decision was taken to study the power spectrum of the EOG artefacts and to define their influence on the EEG in connection with the chosen working frequency range. After this study was done [9], the conclusion was made that the influence of the EOG artefacts on the EEG range 8-13 Hz is significant and that they should be eliminated from the data before feature extraction. For further data processing, the decision was taken first to cut out the blinks and only then to process the data. Even when doing this by hand, the question about the length of the data segments polluted by an eye blink arises. Some authors [3] simply omit the trials in which they discover eye blinks. They achieve this automatically by linearly detrending the time series and removing those whose maximum rectified EOG amplitude exceeds a threshold. If a blink appears at the end of a trial, its influence could contaminate the next trial. Segments for processing are -.5 s long. The blink influence could last longer. On the other hand, blindly cutting long segments with blinks will discard useful parts of the EEG and slow down the BCI's work. Another author [], Fig. 2, recognizes and marks the blinks by using parameters of the EEG waveform where it has the highest amplitude. Later the marked EEGs are intended to be used by medical doctors. A study of the length of the segments polluted by blinks has not been reported.
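The detrend-and-threshold rejection just described can be sketched in a few lines. The threshold value and trial layout below are assumptions for illustration, not the cited authors' settings.

```python
import numpy as np

def reject_blink_trials(trials, threshold_uv=75.0):
    """Linearly detrend each trial, then keep only trials whose maximum
    rectified amplitude stays below a threshold (assumed value here).
    trials : (n_trials, n_samples) array, e.g. an EOG channel in uV."""
    n = trials.shape[1]
    t = np.arange(n)
    kept = []
    for trial in trials:
        slope, intercept = np.polyfit(t, trial, 1)   # linear detrend
        detrended = trial - (slope * t + intercept)
        if np.max(np.abs(detrended)) <= threshold_uv:
            kept.append(detrended)
    return np.array(kept)
```

A trial carrying a several-hundred-microvolt blink spike is dropped, while a clean trial with a slow drift survives, because the drift is removed by the linear fit before thresholding.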
Blinks with different forms and durations in time domain Fig.. Parameters, used in [] to recognize eye blinks Blinks, recorded during different sessions and tasks are shown in Fig.. Except their high amplitude in the low frequency range they do not have any specific and repeated forms. The length of some of them exceeds s (56 samples). The duration of their visible part in the time domain is different and subject-dependent. In fact they depend on the subject s emotional stress, fatigue, eye dampness, etc. The study described in this paper continues the work in []. To find the duration of the influence of the blink to the EEG, Gabor transform is used, according to () * P( k, t) = G ( k, t) G ( k, t), where () N N N iπkn / N G N ( k, t) = x( n, t) H ( n) e, () n= N - number of samples for the analysis, th H (n) - n sample of Hamming window with length N, th x ( n, t) - n sample of the current segment, with offset t from the beginning of the EEG. The study uses 6-seconds EEG sections with blinks, to envelop parts before and after the blink. The blink is centered. Every section is divided to segments s each with.5 s overlapping. Moving average filter is used along to equal frequencies in neighbour segments after Fourier analysis is done. The results are given as 3D plots in Figs. 3 and 4. First a similar to Fig. blink s form is chosen - Fig. a, session 3, task 36, run. The position axis marks in Fig. 3 correspond to the real frequency as (position )=frequency, Hz. The distance between the tick marks in time scale is.5 s. In all channels amplitude variations of some frequency components in the range of 8 3 Hz are noticed synchronously with Hz-low frequency component, caused by the blink (in C3, Fig. 3a, and P3, Fig. 3b, at Hz, and in O, Fig. 3c, at and Hz). No matter that the white noise 464

is slightly filtered, each time there exists in the α-range a frequency component (or components) whose amplitude follows the low frequency caused by the eyelids. The visible part of the eye blink in the time domain, Fig. 1a, is around 8 samples - .5 s.

Fig. 3. Spectrogram for C3, P3, O, session 3

Fig. 4. Spectrogram of a long-lasting blink, C3, P3, O, session 3

Following the amplitude of the Hz component, the duration over which it is raised

(comparable to its steady state) is 3 s. The changes in the amplitude of the Hz frequency component are similar. The spectrograms of the more complex and long-lasting blink from Fig. 1b, session 3, task 36, are shown in Fig. 4. Although three visible periods of the low frequency can be seen there, this is one blink. Its visible part in the time domain lasts above s (more than 56 samples). The power is higher in the low-frequency part of the range, and the 8 Hz component is most affected. Similarly to the previous case, the eye blink influence lasts on average 3 s. The analysis of blinks with different forms and durations in the time domain results in an almost equal length, 3 s, of the affected section. Unlike [], where it is stated that the average blink duration is ms, in our EEG records the visible part of the blinks in the time domain is from to ms long; most of them last around 5 ms. Because of their different, subject-dependent forms, it is impossible to define the duration of eye blinks in the time domain. A more important characteristic of the blinks is their influence on the α-range frequencies: blinks with different forms in the time domain affect different frequencies between 8 and 3 Hz, and these frequencies appear synchronously with the Hz frequency component.

III. CONCLUSIONS

In all channels the amplitude of the blinks is more than 5 times higher than the amplitude of blink-free EEG data. The power of the eye blinks is concentrated in the range up to 3 Hz. Eye blinks can be recognized in the time domain by monitoring the amplitude of the raw EEG, or in the frequency domain by monitoring the -3 Hz power. In the 8-3 Hz range, in segments which contain blinks, the power of the frequency components is more than 5% higher in comparison with blink-free EEG parts. When the analysed segment contains a blink, the power in all channels varies, which lowers the probability of a correct classification of the mental task patterns.
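The two detection criteria summarized above - raw-EEG amplitude in the time domain and low-band power in the frequency domain - can be sketched together with the Hamming-windowed power spectrum used for the spectrograms. The sketch below is a minimal illustration assuming a 256 Hz sampling rate, 1-s segments, and arbitrary thresholds; none of these values are taken from the paper's recordings.

```python
import numpy as np

def segment_power(seg):
    """Hamming-windowed power spectrum of one segment:
    G_N(k) = sum_n x(n) H(n) exp(-i 2 pi k n / N), P(k) = G_N conj(G_N)."""
    G = np.fft.rfft(seg * np.hamming(len(seg)))
    return (G * np.conj(G)).real

def flag_blink_segments(eeg, fs=256, amp_factor=5.0, low_hz=3.0, low_frac=0.6):
    """Flag 1-s segments that likely contain a blink, either by raw
    amplitude (time domain) or by the share of spectral power below
    low_hz (frequency domain). All thresholds here are assumptions."""
    N = fs
    segs = [eeg[i:i + N] for i in range(0, len(eeg) - N + 1, N)]
    peaks = np.array([np.abs(s).max() for s in segs])
    med_peak = np.median(peaks)
    flags = []
    for s, pk in zip(segs, peaks):
        P = segment_power(s - s.mean())            # remove DC before the FFT
        freqs = np.fft.rfftfreq(N, 1.0 / fs)
        frac_low = P[freqs <= low_hz].sum() / P.sum()
        flags.append(bool(pk > amp_factor * med_peak or frac_low > low_frac))
    return flags

# Synthetic check: low-amplitude noise with one blink-like 1 Hz bump.
rng = np.random.default_rng(1)
eeg = 0.1 * rng.standard_normal(10 * 256)
t = np.arange(256) / 256.0
eeg[4 * 256:5 * 256] += 20 * np.sin(2 * np.pi * 1.0 * t)  # simulated blink
flags = flag_blink_segments(eeg)
print(flags[4], sum(flags))  # True 1
```

Only the segment containing the simulated blink is flagged; in the study, flagged segments (and a 3 s neighbourhood) would then be rejected before feature extraction.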
It was decided to omit the EEG segments which contain eye blinks. The power of the working frequencies (8-3 Hz) can be followed from the spectrograms of all the channels. Depending on the form of the blink in the time domain, different frequency components change synchronously with the Hz frequency, where the power of the blinks is concentrated. According to the study, rejecting a 3 s section is quite enough to obtain blink-free neighbouring parts.

REFERENCES

[] Blankertz B., G. Curio, K.-R. Müller, "Classifying single trial EEG: Towards brain computer interfacing", in T. G. Diettrich, S. Becker, Z. Ghahramani (eds.), Advances in Neural Information Processing Systems (NIPS), vol. 4, pp. .
[] Bogacz R., Blinking Artefact Recognition in EEG Signal Using Artificial Neural Network, Master's thesis, Wrocław University of Technology, Department of Informatics, Wrocław (in Polish).
[3] Burke D. P., S. P. Kelly, P. de Chazal, R. B. Reilly, C. Finucane, "A Parametric Feature Extraction and Classification Strategy for Brain Computer Interfacing", IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 3, no. , March 5.
[4] Freeman W. J., L. J. Rogers, M. D. Holmes, D. L. Silbergeld, "Spatial Spectral Analysis of Human Electrocorticograms, Including the Alpha and Gamma Bands", Journal of Neuroscience, 95, pp. .
[5] Hjorth B., "An on-line transformation of EEG scalp potentials into orthogonal source derivations", Electroencephalography and Clinical Neurophysiology, vol. 39, 1975, pp. .
[6] Lauer R. T., P. H. Peckham, K. L. Kilgore, "EEG-Based Control of a Hand Grasp Neuroprosthesis", NeuroReport, vol. , 1999, pp. .
[7] Lauer R. T., P. H. Peckham, K. L. Kilgore, W. J. Heetderks, "Applications of Cortical Signals to Neuroprosthetic Control: A Critical Review", IEEE Transactions on Rehabilitation Engineering, vol. 8, pp. .
[8] Malmivuo J., R. Plonsey, Bioelectromagnetism: Principles and Applications of Bioelectric and Biomagnetic Fields, New York, Oxford: Oxford University Press, 1995.
[9] Manoilov P.
K., "EEG Eye-Blinking Artefacts Power Spectrum Analysis", Proceedings of the International Conference on Computer Systems and Technologies, CompSysTech 2006, V. Tarnovo, Bulgaria, June 2006, pp. IIIA.3- - IIIA.3-5.
[] Manoilov P. K., "Electroencephalogram electrooculographic artefacts analysis", Proceedings of the National Conference with International Participation, ELECTRONICS 2006, June 2006, pp. (in Bulgarian).
[] Middendorf M., G. McMillan, G. Calhoun, K. S. Jones, "Brain-Computer Interfaces Based on the Steady-State Visual-Evoked Response", IEEE Transactions on Rehabilitation Engineering, vol. 8(), pp. .
[] Parra L. C., C. D. Spence, A. D. Gerson, P. Sajda, "Response Error Correction - A Demonstration of Improved Human-Machine Performance Using Real-Time EEG Monitoring", IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. , pp.

The Web Site System for Registration and Processing of Medical Data of Urological Department Patients

Jaroslaw Makal, Jacek Bilkiewicz and Andrzej Nazarkiewicz

Jaroslaw Makal is with the Faculty of Electrical Engineering at Bialystok Technical University (BTU), ul. Wiejska 45D, 5-35 Bialystok, Poland. Jacek Bilkiewicz is a student at the BTU Faculty of Electrical Engineering, Poland. Andrzej Nazarkiewicz is with the J. Śniadecki Provincial Integrated Hospital, M.C. Skłodowskiej 6, 5-95 Białystok, Poland.

Abstract - In this paper a functional application for gathering, processing and interpreting medical data is described. The database can be used wherever Internet access is available, with minimal hardware requirements. Information stored on the server can be reviewed, and different statistics can be created depending on medical or epidemiological needs. Future plans for extending the database with a diagnostic module and an inference engine are mentioned.

Keywords - Internet technology, database, data processing, computer-aided diagnosis.

I. INTRODUCTION

Computer measuring systems are used in medicine to improve the quality and efficiency of health care processes. Personal computers have become inexpensive and relatively easy to use. Internet technology for data exchange can be utilized from almost any office or doctor's surgery (Fig. ). This technical and socio-economic development has led to a situation where it is appropriate to assume that a large number of doctors are able to access an Internet-based information system and collect their medical data for common or individual usage []. Every doctor also has the possibility to review and analyse medical data in their work place: hospital, clinic or home study.

Fig. . The users' access to the database

II. STRUCTURE OF THE DATABASE

The information set is organized in three separate tables (Fig. ): doctors, patients and examinations. The doctors table includes the identification data of all users, which allows them to access the database. The patients table contains the patients' personal data and the IDs of their doctors. The biggest table, examinations, is composed of an identification part (patient, doctor and examination) and the result table (about rows for every record).

Fig. . The database structure

Two technologies [] have been used to build this database: MySQL, which enables the creation of any data collection with additional descriptions - it is very popular and widely used by web masters, also because of its low cost (no licence is necessary) - and PHP (Hypertext Preprocessor), the script language executed on the server, allowing dynamic generation of the web site contents. The process of database creation is automated by placing all commands in the file baza.php, located in the main directory on the server. The administrator manages the users' status and allows them to make use of all records or only of their own part. A user may add or delete a patient, or the result of an examination for an existing item.

Fig. 3. The process of activating the usage of the database

If some results of a patient are available, they are presented in the form of a table, and all items are sorted according to date (the latest at the beginning). For convenient operation, the inquiry form for a new patient is unrolled. The selected parts of

questionnaire data are presented in Table I (e.g. part 5 consists of 3 rows; there are nearly rows in the full version).

TABLE I. SELECTED (NOT ALL) PERSONAL AND CLINICAL DATA GATHERED IN THE DATABASE

1. Personal data: patient's data (name, second name, surname); examination date (year-month-day); code of SD/case history.
2. Anamnesis: diabetes, hypertension - yes/no; ASC (arteriosclerosis) - yes/no; sexual activity - yes/no, if no, for how many years; erectile dysfunction severity: 1 - mild, 2 - middle, 3 - hard; smoking - yes/no, number of cigarettes per day; alcohol - yes/no, g drinks per week; physical exercises (jogging, any sport, etc.) - yes/no, how many per week.
3. Ailments: chest pain, intermittent claudication, orthopnea - yes/no.
4. Urination: IPSS, Quality of Life - score; haematuria, urinary incontinence - yes/no, number of sanitary napkins used (quantity per 24 h).
5. Medicaments: alpha-blockers - yes/no; blockers of phosphodiesterase-5 (Viagra, Cialis, Levitra) - yes/no.
6. Physical examination: arterial blood pressure, mmHg; pulse, pulse at distal arteries; leg ischemia (is there any hair on the patient's legs?); DRE (digital rectal examination).
7. Laboratory tests: urine analysis - normal/abnormal; urine bacteria culture - negative/positive; glycemia, cholesterol, LDL, HDL; total PSA (last), ng/ml, date.
8. USG (ultrasonography) of the lower urinary tract: urolithiasis/stones in the upper urinary tract - yes/no; prostate volume, residual volume of urine, cm3.
9. Uroflowmetry: Qmax, ml/s; volume of micturition (urine), ml.

III. REVIEWING THE STATISTICS

The STATS tab allows users to form different statistics for particular items of the questionnaire and, in the future, also so-called intersected statistics. An example of a simple graph is shown in Fig. 4. All records in the database are clustered by patient age [3]. There are 74 patients; 4 of them are from 6 to 7 years old.
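The three-table layout and the age-grouping statistic described above can be sketched as follows. SQLite stands in here for the MySQL server actually used, and all column names, bin widths and sample data are illustrative assumptions, not taken from the real baza.php schema or the hospital's records.

```python
import sqlite3

# Hypothetical schema for the doctors/patients/examinations tables.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE doctors (
    doctor_id INTEGER PRIMARY KEY,
    login     TEXT UNIQUE NOT NULL,
    password  TEXT NOT NULL
);
CREATE TABLE patients (
    patient_id INTEGER PRIMARY KEY,
    doctor_id  INTEGER NOT NULL REFERENCES doctors(doctor_id),
    surname    TEXT,
    age        INTEGER
);
CREATE TABLE examinations (
    exam_id    INTEGER PRIMARY KEY,
    patient_id INTEGER NOT NULL REFERENCES patients(patient_id),
    doctor_id  INTEGER NOT NULL REFERENCES doctors(doctor_id),
    item       TEXT,   -- one questionnaire row, e.g. 'Qmax'
    value      TEXT
);
""")
conn.execute("INSERT INTO doctors VALUES (1, 'jm', 'secret')")
conn.executemany("INSERT INTO patients VALUES (?, 1, ?, ?)",
                 [(1, 'A', 55), (2, 'B', 61), (3, 'C', 64),
                  (4, 'D', 68), (5, 'E', 72)])

# Decade-wide age bins, like the histogram of Fig. 4 (bin width assumed).
hist = conn.execute("""
    SELECT (age / 10) * 10 AS lo, COUNT(*) FROM patients
    GROUP BY lo ORDER BY lo
""").fetchall()
print(hist)  # [(50, 1), (60, 3), (70, 1)]
```

The same GROUP BY pattern extends naturally to the planned intersected statistics, e.g. grouping by age bin and an examination item at the same time.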
Likewise, a diagram of, for example, the percentage of patients with Qmax (maximum urine flow) over 5 ml/s can be created in this database.

Fig. 4. Histogram of the number of patients grouped by age

IV. CONCLUSIONS

We have implemented a web site system for gathering clinical data. We used expanded inquiry forms which include all the data needed to represent the cases of patients suffering from BPH (benign prostatic hyperplasia), prostate cancer and other causes of lower urinary tract disorders [4]. So far, only direct statistics in the form of histograms are possible. In the near future, the answer to a question such as "how many patients with PSA = 4-5 ng/ml have a large prostate volume and BPH as the final diagnosis?" will be obtainable from the described database. The long-term aim of this system is to help a doctor with diagnosis. It has the advantage of a large knowledge base which can be updated (it can store more knowledge than a person).

ACKNOWLEDGEMENT

This paper is supported by the Ministry of Science and Higher Education (Poland) from the sources assigned for scientific research in 5-8 within the frame of project no. 3 TC.

REFERENCES

[] Orzechowski P., Makal J., "Acquisition of Medical Data in the Diagnosis of Benign Prostatic Hyperplasia (BPH)", Proceedings of the 6th School-Conference on Computer-Aided Metrology, Waniewo, Poland, pp. 75-8.
[] Ullman L., PHP and MySQL for Dynamic Web Sites: Visual QuickPro Guide, 1st edition, Peachpit Press.
[3] Girman C. J., "Ageing in urology. Population-based studies of the epidemiology of benign prostatic hyperplasia", British Journal of Urology, Supplement, 8 (1998), pp. .
[4] "Recognition and treatment of BPH. The actual guidelines of the American Urological Association", The Practical Surgery Medicine, no. 5-6.
[5] Chang P. L., Li Y. C., Wang T. M., Huang S. T., Hsieh M. L., Tsui K. H., "Evaluation of a decision-support system for preoperative staging of prostate cancer", Medical Decision Making, 9 (4), Oct-Dec.

Laboratory Stand in Web Browser for Measurements on Distance

Jaroslaw Makal, Adam Idzkowski and Adam Krasowski

Jaroslaw Makal, Adam Idzkowski and Adam Krasowski are with the Faculty of Electrical Engineering, Bialystok Technical University, Wiejska 45D, 5-35 Bialystok, Poland.

Abstract - A general description of an Internet laboratory stand and its web application, used in the laboratory of metrology and experimental technique, is presented. Some proposed future improvements to this distance learning system are also discussed at the end of the paper.

Keywords - distance learning, ASP.NET, IEEE-488 interface.

I. INTRODUCTION

By means of the Internet, many real experiments can be performed by students within distance e-learning courses. The most convenient way to take the measurements is to use a web browser to control the devices and to analyse the results. This was the motivation to create a laboratory stand as a tool for teaching Theory of Circuits at the Faculty of Electrical Engineering of Bialystok Technical University. In this paper, measurements performed on a simple DC circuit are presented. The supply current and the voltage in one branch of the electrical circuit are measured. Students can use a web browser to manipulate three devices over the GPIB (IEEE-488) interface. The results are observed through an ordinary webcam and can be saved on request.

II. EQUIPMENT, SOFTWARE AND COMMUNICATIONS

A. Equipment and interfaces

The heart of the laboratory stand is a personal computer with an IEEE-488 interface board for the PCI bus (Keithley model KPCI-488A). The board works as a system controller (Fig. 1) and controls three GPIB instruments (a Tektronix P5G DC power supply, a Keithley DMM multimeter and a Motech FG-53 function generator). An Internet camera is connected to the server by USB, and its image is seen in a web browser during the measurements.

Fig. 1. General outlook of the communication

B. Server's software

The server runs on Windows XP Professional with IIS (Internet Information Services) version 5 installed. The IIS application process model consists of the TCP/IP kernel, Inetinfo.exe, which runs in-process applications (low isolation), and multiple DLLhost.exe processes, which run pooled-process or out-of-process applications (medium or high isolation). Security is assured by Windows authentication, SSL, Kerberos and the Web Server Certificate Wizard []. The prerecorded video from the camera is converted into a stream by Microsoft Windows Media Encoder 9. This software uses any camera installed on the computer and does not require additional drivers or libraries.

C. Communications

The dynamic website has been created with the use of ASP.NET technology [-4]. The application is installed on the server, and a client (student) communicates with it using a web browser. The advantage of this solution is that students, before taking the measurements, do not have to install any additional software on their own computers, which do not have to be modern. The communication among the devices and the server is executed by our application, the GPIB interface and the ieee_3m.dll library. The GPIB interface board cooperates with the devices using the SCPI (Standard Commands for Programmable Instruments) language. Our application sends and receives simple

commands and values. For example, to set the voltage V on the power supply output, the command sour:volt is given.

Fig. 2. The panel where the settings of the power supply are programmed

III. USER INTERFACE

The user interface was made with the use of HTML and CSS (Cascading Style Sheets). Properly created forms, as in Fig. 2, serve to input the programming data. The interface is user friendly: all the settings can be made by choosing a proper point in the list and by filling in a number in the field. Various ways of entering the numbers are possible; for example, the same number can be written as , M, k or ^7.

IV. LABORATORY EXERCISE

The aim of the laboratory exercise is to find the DC characteristics of a P-N junction diode or of any other electronic element or circuit. The results of the continuous measurement of current and voltage are shown in a special laboratory window in the web browser. The default sampling time (the frequency of the voltage and current readings) is 5 ms. The results, which help to plot the I-U characteristic, are presented in a table and can be saved to a file. The view of the laboratory stand is presented in Fig. 3.

Fig. 3. The view of the laboratory stand

V. CONCLUSIONS

The ASP.NET and IIS technologies allow us to create the server and user application for establishing the distance measurements. The CSS technology enables the project to be extended: new forms can be created, and other measuring instruments (4 in total) with a GPIB interface can be used (signal generators, oscilloscopes, etc.). The user interface looks best in Internet Explorer, because we have used Windows Media Encoder 9 for video streaming and a Java applet and ActiveX control for viewing the picture; in other web browsers the image from the camera is not visible. Another disadvantage is that the laboratory exercise can be performed by only one student at a time.
Web-based measurements can probably never replace real contact with the instruments, but measurements performed in hazardous environments require remote control of the instruments, which motivates the use of such experiments in teaching.

ACKNOWLEDGEMENT

This work was prepared within the framework of project S/WE/3/3. The Master's (M.Sc.) thesis concerning this subject was written by Adam Krasowski, a student of Bialystok Technical University.

REFERENCES

[] M. Tulloch, Administering IIS (5), McGraw-Hill Osborne Media.
[] S. Worley, Inside ASP.NET, New Riders Publishing.
[3] D. S. Platt, Introducing Microsoft .NET, Second Edition, Washington: Microsoft Press.
[4] J. Liberty, D. Hurwitz, ASP.NET Programming (Polish translation by R. Górczyński), Helion.

Electronic Identification and Patient Parameters Monitoring

Siniša Ranđić, Aleksandar Peulić, Adam Dostanić and Marko Acović

Abstract - This paper presents a method for wireless monitoring of biomedical patient data, together with its safety modes. Our system consists of mobile sensor devices and uses wireless transfer to send the measured biomedical data to a central computer/database server in the hospital. The proposed health remote control system has several levels: first, a level with sensors for monitoring biomedical data; second, a level for wireless transfer of the measured data; third, a central system for data acquisition; and fourth, the corresponding application for automatic analysis of the data, a user interface for data access and, very importantly, hardware protection of data access. The hardware protection is realized with a bar code reader and the patient's identification card. The system is convenient for continuous patient monitoring and enhances the care of the patient's health.

Keywords - Monitoring of biomedical patient data, Mobile sensor devices, Wireless transfer of measured data

I. INTRODUCTION

To provide human healthcare support with better quality, we should be able to collect a very large amount of people's vital signs and monitor them efficiently. The current welfare system is based on regular consultations with a medical doctor, prompted by our own feelings. The idea is not to replace the current system, but to augment it with an environment using information technologies (IT) and wireless networking, to provide continuous monitoring of one's physiological information, perform simple diagnoses and communicate all that to medical institutions. This case arises when physicians want to monitor individuals whose chronic condition includes the risk of sudden acute events, or individuals for whom interventions need to be assessed in the home and outdoor environment. If observations over one or two days are satisfactory, ambulatory systems can be utilized to gather physiological data.
An obvious example is the use of ambulatory systems for ECG monitoring, which has been part of the routine evaluation of cardiovascular patients for almost three decades. However, ambulatory systems are not suitable when monitoring has to be accomplished over periods of several weeks or months, as is desirable in a number of clinical applications. Wearable systems are totally unobtrusive devices that allow physicians to overcome the limitations of ambulatory technology and provide a response to the need for monitoring individuals over weeks or even months. They typically rely on wireless miniature sensors enclosed in patches or bandages, or in items that can be worn, such as a ring or a shirt. They take advantage of hand-held units to temporarily store physiological data and then periodically upload those data to a database server via a wireless LAN or a cradle that allows an Internet connection.

Siniša Ranđić, Aleksandar Peulić, Adam Dostanić and Marko Acović are with the Technical Faculty, Sv. Save 65, 3 Čačak, Serbia.

II. SYSTEM DESIGN

On the wearable controller, a software environment allows accumulating data from physiological sensors, recording them into a local database, performing basic data manipulations, and communicating the data with the database at medical institutions. These elements compose the health remote control system, the concept of which is shown in Fig. 1.

Fig. 1. Health remote control system

The whole system can operate standalone as well as PC-controlled. It can be divided into four parts: the radio-frequency data transmission network, analogue measurement modules, a PC-based base station (including data processing) and programmable, portable recording modules with feedback options. The main goals of the design are light weight, minimal power consumption, modular design and robust circuitry.
The network between the measuring modules and the base station is realized as a bi-directional, multi-point, single-master RF link, operating in the LPD frequency range (868 MHz) on a single channel. The RF measuring network consists of several measuring (slave) modules (a maximum of 3 slaves is possible) and one master module. The structure of the network is fully dynamic and is reconfigurable and scalable during operation. The configuration process of the RF network is fully automatic, in conjunction with the control program running on the PC or the base station. The

initialization process and the required communication between the master and the modules make it possible to control the network dynamically. It is therefore necessary to initialize a slave module to a dedicated master. The function of the master is slave polling and data acquisition; the function of a slave is recognition of the control package and sending data to the master. The master acts in direct conjunction with the PC over USB or RS232, collecting information from the RF link and sending it to the database via the PC application. User actions in the PC application are translated by the master to the slaves over the RF link. It is possible to operate several masters in parallel using different channels. The base station collects the data streams of the modules operating in parallel and feeds them together. The bandwidth of the whole system is 9 Baud, and the bandwidth of each slave is dynamically controllable by the master. For each slave a back-directional configuration channel is also provided; its purpose is to configure the slave and to control hardware functions of the measuring device. The channel selection by the master is managed by a collision detection algorithm, to ensure the usage of the channel with the minimum radio signal strength. Individual slave (measuring) modules are controlled by, and communicate with, the master module (PC) using a custom wireless protocol. We use standard 868 MHz RF modules (TRF69 RF transceiver) because the available Bluetooth technology requires three to five times greater power consumption. In addition, we reduce power consumption by using a custom, power-efficient communication protocol. The core of our wireless modules is a low-power Texas Instruments MSP430F149 microcontroller. The controller features a 16-bit architecture, ultra-low power consumption (less than mA in active mode and ~ µA in standby mode), 60-KB on-chip flash memory, 2-KB RAM, a 12-bit A/D converter, and dual UART.
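The single-master polling scheme described above - the master polls each initialized slave in turn, and a slave answers only a control package addressed to it - can be sketched as a small simulation. The frame layout, field names and slave behaviour here are illustrative assumptions, not the actual custom protocol of the system.

```python
class Slave:
    """Simulated measuring (slave) module on the RF link."""
    def __init__(self, slave_id):
        self.slave_id = slave_id
        self.battery_mv = 3000   # reported to the upper level, as in the text
        self.sample = 0

    def on_poll(self, frame):
        # A slave recognizes only a control package addressed to it.
        if frame != ("POLL", self.slave_id):
            return None
        self.sample += 1
        return {"id": self.slave_id, "sample": self.sample,
                "battery_mv": self.battery_mv}

class Master:
    """Simulated master module: polls slaves and acquires their data."""
    def __init__(self, slaves):
        self.slaves = slaves          # slaves initialized to this master

    def poll_cycle(self):
        readings = []
        for s in self.slaves:
            reply = s.on_poll(("POLL", s.slave_id))
            if reply is not None:     # a lost reply would be retried over RF
                readings.append(reply)
        return readings

master = Master([Slave(1), Slave(2), Slave(3)])
data = master.poll_cycle()
print([r["id"] for r in data])  # [1, 2, 3]
```

In the real system each poll and reply travels over the 868 MHz RF link, and the master forwards the collected readings to the PC application over USB or the serial port.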
Internal microcontroller analog channels monitor the battery voltage and temperature; therefore, a slave module is capable of reporting the battery status and temperature to the upper level in the system hierarchy. The master and slave modules form a personal area network whose communication is wireless, mid-range (up to 4 meters) and low-energy, for practical usability. Under these requirements, the transmission circuits are constructed using weak radio-frequency signals. On reception of vital data, the master module automatically records them in a local database. The database architecture is designed around measurement sessions and time-based classification. The environment also provides functionalities to display multiple physiological data on graphs, carry out graph manipulations (zoom, slide), and access information about the sensors (maker, serial number, picture, etc.). After the data have been saved, they are coupled with the sensor information (sensor ID number, name) and saved into the PC database. The system is designed to manage patients' daily individual monitoring data, to provide tools supporting the medical doctor's diagnostic process, and to offer a meta-data framework that eases processes like correlation analysis and data mining. A web Internet service for data transfer is an extension of the system: long-distance patient monitoring becomes available, and doctors can take actions from home, for example. There are also mobile phone services; Fig. 2 shows a system design based on Internet and mobile phone services.

Fig. 2. Internet-based patient monitoring system

The hospital web server station has an IP (Internet Protocol) address. A symbolic web address is needed for directly locating patient data on the web server. A mobile phone module is needed for the Short Message Service, connected directly to the web server by a standard RS232 or USB interface.

III.
BAR CODE READER SYSTEM PROTECTION

The realized health remote control system supports three security levels: the first, lowest, patient security level; second, a doctor level implemented on the doctor's personal computer; and the top level at the database server. This section describes the first security level, realized with a bar code card reader. Every patient has his own identification number and password implemented in an identification card. The identification card is intended to protect the health remote control system against external access by unauthorized persons. A second convenience is the storage of an array of medical data concerning the card owner. The card reader is realized as an external device and connected to the PC by a serial or USB port, or wirelessly (Bluetooth, for example), Fig. 3.

Fig. 3. Bar code reader PC connections

The bar code reader reads the personal identification number and password from the card memory and sends them to the PC. The PC application is realized in Microsoft Visual C++, with the purpose of establishing communication, checking the data accepted from the card, and permitting or denying access to the system.

CONCLUSION

The developed system enables continuous monitoring of a patient's physiological information. A wearable controller collects the measured sensor data and integrates them into a database, which allows the exchange with medical institutions, where a system manages the database of each patient's vital data. Intelligent medical monitors can significantly decrease the number of hospitalizations and nursing visits. In case of a medical emergency, the master module can send an SMS message

to the personal medical doctor. Three security levels are involved with the aim of preventing and limiting access. The first security level is realized based on a bar code reader, and our future work will be focused on improving the security capabilities of the system.

REFERENCES

[] Jovanov, E., Price, J., Raskovic, D., Kavi, K., Martin, T., Adhami, R., "Wireless Personal Area Networks in Telemedical Environment", Proceedings of the Third International Conference on Information Technology in Biomedicine (ITAB-ITIS), pp. -7.
[] Jovanov, E., Price, J., Raskovic, D., Moore, A., Chapman, J., Krishnamurthy, A., "Patient Monitoring Using Personal Area Networks of Wireless Intelligent Sensors", Biomedical Sciences Instrumentation, vol. 37, in Proc. 38th Annual Rocky Mountain Bioengineering Symposium (RMBS), April, Copper Mountain Conference Center.
[3] Jovanov, E., O'Donnel, A., Morgan, A., Priddy, B., Hormigo, R., "Prolonged Telemetric Monitoring of Heart Rate Variability Using Wireless Intelligent Sensors and a Mobile Gateway", 2nd Joint EMBS-BMES Conference, Houston, Texas, October.
[4] Jovanov, E., Price, J., Raskovic, D., Moore, A., Chapman, J., Krishnamurthy, A., "Patient Monitoring Using Personal Area Networks of Wireless Intelligent Sensors", Biomedical Sciences Instrumentation, vol. 37, in Proc. 38th Annual Rocky Mountain Bioengineering Symposium (RMBS), April, Copper Mountain Conference Center.
[5] Peulic, A., Randjic, S., "Radio Frequency Measured Data Transmit", ICEST, October, Nis, Yugoslavia, CD proceedings, Measurement Technique.
[6] Peulic, A., Randjic, S., "Computer Based High Radio Frequency Systems Control", TELSIKS, October, Nis, Yugoslavia, proceedings on CD.
[7] Peulic, A., Randjic, S., "Computer Based Remote Measuring and Acquisition of Dynamic Data", ICEST, October, Sofia, Bulgaria.


SESSION SP II: Signal Processing II


Comparative Analysis of Basic Self-Organizing Map and Neocognitron for Handwritten Character Recognition

Ivo R. Draganov and Antoaneta A. Popova

Abstract - In this paper we present a comparative analysis of the performance, both as accuracy and as time consumption, of two neural network classifiers for handwritten character recognition: a basic self-organizing map and a neocognitron, proposed by Kohonen and Fukushima respectively. The results of our study are found useful for character and pseudo-character recognition as a certain stage of processing in a whole handwriting recognition system.

Keywords - character, pseudo-character, word, handwriting recognition, self-organizing map, neocognitron.

I. INTRODUCTION

A possible concept used in some systems for handwritten word recognition is splitting words into single characters and pseudo-characters [1]. Thus the separated elements of a word should be recognized at a later stage in these systems, using a number of preliminarily chosen rules based on common handwriting and language characteristics, to form word hypotheses. After confirming or rejecting these hypotheses by applying lexicon verification along with language statistics and other techniques, we are able to recognize handwritten text word by word. Considering the large number of methods developed for handwritten character recognition [2], [6] and their capability to recognize pseudo-characters (parts of one or more characters), we decided to directly compare a self-organizing map [3] and a neocognitron [4], proposed by Kohonen and Fukushima respectively. These two self-learning neural networks will be compared in their basic form, considering recognition accuracy with high quality images of characters free from noise, shifts, etc.

Our main goal is to find out to what degree they differ from one another in their recognition capability, given the different architectures and working principles they have, when high quality test data is passed to them. In the next part we present a description of the structure, preprocessing, learning and recognition algorithms used for both networks in our study. The third part contains the experimental results for the different parameters defined in part two. In the last part a conclusion is made about which of these classifiers is more appropriate to be included in a complete handwriting recognition system.

Ivo R. Draganov and Antoaneta A. Popova are with the Faculty of Communications and Communications Technologies, Technical University, Kliment Ohridski 8, Sofia, Bulgaria.

II. COMPARED CLASSIFIERS DESCRIPTION

The architecture of the self-organizing feature map (SOFM) used in our experiments is given in Fig. 1.

Fig. 1. Self-organizing feature map architecture

The input receives consecutively vectors p with dimensions R×1, where R = 56; each p represents a character (or pseudo-character). At first, grayscale images of the separate characters are used, which we binarize using the Otsu algorithm. Afterwards we resize the binary symbol to 56x56 pixels with nearest neighbour interpolation. Then we thin or thicken the width of the pen using erosion or dilation applied l times:

l = t,  (1)

where t is the initial width of the pen, found as the maximal value in a histogram representing all the widths of the pen (numbers of ones enclosed between zeros for each line in the image, if an inverted one is used) in the resized binary image. This preprocessing step is needed to make the characters as invariant as possible for further forming of the learning extract. Finally, we divide the result image into blocks of 6x6 pixels. The number of pixels (ones, for example) corresponding to the character in each block is calculated and the normalized value (in the range [0, 1]) forms the respective component of the input vector p. The blocks are passed from left to right and from top to bottom.
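The block-feature step described above can be sketched as follows. This is a minimal illustration assuming NumPy; the 32x32 dummy image and 16x16 blocks are stand-in sizes chosen for the example (the paper's exact image and block dimensions are partly lost in the scanned text):

```python
import numpy as np

def block_features(binary_img, block=16):
    """Divide a binary character image into block x block cells and
    return the normalized foreground-pixel count of each cell,
    scanned left-to-right, top-to-bottom (values in [0, 1])."""
    h, w = binary_img.shape
    assert h % block == 0 and w % block == 0
    feats = []
    for r in range(0, h, block):
        for c in range(0, w, block):
            cell = binary_img[r:r + block, c:c + block]
            feats.append(cell.sum() / float(block * block))
    return np.array(feats)

# A 32x32 dummy "character": top half foreground, bottom half background.
img = np.zeros((32, 32), dtype=np.uint8)
img[:16, :] = 1
p = block_features(img, block=16)
```

Each component of p is already normalized, so the vector can be fed directly to the input layer of the SOFM described next.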
We use the original Kohonen learning rule, defined as:

w_i(q) = w_i(q − 1) + α[p(q) − w_i(q − 1)] = (1 − α)·w_i(q − 1) + α·p(q),  (2)

where i denotes a single neuron; w_i is the neuron weight vector; q is the current step; α is the learning rate; N_i(d) is the neighbourhood around the winning neuron i, within which the weights of all neurons j should be updated. Here d is the radius of the neighbourhood:

N_i(d) = { j : d_ij ≤ d }.  (3)

IW from Fig. 1 is a weight matrix for all the S input neurons (equal to the number of classes/subclasses for all the characters) with R weights each. n_i is the result of finding the distance from p to the i-th column of IW:

n_i = ||IW_i − p||,  (4)

which is passed to the competitive layer C, the output of which is an S×1 vector containing one component equal to 1, indicating the recognized character, and all the other components equal to 0. Given the preprocessing steps from above it is clear that our SOFM operates in a 56-dimensional space, for each dimension of which a range of [0, 1] is set. We use midpoint initialization (0.5) for all the neuron weights. The learning rate α and the neighbourhood distance d are altered through the learning procedure, which lasts for a given number of steps Q. The neighbourhood distance starts as the maximum distance d_max (calculated at the first step) between two neurons and decreases to a preliminarily defined end neighbourhood distance d_min. Similarly, the learning rate starts at some initial value α_init and decreases until it reaches a smaller one, α_end; both are set in advance. As d and α decrease, the neurons of the network typically order themselves in the input space with the same topology in which they are ordered physically.

The neocognitron performs classification of the input through a succession of functionally equivalent stages. Each stage extracts appropriate features from the output of the preceding stage and then forms a compressed representation of those extracted features. Fig. 2 shows the structure of the neocognitron for a case of recognizing objects (e.g. digits).
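Before turning to the neocognitron's cells, the SOFM training step (winner selection by the distance of Eq. (4), neighbourhood of Eq. (3), update by Eq. (2)) can be sketched as below. The 2x2 map, the 3-dimensional input and all parameter values are illustrative only, not the paper's configuration:

```python
import numpy as np

def som_step(W, p, alpha, d, grid_pos):
    """One Kohonen training step.
    W: S x R weight matrix (one row per neuron), p: input vector (R,),
    alpha: learning rate, d: neighbourhood radius,
    grid_pos: S x 2 physical positions of the neurons on the map."""
    # Winner = neuron whose weight vector is closest to p (Eq. (4)).
    win = np.argmin(np.linalg.norm(W - p, axis=1))
    # Neighbourhood N_win(d): neurons within radius d of the winner (Eq. (3)).
    neigh = np.linalg.norm(grid_pos - grid_pos[win], axis=1) <= d
    # Kohonen rule (Eq. (2)): w_i(q) = (1 - alpha) * w_i(q-1) + alpha * p(q).
    W[neigh] = (1 - alpha) * W[neigh] + alpha * p
    return W, win

# Tiny 2x2 map over 3-dimensional inputs, midpoint-initialized as in the paper.
grid = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
W = np.full((4, 3), 0.5)
p = np.array([1.0, 0.0, 0.0])
W, win = som_step(W, p, alpha=0.5, d=0.0, grid_pos=grid)
```

In a full run, alpha would shrink from α_init toward α_end and d from d_max toward d_min over the Q steps, exactly as the text describes.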
Fig. 2. Structure of a neocognitron for objects

The feature extraction is performed by arrays of S-cells that have been trained to respond to certain features characterizing the input patterns U, as seen from Fig. 2. After training, the weight vector of an S-cell is equal to the sum of the inputs that have appeared within its receptive field. Each S-cell also receives an inhibitory signal proportional to the root mean square (rms) activity present in its receptive field.

The behaviour of an S-cell can be mathematically formulated as a function φ(.) of the cell's activation:

o(x) = r_l·φ(a(x)),  (5)

a(x) = (1 + x^T·w) / (1 + (r_l/(1 + r_l))·b_l·rms(x)) − 1,  (6)

where x is the vector of activities present at the receptive field input, w is the vector of weights learned by the S-cell and r_l is the selectivity parameter, found by a separate closed-form training algorithm which we will not discuss. The rms activity of the input to the S-cell is defined by:

rms(x) = sqrt( Σ_{i=1}^{N} c_i·x_i² ),  (7)

where the vector c = [c_1, …, c_N]^T describes a Gaussian kernel that serves to accentuate inputs towards the centre of the cell's receptive field, as well as implementing the arithmetic mean of the inputs. b_l is a factor set by the learning rule to maximize the cell's response to any training feature. Of interest here is the S-cell transfer function, considered to influence the final recognition accuracy [5]. Originally Fukushima used a threshold-linear function of the form:

φ_ThreshLin(a) = { 0, a < 0; a, a ≥ 0 },  (8)

but here we will use another two transfer functions in our experiments: the simpler threshold function:

φ_Thresh(a) = { 0, a < 0; 1, a ≥ 0 },  (9)

and the sigmoid transfer function:

φ_Sig(a) = 1 / (1 + e^(−βa)).  (10)

Fig. 3 shows the structure of an S-cell along with the described transfer functions.

Fig. 3. S-cell structure with different transfer functions

C-cells compress the representation: the input to a C-cell from its receptive field is a subsampling of the activity in the preceding S-plane.
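A minimal sketch of the S-cell of Eqs. (5)-(10), as reconstructed here, follows. NumPy, the uniform kernel c and the parameter values r_l = b_l = 1 are illustrative assumptions, not values from the paper:

```python
import numpy as np

def rms(x, c):
    # Eq. (7): kernel-weighted root-mean-square of the input.
    return np.sqrt(np.sum(c * x ** 2))

def s_cell(x, w, c, b, r, phi):
    # Eqs. (5)-(6): excitatory match x.w against the inhibitory rms signal.
    a = (1.0 + x @ w) / (1.0 + (r / (1.0 + r)) * b * rms(x, c)) - 1.0
    return r * phi(a)

def phi_thresh_lin(a):          # Eq. (8)
    return a if a >= 0.0 else 0.0

def phi_thresh(a):              # Eq. (9)
    return 1.0 if a >= 0.0 else 0.0

def phi_sigmoid(a, beta=1.0):   # Eq. (10)
    return 1.0 / (1.0 + np.exp(-beta * a))

x = np.array([1.0, 0.0])        # input activity
w = np.array([1.0, 0.0])        # learned feature (matches the input)
c = np.array([0.5, 0.5])        # toy kernel (assumed values)
out = s_cell(x, w, c, b=1.0, r=1.0, phi=phi_thresh_lin)
```

With a matching feature the activation a is positive, so the threshold-linear and threshold outputs fire; with zero input the cell stays silent.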
By subsampling this activity, a compressed representation of the S-plane output is obtained. C-cells also blur the activations of the preceding S-planes by performing a weighted sum of their inputs, this time using fixed weights that describe a Gaussian kernel. If we denote the

subsampled input as a vector x and the Gaussian kernel as w, then the activation of a C-cell in Fukushima's original description of the neocognitron can be written as:

a(x) = w^T·x.  (11)

This weighted mean is then passed through a transfer function that limits the output of the C-cell to [0, 1):

ψ_Mean(a) = a / (1 + a).  (12)

Here is the second difference from the very original Fukushima neocognitron. Again, in [5] it is noted that the blurring of S-plane activity by the C-cells is important in allowing the neocognitron to be tolerant of a considerable degree of input distortion. Thus a ranked-order filter is incorporated in the very structure of the C-cell. The output of the modified C-cell is given by:

ψ_Max(x) = max_i (x_i·w_i),  (13)

where x = [x_1, …, x_N]^T is again the subsampled input vector and w = [w_1, …, w_N]^T is the Gaussian kernel. We refer to Eq. (13) as a weighted max operation.

III. EXPERIMENTAL RESULTS

All the experiments are implemented on an IBM compatible PC with a Pentium 4 processor working at .5 GHz with 56 MB RAM. The operating system is MS Windows XP with SP and the working environment is Matlab 7 (SP). Our database, collected from different authors, contains images of 45 lower and upper case Latin characters and digits (8-bpp grayscale, 5 dpi). From them we use 85 preprocessed (filtered by median and range filters) images of lower case characters to train the networks and afterwards 6 additional images which form the test set (for the recognition phase), although a set of 4 is considered to be enough [5].
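Stepping back to the network description, the two C-cell variants (Eqs. (11)-(12) against the weighted max of Eq. (13)) can be contrasted in a few lines; the kernel and activity values below are illustrative only:

```python
import numpy as np

def c_cell_mean(x, w):
    # Eqs. (11)-(12): Gaussian-weighted sum squashed into [0, 1).
    a = w @ x
    return a / (1.0 + a)

def c_cell_max(x, w):
    # Eq. (13): ranked-order (weighted max) variant of the C-cell.
    return np.max(x * w)

x = np.array([0.0, 1.0, 0.2])     # subsampled S-plane activity
w = np.array([0.25, 0.5, 0.25])   # toy Gaussian kernel (assumed values)
```

The max variant responds to the strongest weighted input alone, which is what makes the modified cell behave like a ranked-order filter rather than a blur.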
We use the following parameters for the accuracy comparison: the relative number of correctly recognized characters:

r_c = (m_c / m)·100 %,  (14)

where m is the number of characters passed for recognition (m = 6) and m_c is the number of correctly recognized ones; the relative number of wrongly classified characters:

r_w = (m_w / m)·100 %,  (15)

where m_w represents the absolute number of wrongly recognized characters; and the relative number of non-classified characters:

r_n = (m_n / m)·100 %,  (16)

where m_n represents the absolute number of non-classified (rejected) characters.

Our first experiment includes finding the optimal value of the minimum learning rate α_min of the SOFM. We set α_max = 0.9 and d_min to a midrange value of the typical working interval for the particular case. Before the recognition phase we train the network for a given number of epochs with 6 classes and 5 subclasses each. The so-called gridtop topology is used along with the Euclidean distance. The rejection criterion is d > 10^-6. The results are shown in Table I.

TABLE I. FINDING THE OPTIMAL MINIMAL LEARNING RATE FOR SOFM
(columns: α_min; r_c, %; r_w, %; r_n, %)

The graphical representation of the results from Table I is given in Fig. 4.

Fig. 4. r_c, r_w, r_n as functions of α_min for SOFM

Given the above results, we choose α_min = 0.6 as the optimal value, because the number of correctly recognized characters is maximal for it. The next experiment concerns the radius of the neighbourhood: for fixed α_min we decrease d_min down to 10^-5 by a constant factor. Again our SOFM is trained for the same number of epochs, with 6 classes and 5 subclasses. The results are shown in Table II.

TABLE II. FINDING THE OPTIMAL NEIGHBOURHOOD DISTANCE FOR SOFM
(columns: d_min; r_c, %; r_w, %; r_n, %)

The graphical representation of the results from Table II is given in Fig. 5.

Fig. 5. r_c, r_w, r_n as functions of d_min for SOFM

Now we have the optimal α_min = 0.6 and d_min. How the number of epochs of training the SOFM affects the recognition process, for 6 classes with 5 subclasses, is shown in Table III and graphically in Fig. 6. The training time consumption is ordered as follows: for the smallest number of epochs, a few minutes; then about an hour; and then more than 8 hours.

TABLE III. INFLUENCE OF THE NUMBER OF EPOCHS FOR SOFM
(columns: Epochs; r_c, %; r_w, %; r_n, %)

Fig. 6. r_c, r_w, r_n as functions of the number of epochs for SOFM

The results for the neocognitron using the same training and test data are shown in Table IV. The only parameter we change is the S-cell transfer function. The values of the other parameters for this experiment are given in [5].

TABLE IV. RECOGNITION ACCURACY FOR THE NEOCOGNITRON
(columns: φ(.) = Thresh, ThreshLin, Sigmoid; rows: r_c, %; r_w, %; r_n, %)

IV. CONCLUSION

It is obvious that the SOFM is equivalent to the neocognitron concerning recognition accuracy when a reasonable amount of time is spent for training and preliminarily processed high quality test data is used (small noise levels, shifts, etc.). This confirms the need for and the important role of qualitative preprocessing. For both the learning rate and the neighbourhood distance of the SOFM, appropriate values should be found during training, as they highly affect the final recognition accuracy. As the neighbourhood distance decreases, the SOFM behaves more like a simple competitive neural network, which is explainable by its working principle. As for the S-cell transfer function of the neocognitron, it is visible that the sigmoid is the best option of the three tested.
In the other two cases the neocognitron falls behind the SOFM in recognition accuracy. Depending on the circumstances, both networks can be applicable in a complete handwriting recognition system.

REFERENCES

[1] R. Bozinovic and S. Srihari, "Off-Line Cursive Script Word Recognition", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 11, pp. 68-83, 1989.
[2] O. Trier, A. Jain and T. Taxt, "Feature Extraction Methods for Character Recognition - A Survey", Pattern Recognition, vol. 29, no. 4, pp. 641-662, 1996.
[3] T. Kohonen, "The Self-Organizing Map", Proceedings of the IEEE, vol. 78, no. 9, pp. 1464-1480, 1990.
[4] K. Fukushima, "Neocognitron: A Self-Organizing Neural Network Model for a Mechanism of Pattern Recognition Unaffected by Shift in Position", Biological Cybernetics, vol. 36, no. 4, pp. 193-202, 1980.
[5] K. Fukushima and N. Wake, "Handwritten Alphanumeric Character Recognition by the Neocognitron", IEEE Transactions on Neural Networks, vol. 2, no. 3, pp. 355-365, 1991.
[6] A. Bekiarski and L. Batchishing, "Choice of structure features for recognition of mongolian letters", Electrotechnics and Electronics, vol. XXXII.

Comparative Analysis of Integral Calculus Algorithms in Magnetic Signals Evaluation

Abstract - The article presents three approaches for low-frequency magnetic signal calculation, based on direct integration of the Biot-Savart law using complete elliptic integrals; the differences are in the chosen approach for complete elliptic integral calculation. The methods discussed are implemented in the C# programming language and the results of their execution are compared both visually and numerically.

Keywords - integral calculus algorithms, complete elliptic integrals, low-frequency magnetic field

I. INTRODUCTION

The goal of the presented program component is to expose an easy-to-use and easy-to-understand approach to evaluate the precision of low-frequency magnetic field calculation when different methods are used for solving the elliptic integrals. Two ways of precision evaluation are developed: numerical comparison and visual comparison (2D graphs). Both are based on comparing the results of low-frequency magnetic field calculation performed using an etalon calculation and a matching calculation. The etalon calculation is performed using the formula for on-axis points [1]. The matching calculation is performed using the formulas for off-axis points based on the magnetic field potential, which involve complete elliptic integrals of the first and second kind:

K(k) = ∫_0^(π/2) dβ / sqrt(1 − k²·sin²β),  (1)

L(k) = ∫_0^(π/2) sqrt(1 − k²·sin²β) dβ.  (2)

In both cases the calculations are performed for the same set of points on the z-axis that is perpendicular to the plane of the current turn. The differences in the precision of the calculations are caused by the method chosen for elliptic integral calculation: using a diagram; using elliptic integral tables; or using the arithmetic-geometric mean. Magnetic field calculation for different cases of field sources (coils with a great number of current turns) requires a great deal of computation for a large number of points in the area influenced by the field.
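The definitions (1)-(2) can be checked directly by numerical quadrature. A sketch with the midpoint rule follows (pure Python; the step count is an illustrative choice, not one of the paper's three methods):

```python
import math

def K_quad(k, n=100000):
    # Eq. (1): K(k) = integral over [0, pi/2] of dbeta / sqrt(1 - k^2 sin^2(beta)).
    h = (math.pi / 2.0) / n
    return h * sum(1.0 / math.sqrt(1.0 - (k * math.sin((i + 0.5) * h)) ** 2)
                   for i in range(n))

def L_quad(k, n=100000):
    # Eq. (2): L(k) = integral over [0, pi/2] of sqrt(1 - k^2 sin^2(beta)).
    h = (math.pi / 2.0) / n
    return h * sum(math.sqrt(1.0 - (k * math.sin((i + 0.5) * h)) ** 2)
                   for i in range(n))
```

For k = 0 both integrals reduce to π/2, which makes a convenient sanity check before comparing the faster methods below.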
We need a method that guarantees reduced complexity for a large number of field points, namely with a matrix of nearly 9 points or more.

Virginya T. Dimitrova is with the Faculty of Computer Systems and Control, Technical University, Sofia, Bulgaria.

The method must also guarantee a good precision of the calculations. The main part of the field computations is related to the evaluation of the complete elliptic integrals. Consequently, improving the precision, decreasing the complexity of the calculations and optimizing memory consumption can be achieved by using the best suited method for complete elliptic integral evaluation.

The first implemented method can be characterized as a direct method and uses a diagram to compute the complete elliptic integrals of the first and second kind, K(k) and L(k), for the corresponding values of k. The main disadvantage of this method is the need to store floating point values in memory as array elements (for example), thus limiting the step of discretization to avoid enlarging the arrays. Besides the discretization errors, this method does not ensure enough precision, because of the impossibility to obtain real values for K(k) and L(k) with more than two digits after the decimal point from the diagram. The method however has some advantages: the relatively small size of the arrays and their static nature allows an implementation with static arrays instead of linked lists, and thus only two fast indexing operations per k value are needed to obtain the K(k) and L(k) values.

The second implemented method is also a direct method, like the first one, but is based on calculations (not on predefined values stored in memory). This method saves memory but, instead of fast indexing operations, uses the so-called modulus m and double precision constants in a lot of multiplication and addition operations. (The method is based on the FORTRAN subroutine COMELP, rewritten in the C# language.)
The third implemented method can be characterized as an iterative method (it starts from a guess and finds successive approximations that converge to the solution) and is effective and numerically stable. This method computes the Legendre elliptic integrals K(k) and L(k) by computing the equivalent Carlson elliptic integrals with the corresponding RC, RF and RJ routines, originally written in FORTRAN and rewritten in the C# language. The elliptic integrals used in the magnetic field calculation are expressed using Legendre's notation.

Numerical comparison is realized in two modes: single value comparison and multiple value comparison. In the first mode, the value of the z-coordinate of the point for which the magnetic induction B should be calculated is entered from the keyboard, and the two compared numeric values as well as their difference are displayed. In the second mode, the magnetic induction B is calculated for a predefined set of points and the corresponding compared values are displayed in rows-and-columns form (a DataGrid Windows Forms control).
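The core of the third approach, the arithmetic-geometric mean iteration, can be sketched as follows. This is a generic AGM routine for K(k), with the classical c_n series for the second-kind integral L(k) = E(k); it is not the paper's actual C# port of the Carlson R-functions:

```python
import math

def elliptic_KL(k, tol=1e-14):
    """Complete elliptic integrals K(k) and L(k) (= E(k)) via the
    arithmetic-geometric mean, for modulus 0 <= k < 1."""
    a, b, c = 1.0, math.sqrt(1.0 - k * k), k
    csum, n = 0.5 * c * c, 0          # accumulates 2^(n-1) * c_n^2
    while abs(c) > tol and n < 60:
        # a_n, b_n from the previous pair; c_n measures the convergence.
        a, b, c = 0.5 * (a + b), math.sqrt(a * b), 0.5 * (a - b)
        n += 1
        csum += 2.0 ** (n - 1) * c * c
    K = math.pi / (2.0 * a)
    return K, K * (1.0 - csum)
```

The iteration converges quadratically, so a handful of steps already reaches double precision, which is why the AGM route is both fast and numerically stable.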

In regard to the visual comparison, two different graphical representations of the results of the precision analysis are supplied: a 2D graph with the z-coordinates of the points on the X-axis and the calculated B-values (using the etalon method and the matching method) in different colors on the Y-axis; and a 2D graph with the z-coordinates of the points on the X-axis and the difference between the calculated B-values (etalon method and matching method) on the Y-axis.

II. DESIGN OF THE PROGRAM

The core of the program is the algorithm shown in Fig. 1. It supplies all the possibilities needed to satisfy the goal of the program.

Figure 1. Flow-chart of the program (point selection, etalon calculation, matching calculation, choice of representation type, graph type, scale factor and difference display)

III. IMPLEMENTATION

The program is created using the Visual Studio .NET environment, the C# programming language and the Windows Forms application template. The methods for magnetic induction calculation are placed in a separate FieldCalcs class. To achieve universality (the ability to calculate the magnetic induction for every point in space) the constructor has two parameters, the values of ρ and z. In the current implementation the value of the first parameter is always 0 and only the second parameter changes its value. Separate instance methods for calculation are supplied: BonZ(), the etalon method for on-axis points, and BEveryWhere(), the matching method for off-axis points. The BEveryWhere() method has a parameter passed using the delegate mechanism, thus allowing the method used for complete elliptic integral calculation to be changed dynamically (Fig. 2).

Figure 2. Program components (FieldCalc class with BonZ() and BEveryWhere(); delegate to the EllipIntegralsCalc class with Graph(), Table() and AGM() methods)

IV.
CONCLUSION

The presented program component has been successfully applied for the evaluation of the effectiveness, numerical stability and precision of the calculations of the low-frequency magnetic field generated by different configurations of field sources. The structure and implementation of the program component allow the methods for complete elliptic integral calculation, exposed by the FieldCalc class, to be replaced, improved or extended with additional methods without changes in the design of the component or of the applications that reference it.

REFERENCES

[1] M. P. Zlatev, Theoretical Electrotechnic, Technika, Sofia, 97.
[2] B. C. Carlson, "On computing elliptic integrals and functions", J. Math. and Phys., 44 (1965), pp. 36-51.
[3] Developing Microsoft .NET Applications for Windows with Visual C# .NET.
[4] V. Todorova, "3D computer simulation of the static magnetic field distribution over the virtual human body", Information Technologies and Control, ISSN 3-6.
[5] V. Dimitrova and St. Maleshkov, "Methods for 3D surface subdivision in calculation and visualization of static magnetic field distribution", Electronics and Electrotechnics, 7.
[6] V. Todorova and St. Maleshkov, "Geometric Data Exchange in XML Format Using .NET Environment", Computer Science 4, Technical University of Sofia, Bulgaria.
[7] V. Todorova and St. Maleshkov, "OpenGL programming environment: Problems and Solutions", XVIII Conference SAER-4, Varna, Bulgaria.
[8] D. Dimitrov and A. Dimitrov, "Computer Simulation of Low Frequency Magnetic Field", pp. 7-75, Proceedings of the 7th EAEEIE Annual Conference on Innovation in Education for Electrical and Information Engineering, June -3, 6, University of Craiova, Romania.
[9] D. Dimitrov and H. Hristov, "Modeling the Moving of Charges in Homogenous Magnetic Field", pp. 7-3, Proceedings of the First International Conference on Communications, Electromagnetics and Medical Applications (CEMA 6), October 9-, Sofia, Bulgaria.
[10] D. Dimitrov, "An Investigation on Propagation and Absorption of Electromagnetic Signals Through Biological Media", pp. -6, Proceedings of the First International Conference on Communications, Electromagnetics and Medical Applications (CEMA 6), October 9-, Sofia, Bulgaria.

Investigation of Maximally Flat Fractional Delay All-pass Digital Filters

Kamelia S. Ivanova and Georgi K. Stoyanov

Abstract - In this paper the relations between the allpass transfer function pole placement and the fractional delay parameter values are analysed and new closed form expressions are derived. It is shown that the poles take very unusual positions compared to other filter realizations. Then, the sensitivities of the most popular allpass sections are investigated and the most appropriate structures for different delay-time values are identified. Using these results it is possible to design high accuracy fractional delay structures over different frequency ranges and in a limited wordlength environment.

Keywords - IIR digital filters, allpass sections, fractional delay, maximally flat approximation, pole position, low sensitivity.

I. INTRODUCTION

Recently there has been a growing interest in developing fractional delay digital filters, which have appeared to be very useful in numerous fields of digital signal processing and digital communications (timing adjustment, jitter elimination, digital modems, and speech synthesis) [1]. The theory and design methods of FIR fractional delay filters are quite well developed [1], [2], [3] and mature enough to have convenient structures to implement them. There are, however, very few publications about IIR fractional delay filters, probably because of the problems connected to IIR realizations, such as possible instabilities, higher levels of round-off noise and worse behavior in a limited wordlength environment due to their higher sensitivities. In general, the obtained solution has to be checked so that all poles of the filter remain within the unit circle in the z-domain. The design of an IIR fractional delay filter is by far more complicated than that of the corresponding FIR filters.
In this work we choose to investigate allpass based fractional delay IIR filters, because they have the best magnitude properties, permitting us to concentrate on the phase response characteristics. We use the approximation procedure proposed by Thiran [4], which appears to be the most appropriate for the design of fractional delay digital structures with a maximally flat group delay. When designing recursive digital filters in a limited wordlength environment, it is very important to develop or choose allpass sections with minimized sensitivities for every given transfer function pole position. But the pole positions are varying considerably for the different values of the realized fractional delay, so a thorough analysis of the connections between the pole placement and the fractional delay parameter values has been made in this paper. Additionally, we investigate the range of fractional delay parameter values for which the allpass sections remain stable, as well as the range of values for which the allpass sections have only real poles. We derive analytical relations between the fractional delay parameter values and the poles for second, third and fourth order allpass sections. The results so obtained are presented analytically and graphically. These results generalize the behavior of the fractional delay allpass sections so that they can be used to design high accuracy fractional delay structures in different frequency ranges and in a limited wordlength environment.

Kamelia S. Ivanova and Georgi K. Stoyanov are with the Faculty of Communications and Communications Technology, Technical University, Kliment Ohridski 8, Sofia, Bulgaria.

II. ANALYSES OF ALLPASS BASED FRACTIONAL DELAY FILTERS OF DIFFERENT ORDER

There are several approaches to approximate given phase, group delay, or phase delay response specifications [1], [2], [3].
To obtain maximally flat group delay responses, we select the method proposed by Thiran, because it provides a closed form solution for the allpass transfer function coefficients. The coefficients of an allpass filter with a maximally flat group delay response at zero frequency can be expressed as [4]:

a_k = (−1)^k·C(N, k)·∏_{n=0}^{N} (D − N + n) / (D − N + k + n), for k = 0, 1, …, N,  (1)

where C(N, k) denotes the binomial coefficient. This allpass filter is stable when D > N and when N − 1 < D < N, as it was observed in [1]. We have shown in [7] that the transfer function pole placements are closely related to the fractional delay parameter values. The fractional delay parameter values must be very carefully selected to keep the transfer function poles inside the unit circle.

A. Investigation of a second order transfer function

It is easy to obtain the two real poles of the second order fractional delay allpass transfer function when the fractional delay parameter value is 1 < D < 2, and the complex-conjugate pair when D > 2. The complex-conjugate pole pair can be expressed as a function of the fractional delay parameter value as follows:

p_{1,2} = (D − 2)/(D + 1) ± j·sqrt( 3(D − 2) / ((D + 1)²(D + 2)) ).  (2)

The possible pole positions as a function of the fractional delay parameter value, increasing from two to infinity, are shown in Fig. 1. One could notice that the transfer function poles occupy fixed positions on the root loci. The most common requirement for real applications is a time delay with small fractional delay parameter values (N − 0.5 < D < N + 0.5, where N is the transfer function order), which means that the poles should be positioned near z = 0 (more specifically, close to the origin on both the real and imaginary axes in the z plane).

Fig. 1. Possible pole positions of second order allpass transfer function

B. Investigation of a third order transfer function

A similar investigation can be made for the third order allpass transfer function. The third order fractional delay allpass filter is stable for fractional delay parameter values D > 2. In most of the cases there are one real pole and a pair of complex-conjugated poles. We identify two distinct situations. In the first one, for 2 < D < 3, the real pole is negative and the complex-conjugated poles have positive real parts on the lower root loci. This placement is specific for the N − 0.5 < D < N fractional delay parameter values. In the second case, for D > 3, the real pole and the real part of the complex-conjugated pair are positive. The complex-conjugated poles take values on the upper root loci (Fig. 2). Here one could conclude that the pole placements for small fractional delay parameter values are concentrated in the vicinity of z = 0. The real pole and the complex-conjugated pair are:

p,3 3 C ( D 3) A D 3 p = 4 +, (3)
3 A( D + ) ( D + )( D + ) C D + 3 C ( D 3) A D 3 = + + ± 3 A( D + ) ( D + )( D + ) C D +, (4)
3 3 C ( D 3) A j A( D + ) ( D + )( D + ) C

where A = D 3 + 6D + D 6, (5)
D 3 8D + D 8 B = D + and (6)
C = ( 4D + 4D BD + 45B) A ( D + ). (7)

Fig. 2. Possible pole positions of third order allpass transfer function
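Equation (1) and the second-order pole pair (2), as reconstructed here, can be cross-checked numerically. The sketch below assumes Python's math.comb/math.prod and an illustrative D = 2.5:

```python
import cmath
from math import comb, prod, sqrt

def thiran_allpass(N, D):
    """Denominator coefficients a_0..a_N of the N-th order Thiran
    allpass filter with maximally flat group delay D (Eq. (1))."""
    a = [1.0]
    for k in range(1, N + 1):
        a.append((-1) ** k * comb(N, k)
                 * prod((D - N + n) / (D - N + k + n) for n in range(N + 1)))
    return a

def thiran2_poles(D):
    # Poles as roots of z^2 + a1*z + a2 for the second-order case.
    _, a1, a2 = thiran_allpass(2, D)
    disc = cmath.sqrt(a1 * a1 - 4.0 * a2)
    return (-a1 + disc) / 2.0, (-a1 - disc) / 2.0

def thiran2_poles_closed(D):
    # Eq. (2), valid for D > 2: (D-2)/(D+1) +/- j*sqrt(3(D-2)/((D+1)^2(D+2))).
    re = (D - 2.0) / (D + 1.0)
    im = sqrt(3.0 * (D - 2.0) / (D + 2.0)) / (D + 1.0)
    return complex(re, im), complex(re, -im)
```

For N = 1 and D = 1.5 the routine reproduces the well known first-order value a_1 = (1 − D)/(1 + D), and for D = 2.5 the quadratic roots match the closed form, with both poles safely inside the unit circle.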
C. Investigation of a fourth order transfer function

The investigation of fourth order transfer functions leads to similar conclusions, as shown in Fig. 3. One specific distinction of this function is that the upper loci have a negative real part for small variations of the fractional delay parameter D.

Fig. 3. Possible pole positions of fourth order allpass transfer function

At this point it is easy to make a generalized conclusion about the behavior of N-th order allpass structures: they are stable for D > N − 1, given that for D = N − 1 there exist N solutions at z = 0. There are always real poles in the range of values N − 1 < D < N, and at least one of them is always negative. There are always pairs of complex-conjugated poles for the

case of D > N, and increasing N leads to a shift of the bigger part of the loci toward the left half of the z-plane.

III. SECOND ORDER FRACTIONAL DELAY ALLPASS SECTIONS

It is clear from the previous section that the transfer function poles of the allpass circuits with fractional delay are lying in the vicinity of z = 0, and thus we need realizations with a higher pole density in this region in order to ensure high fractional delay time accuracy. Our extensive search has shown that no such realizations exist. The most promising candidate is the one based on the famous coupled form, which has an equal pole density inside the unit circle. Unfortunately, we could not synthesize an allpass section with a uniform pole distribution, and because of that we have to investigate and compare the other known allpass sections in order to identify those with lower sensitivity for each pole position. We have shown in [7] that for small values of the fractional delay parameter, N − 0.5 < D < N + 0.5, the phase delay response remains constant over a wider range of frequencies, and this range narrows when D increases. In fact, realizations with larger D (i.e. transfer function poles near z = 1) can be used for the implementation of fractional delay filters for very special applications. When we want to achieve a larger non-integer time delay, it is recommendable to use a cascade of the necessary integer number of delay elements and one second order fractional delay allpass structure. This will ensure the largest possible frequency range over which the group delay response will stay flat.
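The flat-delay behaviour claimed above can be verified numerically for a second-order section. A sketch follows, using the Thiran coefficients of Eq. (1) for an illustrative D = 2.3 and checking the response at a low frequency:

```python
import cmath
from math import comb, prod

def thiran_allpass(N, D):
    # Eq. (1): Thiran denominator coefficients a_0..a_N for delay D.
    a = [1.0]
    for k in range(1, N + 1):
        a.append((-1) ** k * comb(N, k)
                 * prod((D - N + n) / (D - N + k + n) for n in range(N + 1)))
    return a

def phase_delay(D, w):
    """Magnitude and phase delay -arg(H(e^jw))/w of the 2nd-order Thiran
    allpass H(z) = (a2 + a1 z^-1 + z^-2) / (1 + a1 z^-1 + a2 z^-2)."""
    _, a1, a2 = thiran_allpass(2, D)
    zi = cmath.exp(-1j * w)                     # z^-1 on the unit circle
    H = (a2 + a1 * zi + zi ** 2) / (1.0 + a1 * zi + a2 * zi ** 2)
    return abs(H), -cmath.phase(H) / w

mag, pd = phase_delay(2.3, 0.01)
```

Being allpass, the section has unit magnitude at every frequency, while near w = 0 the phase delay approximates the designed D; cascading integer delays only adds a constant to this delay, which is why the cascade keeps the widest flat range.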
The transfer functions of the most popular allpass sections are given as Eqs. (8)–(15) in [5]: the MHA and MHB (Mitra–Hirano), KWA and KWB, GM (Gray–Markel), AL (Ansari–Liu) and STA and STB (Stoyanov) second-order structures. Every one of them realizes the general second-order allpass function

H(z) = (c2 + c1·z⁻¹ + z⁻²) / (1 + c1·z⁻¹ + c2·z⁻²),

differing only in how the multiplier coefficients of the structure parameterize the denominator coefficients c1 and c2. After representing the coefficients of these sections through the coefficients of the Thiran approximation, we get the results shown in Tables I–IV. With these formulae it is possible to design and realize the corresponding allpass sections for any given delay parameter D.

TABLE I: MHA and MHB fractional delay filter coefficients
TABLE II: KWA and KWB fractional delay filter coefficients
TABLE III: AL and GM fractional delay filter coefficients
TABLE IV: STA and STB fractional delay filter coefficients

IV. SENSITIVITY INVESTIGATIONS

Next we investigate the phase response sensitivities of the fractional delay allpass sections so obtained, for two values of the delay parameter D, using the package PANDA [6]. The worst-case sensitivities (with the transfer function coefficients given in the tables) are shown in Figs. 4–11. It appears that the Mitra–Hirano (MHA and MHB), Gray–Markel (GM) and Ansari–Liu (AL) structures are the most appropriate for small delay parameter values, since they have the lowest phase response sensitivity. For poles near z = 1

the low-sensitivity STA and STB sections behave much better than all other known sections.

Figs. 4–11. Worst-case sensitivities of the MHA, MHB, KWA, KWB, AL, GM, STA and STB sections.

V. CONCLUSIONS

In this paper we have investigated the behavior of fractional delay allpass filters with a maximally flat group delay response. It was found that the transfer function poles of these filters are situated quite differently compared to those of the known allpass sections. It was shown that the sections' sensitivities depend strongly on the value of the fractional delay parameter D, and the most suitable sections for some typical pole locations have been pointed out. Similar sensitivity analyses for other pole locations (different values of D) should be conducted, and the most proper allpass sections should be selected or synthesized, in order to ensure an accurate realization of the fractional delay in a limited wordlength environment.

REFERENCES

[1] T. Laakso, V. Valimaki, M. Karjalainen and U. Laine, "Splitting the unit delay – tools for fractional delay design", IEEE Signal Processing Mag., vol. 13, no. 1, pp. 30–60, Jan. 1996.
[2] V. Valimaki, Discrete-Time Modeling of Acoustic Tubes Using Fractional Delay Filters, Dr. Tech. Thesis, Espoo, Finland: Helsinki Univ. of Tech., Faculty of Electrical Eng., Laboratory of Acoustics and Audio Signal Processing, Dec. 1995.
[3] V. Valimaki, "A new filter implementation strategy for Lagrange interpolation", Proc. IEEE Int. Symp. Circuits and Systems (ISCAS'95), Seattle, Washington, vol. 1, April–May 1995.
[4] J. P. Thiran, "Recursive digital filters with maximally flat group delay", IEEE Trans. Circuit Theory, vol. CT-18, no. 6, pp. 659–664, Nov. 1971.
[5] G. Stoyanov and A.
Nishihara, "Very low sensitivity design of digital IIR filters using parallel and cascade-parallel connections of all-pass sections", Bulletin of INCOCSAT, Tokyo Institute of Technology, vol. 1, March 1995.
[6] N. Sugino and A. Nishihara, "Frequency-domain simulator of digital networks from the structural description", Trans. of the IEICE of Japan, vol. E73, Nov. 1990.
[7] K. S. Ivanova, V. I. Anzova and G. K. Stoyanov, "Low sensitivity realizations of allpass based fractional delay filters", Proc. TELECOM 2005, St. Konstantine and Elena, Varna, Bulgaria, Oct. 2005.

A Unification of Determined and Probabilistic Methods in Pattern Recognition

Geo Kunev, Georgi Varbanov and Christo Nenov

Abstract – The main goal of the present work is to find possibilities for achieving uniform rules for pattern recognition, regardless of the significant differences in the initial assumptions and in the methods used to reach the final form of the algorithms. We present the most used decision-making rules as an aggregate of: a uniform procedure for state estimation, the calculation of a linear or quadratic form, and the comparison of the result with some threshold value. These results can be applied in KDD (data mining) software.

Keywords – Classification, Bayes, Pattern recognition, KDD

Geo Kunev, Georgi Varbanov and Christo Nenov are with the Faculty of Computer Sciences, TU Varna, Studentska Str., Varna, Bulgaria.

I. INTRODUCTION

When the features of the classes occupy non-convex areas, we search for a dividing function in full quadratic form or in a reduced one:

g(x) = c0 + Σ_{i=1..d} ci·xi + Σ_{k=1..d} Σ_{j=1..d} ckj·xk·xj (1)

or:

g(x) = XᵀAX + aᵀX + a0 (2)

We search for one-type pattern recognition rules for:
1. algorithms based on the theory of optimal statistical decision making,
2. algorithms based on the dispersion of the probabilistic recognition features,
3. algorithms based on minimizing the geometric mean recognition error.

II. OPTIMAL STATISTICAL DECISION MAKING

In this well-known case [1], the pattern recognition task is treated as a common statistical task with a predefined optimality criterion. The most used criterion is a loss function I(x, x̂) for the calculation of the conditional mean risk, searching for the best estimate [4]:

R(A/x) = M[I(x, x̂)/x] (3)

R(A) = M[R(A/x)] = M[I(x, x̂)] → min (4)

The average risk for repeated recognition of M classes is:

R = Σ_{k=1..M} Σ_{m=1..M} ckm·P(xm)·∫_{Xk} f(y/xm) dy (5)

The decision rule for M classes (j = 1, …, M) is:

y ∈ Xj, if cj·P(xj)·f(y/xj) = max (6)

The decision rule for two classes is:

yi ∈ X1, if λ = f(yi/x1)/f(yi/x2) > λ0 = P(x2)(c21 − c22) / [P(x1)(c12 − c11)];  yi ∈ X2, if λ < λ0 (7)

For a normal distribution of the features over M classes:

yi ∈ Xj, if gj(y) = −½·yᵀΣj⁻¹y + μjᵀΣj⁻¹y − ½·μjᵀΣj⁻¹μj − ½·ln|Σj| + ln P(xj) = YᵀAjY + ajᵀy + aj0 → max (8)

For two classes and under the conditions:

P(x1) = P(x2), (c12 − c11) = (c21 − c22), (9a)

Σ1 = Σ2 = Σ, we have the linear recognition rule:

y ∈ X1, if yᵀΣ⁻¹(μ1 − μ2) − ½(μ1 + μ2)ᵀΣ⁻¹(μ1 − μ2) > 0 (9b)

Bayesian strategies thus compare the projection of the recognized observation (vector) onto the direction of Eq. (10) with the projection of the mean values vector onto the same direction:

a = Σ⁻¹(μ1 − μ2) for two classes;  aj = Σ⁻¹μj for M classes. (10)
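Under the conditions (9a) and equal covariance matrices, the Bayesian rule reduces to exactly the projection-versus-threshold scheme of Eqs. (9b)–(10). A small self-contained sketch for two 2-dimensional classes (the helper names are illustrative, not from the paper):

```python
def inv2(S):
    """Inverse of a 2x2 matrix [[p, q], [r, s]]."""
    (p, q), (r, s) = S
    det = p * s - q * r
    return [[s / det, -q / det], [-r / det, p / det]]

def matvec(S, v):
    return [S[0][0] * v[0] + S[0][1] * v[1], S[1][0] * v[0] + S[1][1] * v[1]]

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

def linear_bayes(mu1, mu2, Sigma):
    """Equal priors and covariances: decide class 1 iff a'y > a'(mu1+mu2)/2,
    with the projection direction a = Sigma^-1 (mu1 - mu2), cf. Eq. (9b)."""
    a = matvec(inv2(Sigma), [mu1[0] - mu2[0], mu1[1] - mu2[1]])
    threshold = 0.5 * (dot(a, mu1) + dot(a, mu2))
    return lambda y: 1 if dot(a, y) > threshold else 2

classify = linear_bayes([2.0, 0.0], [-2.0, 0.0], [[1.0, 0.0], [0.0, 1.0]])
```

Here classify([1.5, 0.3]) lands in class 1 and classify([-1.0, 0.2]) in class 2: the observation is projected onto a and compared with a threshold, which is precisely the uniform computing scheme the unification aims at.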

III. RECOGNITION MATRICES STUDY

With limited a priori statistics we search for recognition rules based on the metrics of the sub-areas of the different classes. For two classes we can use Fisher's nonzero-hypothesis criterion [2]:

F = S²Ω / S²R, (11a)

where S²Ω is the dispersion estimate between the groups and S²R is the dispersion estimate within a group. As a multidimensional analogue we can use the Mahalanobis distance between groups with means μi and μj and a common covariance matrix V:

D² = (μi − μj)ᵀ V⁻¹ (μi − μj) (11b)

An estimate of the dispersion within the groups for M classes is:

S_W = Σ_{k=1..M} P(xk)·E{(y − μk)(y − μk)ᵀ / xk} = Σ_{k=1..M} P(xk)·Σk, (12)

where P(xk) is the a priori probability of each class, Σk are the class covariance matrices, and y, μk are the observation and class mean vectors.

An estimate of the dispersion between the groups for M classes is:

S_B = Σ_{k=1..M} P(xk)·(μk − μ)(μk − μ)ᵀ (13)

An estimate based only on the statistics is:

S_B' = Σ_k Σ_m (μk − μm)(μk − μm)ᵀ (14)

We search for the best transformation in the form [2]:

Z = aᵀY + a0, maximizing J = S_B* / S_W*, (15)

where after the linear transformation the scatter matrices become:

S_B* = AᵀS_B·A,  S_W* = AᵀS_W·A, (16)

and y, μk, μ, S_B, S_W are the observations, means and dispersion matrices in the initial space, while Z, μk*, μ*, S_B*, S_W* are the same after the linear transformation. For two classes we have:

aᵀΣk·a = σk*²,  μk* = aᵀμk + a0, (17)

J = (μk* − μj*)² / (σk*² + σj*²), (18)

and the extremum conditions dJ/dai = 0 can be written through the partial derivatives:

dJ/dσk*² = dJ/dσj*² = −(μk* − μj*)² / (σk*² + σj*²)², (19a)

dJ/dμk* = −dJ/dμj* = 2(μk* − μj*) / (σk*² + σj*²). (19b)

Since dσk*²/da = 2Σk·a and dμk*/da = μk (and similarly for j), (20)

the solution is:

a = (Σk + Σj)⁻¹(μk − μj). (21)

For equal covariance matrices Σk = Σj = Σ:

a = Σ⁻¹(μk − μj). (22)

In the case of a linear transformation for two classes with equal covariance matrices, the decision is known as Fisher's linear discriminant [2],[3]:

S_B·Wi = λ·S_W·Wi (23)

If S_W is a non-degenerate matrix:

S_W⁻¹·S_B·Wi = λ·Wi (24)
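The direction of Eq. (22) can be estimated directly from training samples by replacing Σ with the within-class scatter matrix S_W. A minimal two-class, two-feature sketch (illustrative helper names):

```python
def mean2(X):
    n = len(X)
    return [sum(x[0] for x in X) / n, sum(x[1] for x in X) / n]

def scatter2(X, mu):
    """Class scatter: sum of (x - mu)(x - mu)^T over the samples."""
    S = [[0.0, 0.0], [0.0, 0.0]]
    for x in X:
        d = [x[0] - mu[0], x[1] - mu[1]]
        for i in range(2):
            for j in range(2):
                S[i][j] += d[i] * d[j]
    return S

def fisher_direction(X1, X2):
    """Fisher's discriminant direction w = S_W^-1 (mu1 - mu2), cf. Eq. (22)."""
    m1, m2 = mean2(X1), mean2(X2)
    S1, S2 = scatter2(X1, m1), scatter2(X2, m2)
    Sw = [[S1[i][j] + S2[i][j] for j in range(2)] for i in range(2)]
    (p, q), (r, s) = Sw
    det = p * s - q * r
    inv = [[s / det, -q / det], [-r / det, p / det]]
    diff = [m1[0] - m2[0], m1[1] - m2[1]]
    return [inv[0][0] * diff[0] + inv[0][1] * diff[1],
            inv[1][0] * diff[0] + inv[1][1] * diff[1]]

w = fisher_direction([[2, 0], [3, 0], [2, 1], [3, 1]],
                     [[-2, 0], [-3, 0], [-2, 1], [-3, 1]])
```

For these two symmetric clouds w points along the first axis, the direction that separates the class means.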

For the dichotomy of two classes S_B' and S_B are proportional, and this leads to the same result as Eq. (22). In the multidimensional case:

W = S_W⁻¹(μk − μi) (25)

After centering:

ak = S_Wk⁻¹(μk − μ), (26)

ak = S_Wk⁻¹·μk. (27)

IV. MINIMIZING THE GEOMETRIC MEAN RECOGNITION ERROR

The goal of this group of methods is to develop procedures for finding the coefficients of the dividing functions. After the substitutions

y = φ(x) (the vector of the features xi and their products) and a = {ci, cij}, (28)

we can write Eq. (2) in the form of a linear dividing function [3]:

g(x) = aᵀY (29)

or:

g(x) = aᵀY + c0. (30)

Presuming a linear class divider:

aᵀ(ym − yn) = 0, (31)

the vector a is normal to every vector lying on the dividing surface. So, for every pair of classes (Wi, Wj):

aᵀym > 0, if ym ∈ Wi;  aᵀym < 0, if ym ∈ Wj. (32)

After the transformation ym → −ym for ym ∈ Wj, the decision rule becomes:

aᵀym > 0;  (ym ∈ Wi, Wj). (33)

Since the decision area supposes a non-unique solution, additional constraints can be used, for example requiring a positive margin for the weight vector:

aᵀym ≥ b > 0;  (ym ∈ Wi, Wj). (34)

In this case the learning task is to find the weight vector a best matching the equation system:

Ya = b;  Y[n, d];  b[n, 1];  n > d. (35)

In the common case this system has no exact solution; therefore, after setting the error vector:

e = Ya − b, (36)

the task can be treated as the classical minimization of:

J(a) = ||Ya − b||² = Σ_{i=1..n} (aᵀyi − bi)², (37)

with:

∇a J(a) = 2·Yᵀ(Ya − b) = 2·Σ_{i=1..n} (aᵀyi − bi)·yi = 0, (38)

i.e.:

YᵀY·a = Yᵀb. (39)

If the matrix YᵀY (d×d) is non-degenerate, the vector is:

a = (YᵀY)⁻¹Yᵀb = Y#·b, (40)

where Y# = (YᵀY)⁻¹Yᵀ, [d × n], is the well-known pseudo-inverse matrix. In this procedure there can be problems with the pseudo-inversion. There are different concrete schemes for applying the least squares method, and the best is the Ho-Kashyap procedure.
This procedure moves in steps toward the minimum of Eq. (37), keeping the gradient directions:

∇a J = Yᵀ(Ya − b), (41)

∇b J = −(Ya − b). (42)

It begins with the statistics grouped into two classes as a generalized normalized observation matrix of the form:

Y = [ ui  xi ; −uj  −xj ], (43)

where xi and xj are the observations of classes Wi and Wj, and the column vectors ui, uj consist of ni (respectively nj) threshold values equal to one. If the weight and constraint vectors are:

A = [a0; a],  b = [ (n/ni)·ui ; (n/nj)·uj ],  n = ni + nj, (44)

then, according to Eq. (39):

expanding the normal equations with the matrices of Eqs. (43) and (44) (Eq. (45)) gives the threshold a0 = −μᵀa and, for the weight vector (Eqs. (46)–(48)):

[ S_W + (ni·nj/n)·(μi − μj)(μi − μj)ᵀ ]·a = n·(μi − μj).

Since the direction of the vector (μi − μj)(μi − μj)ᵀ·a coincides with the direction of (μi − μj) for every a, we get:

S_W·a = nα·(μi − μj),  a = nα·S_W⁻¹(μi − μj). (49)–(50)

After excluding the inessential scalar coefficient nα, we obtain the direction that minimizes the sum of squares:

a = S_W⁻¹(μi − μj). (51)

V. CONCLUSION

A comparison of Eqs. (10), (22), (25), (26), (27) gives the conditions under which the recognition procedures based on the optimal linear transformation of the feature space, Eq. (15), and Fisher's linear discriminant have the same mathematical sense. In both cases we project the observation vector onto the direction S_W⁻¹(μk − μi) and compare the result with some threshold value. So, according to Eqs. (8), (9), (10), we can assert the algorithmic equality of the recognition procedures. Comparing Eqs. (10), (22), (26), (51) shows the relation between linear parametric and non-parametric, probabilistic and determined methods, and also the equivalence conditions of the respective recognition procedures. The fact that the least squares procedure and the maximum likelihood procedure approximate in probability Fisher's linear discriminant shows that we have reason to speak not about different procedures, but about asymptotically close procedures with a common computing scheme. The computed value (projection) is compared with a threshold value, which defines the optimum decision in the sense of minimum risk, maximum conditional or a posteriori probability (Fig. 1).
ACKNOWLEDGEMENT

This paper is a result of a study carried out under the kind guidance of Professor Dr. Asen Nedev at TU Varna.

Fig. 1. Classes and classification methods (Bayesian probability, maximum likelihood).

REFERENCES

[1] V. Vapnik, Statistical Learning Theory, J. Wiley, N.Y., 1998.
[2] R. A. Fisher, Contributions to Mathematical Statistics, J. Wiley, N.Y.
[3] K. Fukunaga, Introduction to Statistical Pattern Recognition, Academic Press, N.Y., 1990.
[4] A. Nedev and K. Tenekedjiev, Technical Diagnostics and Pattern Recognition, TU Varna.

Complex Input Signal Quantization Noise Analysis for Orthogonal Second-Order IIR Digital Filter Sections

Zlatka Nikolova

Abstract – In this paper a new method for the estimation of the complex output noise variance due to input signal quantization is proposed. The method is applied to very low-sensitivity second-order orthogonal complex IIR filter sections. They are used for the design of higher-order narrow-band cascade realizations, which are preferred in many telecommunication applications and are normally implemented with fixed-point arithmetic. It is shown experimentally that the sensitivity of the orthogonal complex structure has a profound impact on its output noise level. The proposed method is applicable to any filter structure and can be used to study complex signal quantization effects in general.

Keywords – complex orthogonal digital filters, sensitivity, quantization errors, noise analysis.

I. INTRODUCTION

Finite word-length (quantization) effects are an important fraction of the group of parasitic effects. Initially, all quantization effects were united into a single error analysis, but the most useful approach is to divide them into two categories, requiring different analysis techniques: coefficient quantization errors, arising from representing the filter coefficients as finite fixed-point numbers; and signal quantization errors, due to the finite-precision arithmetic operations of addition, multiplication and storage. The filter coefficients are quantized only once and remain constant in the filter implementation. Coefficient quantization perturbs the filter characteristics from their ideal forms. If they no longer meet the specifications, the design must be optimized again by allocating more bits or choosing a more proper filter realization. The structure of the digital filter has a significant effect on its sensitivity to coefficient quantization.
Signal quantization, on the other hand, due to truncation or rounding, is usually best viewed as a random process and can be modeled as additive white noise sources in the filter. The effect of signal quantization is to add an error or noise signal to the ideal output of the digital filter, which is a composite of one or more of the following error sources: the quantization error of the filter input signal; the errors resulting from the rounding or truncation of multiplication products within the filter; and the quantization of the output to fewer bits for input to a digital-to-analog converter or another system. Again, as for coefficient quantization, the filter structure affects the signal-quantization noise levels considerably. In this work the attention is restricted to the noise analysis due to input signal quantization. In the case of real digital filters there are various good techniques, developed long ago [1], [2].

Zlatka Nikolova is with the Dept. of Telecommunications, Technical University of Sofia, Bulgaria.

In recent years complex-coefficient digital filters have been gaining popularity, but their quantization noise theory is still not well developed. A small number of publications barely touch on the problems [3], [4]. Only specific problems have been considered so far, and no general technique for quantization noise estimation has been proposed. In this work a new method for complex analytic input signal quantization noise analysis is offered and applied to a very low-sensitivity orthogonal complex second-order section. It is shown experimentally that the low coefficient sensitivity of the circuit is accompanied by a low output noise variance due to the complex input signal quantization.

II. COMPLEX INPUT QUANTIZATION NOISE ANALYSIS

The input signal quantization is equivalent to a set of uniformly distributed noise samples e(n) added to the actual input signal x(n).
In the case of fixed-point representation with rounding, the quantization noise power (variance) of the random variable e(n) is:

σe² = δ²/12 = 2⁻²ᴮ/12, (1)

where B is the word length in bits and δ is the quantization step size. When the quantized input signal x(n) is complex, the originated noise source e(n) must be complex too. Then the complex output signal y(n) will be mixed with a complex output noise v(n). The noise model of complex input signal quantization is shown in Fig. 1.

Fig. 1: Noise model due to complex input signal quantization: e(n) = eRe(n) + j·eIm(n) is added to the input x(n) = xRe(n) + j·xIm(n) of the complex filter, producing the noise v(n) = vRe(n) + j·vIm(n) at the output y(n) = yRe(n) + j·yIm(n)

Analytic signals are processed by a special class of complex digital filters, named orthogonal, whose transfer functions can be presented by their real and imaginary parts as follows:

H(z) = HRe(z) + j·HIm(z). (2)

Fig. 2: Block diagram of the complex digital filter structure (noise representation), built of the four real transfer functions HRR(z), HRI(z), HIR(z) and HII(z)
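Eq. (1) is easy to verify numerically: rounding a signal to a B-bit grid produces an error whose measured variance is close to δ²/12, and the real and imaginary parts of a complex signal are quantized independently in the same way. A small sketch, assuming δ = 2⁻ᴮ (illustrative, not the paper's exact fixed-point format):

```python
import random

def quantize(x, B):
    """Round x to the fixed-point grid with step delta = 2^-B."""
    delta = 2.0 ** -B
    return round(x / delta) * delta

random.seed(0)
B = 8
delta = 2.0 ** -B
# one noise source, e.g. the real part; the imaginary part behaves identically
xs = [random.uniform(-1.0, 1.0) for _ in range(100000)]
errors = [quantize(x, B) - x for x in xs]
mean_e = sum(errors) / len(errors)
var_e = sum((e - mean_e) ** 2 for e in errors) / len(errors)
# var_e is close to the theoretical delta^2 / 12 = 2^-2B / 12
```

The measured var_e agrees with Eq. (1) to within the statistical accuracy of the sample.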

Realized with real elements, a complex orthogonal structure (Fig. 2) has two inputs – real and imaginary – and a corresponding output couple, producing thereby four real-coefficient transfer functions, pairwise equal up to a ± sign:

HRe(z) = HRR(z) = HII(z);  HIm(z) = HRI(z) = −HIR(z). (3)

The real xRe(n) and imaginary xIm(n) parts of an analytic signal x(n) are in-phase and quadrature components. If their levels are much larger than the quantization step size δ, the resulting quantization errors eRe(n) and eIm(n) can be modelled as additive noise sources. The following assumptions can be made:
– The quantization errors are uniformly distributed over the range (−0.5δ, 0.5δ). They are stationary white noise sequences (i.e. e(n) and e(m) for n ≠ m are uncorrelated);
– The error sequence is uncorrelated with the initial signal sequence;
– xRe(n) and xIm(n) are orthogonal, i.e. sufficiently different, so that the quantization errors eRe(n) and eIm(n) are uncorrelated.

Normally, the real and imaginary parts of the analytic signal are quantized with the same word length. Hence, their noise variances will be identical, σ²e,Re = σ²e,Im = σe², and calculated by Eq. (1). The assumption that the noise signals are statistically independent from source to source implies that the quantization noise power of their sum is equal to the sum of the respective quantization noise powers. In effect, superposition can be employed and, as the structure in Fig. 2 makes obvious, the real vRe(n) and imaginary vIm(n) components of the complex output noise v(n) will have variances composed respectively as follows:

σ²v,Re = σ²v,HRe + σ²v,HIm (4)

σ²v,Im = σ²v,HRe + σ²v,HIm, (5)

where σ²v,HRe and σ²v,HIm are the corresponding output noise variances of the real HRe(z) and imaginary HIm(z) parts of the orthogonal complex transfer function (2):

σ²v,HRe = (σe²/2πj) ∮ HRe(z)·HRe(z⁻¹)·z⁻¹ dz, (6)

σ²v,HIm = (σe²/2πj) ∮ HIm(z)·HIm(z⁻¹)·z⁻¹ dz. (7)

III. COMPLEX ORTHOGONAL DIGITAL FILTER CIRCUIT DERIVATION

In order to test the method for the complex output noise variance proposed in Section II, it is executed on two orthogonal sections – the DF (Direct Form)- and LS (Low Sensitivity)-based structures. They are derived after the circuit (pole rotation) transformation into orthogonal form:

z⁻¹ → j·z⁻¹ or z⁻¹ → −j·z⁻¹, (8)

is applied to the low-pass (LP) second-order real prototypes [5]. The obtained orthogonal complex-coefficient transfer functions have real and imaginary parts of band-pass (BP) type and doubled order. Eqs. (9) and (10) give the resulting fourth-order functions HRe(z) and HIm(z) of the DF-based orthogonal complex structure (Fig. 3a), expressed through its coefficients g; Eqs. (11) and (12) give the corresponding functions of the LS-based orthogonal section (Fig. 3b), expressed through its coefficients a and b. The orthogonal complex filter structures have a canonical number of elements and preserve some properties of their real prototypes. In order to verify this deduction with respect to the input signal quantization noise assessment, the orthogonal BP filters are turned into narrow-band realizations, which are the most often used in practice. It has been shown experimentally that the narrow-band BP LS-based structure has many times lower coefficient sensitivity than the DF-based orthogonal section in a very short word-length setting [6].
Fig. 3: BP orthogonal structures based on (a) DF and (b) LS second-order sections
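By Parseval's relation, the contour integrals in Eqs. (6)–(7) equal the sum of the squared impulse response samples of the corresponding transfer function, so each output noise variance is σe² multiplied by that sum (the "noise gain"). A sketch on a hypothetical first-order example, not one of the paper's sections:

```python
def impulse_response(b, a, n):
    """First n samples of the impulse response of H(z) = B(z)/A(z), a[0] = 1."""
    h = []
    for i in range(n):
        y = b[i] if i < len(b) else 0.0
        y -= sum(a[k] * h[i - k] for k in range(1, len(a)) if i - k >= 0)
        h.append(y)
    return h

def noise_gain(b, a, n=2000):
    """Sum of squared impulse response samples: the factor multiplying
    sigma_e^2 in the output noise variance, cf. Eqs. (6)-(7)."""
    return sum(v * v for v in impulse_response(b, a, n))

# H(z) = 1 / (1 - 0.9 z^-1): h[n] = 0.9^n, closed-form sum = 1 / (1 - 0.81)
g = noise_gain([1.0], [1.0, -0.9])
```

The output noise variance is then σe²·g; a realization of the same transfer function with a lower noise gain yields a proportionally lower output noise variance.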

IV. NOISE ANALYSIS OF COMPLEX OUTPUT SIGNAL QUANTIZATION ERRORS

In this section both the real and the orthogonal structures are investigated with regard to the output errors after analytic input signal quantization. Initially the real input signal is quantized with different word lengths. The output noise variance for the LS and DF real sections is calculated for the same pole disposition, providing narrow-band LP realizations. Experimental results for input signal quantization from 3 to 8 bits are shown in Fig. 4. Apparently, the output noise variance of the low-sensitivity LS section is significantly lower than that of the DF section when the input signal is limited to 3 bits only. The numerical results in Table 1 show that the difference becomes less significant as the word length grows.

Fig. 4: Output noise variance as a function of input signal quantization for the LS and DF real sections

Table 1: Output noise variances of the real sections for input signal word lengths from 3 to 8 bits – DF-based (×10⁻²) and LS-based (×10⁻³)

Applying the method proposed in Section II, a complex input signal quantization noise analysis is performed. Experimental results for the complex output noise variances of the LS and DF orthogonal sections in different complex input signal word-length environments are presented in Table 2.

Table 2: Complex output noise variances of the orthogonal complex sections for different input signal word lengths – DF-based (×10⁻³) and LS-based (×10⁻⁴)

In order to compare the obtained complex output noise variances, their complex moduli are graphically presented in Fig. 5. Obviously, the low-sensitivity LS-based orthogonal complex section demonstrates more than two times lower output noise in the case of 3-bit input signal quantization. Let us note that a shorter word-length quantization of the input signal means lower power consumption and a faster computation process.
For low-sensitivity circuits the resistance to quantization effects provides a better signal-to-noise ratio (SNR), i.e. higher quality digital signal processing.

Fig. 5: Output noise variances after analytic input signal quantization for the LS and DF-based orthogonal complex sections

V. EXPERIMENTS

The examined narrow-band orthogonal second-order filter sections are tested for limited word-length analytic signal processing. The quantized complex input signal is a mixture of white noise and an analytic sinusoidal signal. The uniformly distributed white noise samples correspond to the word length of the input complex signal after its quantization. In Fig. 6a the real part of the complex noise reaching the real output is shown for both the DF- and LS-based orthogonal complex structures. The imaginary output noise signals are presented in Fig. 6b. Apparently, the complex noise at the outputs due to the quantization of the analytic complex input signal is considerably higher for the DF-based section than for the LS-based one.

Fig. 6: The output noise signals after input quantization to 3 bits for the LS and DF-based orthogonal complex sections: (a) real output; (b) imaginary output.

The output SNR for the LS orthogonal section is about 2.5 times higher in comparison with the DF-based circuit. To achieve the same good result that the LS section demonstrates for 3-bit word-length input signal quantization, the DF orthogonal filter must be employed in a minimum 6-bit environment. It is clear that the level of the output noise resulting from the input signal quantization and the sensitivity of the system are directly related. Therefore, very low-sensitivity complex filter derivation is important in order to achieve better noise resistance, improved complex signal filtering and higher quality digital signal processing.

VI. CONCLUSIONS

In this paper a new method for complex noise analysis is proposed. The resulting error signals at the outputs of orthogonal complex second-order digital filter sections after input signal quantization are examined. The proposed method is general enough to be applied to complex filter sections of higher order.
After relevant alterations it could be effectively applied to the estimation of all other types of finite word-length effects in complex-coefficient systems, such as the errors from the quantization of multiplication products within the filter. The expectation that the real prototype properties would be inherited by the complex filter counterpart was confirmed once again with respect to the noise analysis after input signal quantization. It was shown experimentally that both the real and the orthogonal complex LS-based filter sections, besides very low coefficient sensitivity, demonstrate a low output noise variance due to input analytic signal quantization. The DF-based real and orthogonal complex circuits keep the same mutual performance, even though they have many times higher output noise variance than the LS-based ones. The low sensitivity of complex filters in very limited word-length circumstances for signal and coefficient quantization allows low computational complexity and provides better quality of the filtering process.

REFERENCES

[1] K. J. Astrom, E. I. Jury and R. G. Agniel, "A Numerical Method for the Evaluation of Complex Integrals", IEEE Trans. Automat. Contr., vol. AC-15, Aug. 1970.
[2] B. W. Bomar, "Computationally Efficient Low Roundoff Noise Second-Order Digital Filter Sections With No Overflow Oscillations", IEEE Conference Proceedings Southeastcon '88, April 1988.
[3] A. Wenzler and E. Luder, "New Structures for Complex Multipliers and Their Noise Analysis", IEEE International Symposium on Circuits and Systems (ISCAS'95), vol. 1, April–May 1995.
[4] P. K. Sim and K. K. Pang, "Quantization Phenomena in a Class of Complex Biquad Recursive Digital Filters", IEEE Transactions on Circuits and Systems, vol. CAS-33, no. 9, Sept. 1986.
[5] E. Watanabe and A. Nishihara, "A Synthesis of a Class of Complex Digital Filters Based on Circuitry Transformations", IEICE Trans., vol. E-74, no. 11, Nov. 1991.
[6] G. Stoyanov, M. Kawamata, Zl.
Valkova, "New first and second-order very low-sensitivity bandpass/bandstop complex digital filter sections", Proc. IEEE 1997 Region 10 Annual Conf. "TENCON'97", Brisbane, Australia, vol. 1, pp. 61–64, Dec. 2–4, 1997.

Speech Overlap Detection Algorithms Simulation

Snejana Pleshkova-Bekiarska and Damyan Damyanov

Abstract – The concern of this paper is different methods for speech overlap detection. Speech overlap is the simultaneous occurrence of speech from more than one speaker. It has some very bad effects on the operation of speech recognition systems. Speech overlap detection is one of the main areas in speech and speaker indexing. In speaker indexing, the speech signal is partitioned into segments, where each segment is uttered by only one speaker. So, the parts of speech that include two or more simultaneous speakers should be determined before any subsequent processing. Speaker overlap detection is also useful in some other speech processing applications, including speech and speaker recognition. In this paper the Spectral Auto-Correlation Peak-Valley Ratio (SAPVR) method for speech overlap detection is shown. At the end of the paper, the results from the operation of the methods are plotted: the precision rate and the detection rate. The average processing time per second of speech is also taken into consideration.

Keywords – Spectral Auto-Correlation Peak-Valley Ratio, K nearest neighbour, speech overlap detection.

Snejana Pleshkova-Bekiarska and Damyan Damyanov are with the Faculty of Telecommunications, Technical University – Sofia, 8 Kliment Ohridski St., Darvenitsa, 1756 Sofia, Bulgaria.

I. INTRODUCTION

The auto-correlation is a standard method of evaluating how correlated a signal is with a copy of itself, delayed by a certain interval d. If we have the series x(i) with mean m_x, the auto-correlation of this signal at lag d is:

r = Σ_i [x(i) − m_x]·[x(i−d) − m_x] / sqrt( Σ_i [x(i) − m_x]² · Σ_i [x(i−d) − m_x]² ) (1)

If the auto-correlation is computed for all delays d = 0, 1, 2, …, N−1, then we can write it as a sequence:

r(d) = Σ_i [x(i) − m_x]·[x(i−d) − m_x] / sqrt( Σ_i [x(i) − m_x]² · Σ_i [x(i−d) − m_x]² ),  d = 0, 1, …, N−1 (2)

The method of Spectral Auto-Correlation Peak-to-Valley Ratio (SAPVR) uses the spectral auto-correlation function to determine whether a speech frame is usable or not [1]. A speech segment is "usable" if it contains enough information to identify the target speaker. The power spectrum of voiced speech can be predicted because of its harmonic structure. Given certain input signals, as in Figs. 1, 2 and 3, consider a frame of speech that is voiced. The frequency spectrum X(k) of such a frame will contain harmonically related pulses, and the spectral auto-correlation operation will always result in pulses of decreasing height with increasing lag.
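The normalized auto-correlation of Eqs. (1)–(2) can be sketched directly (the function name is illustrative; the sums run over the overlapping part of the two windows):

```python
import math

def autocorr(x, d):
    """Normalized auto-correlation of the series x at lag d, cf. Eq. (1)."""
    n = len(x)
    m = sum(x) / n
    num = sum((x[i] - m) * (x[i - d] - m) for i in range(d, n))
    den = math.sqrt(sum((x[i] - m) ** 2 for i in range(d, n)) *
                    sum((x[i - d] - m) ** 2 for i in range(d, n)))
    return num / den if den else 0.0

# A sine with a 20-sample period correlates strongly with itself one period later
x = [math.sin(2 * math.pi * i / 20) for i in range(400)]
```

Here autocorr(x, 20) is close to +1 (one full period) while autocorr(x, 10) is close to −1 (half a period), which is exactly the behavior exploited when looking for periodicity.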
If the original magnitude spectrum X(k) contained harmonics at integral multiples of the digital frequency 'p', then the major contribution to the first peak in the spectral autocorrelation after lag zero is due to the product of adjacent harmonics, which occurs at lag 'p'. This is shown in Figs. 4, 5 and 6. That is, the magnitude of the first spectral peak after lag zero for a voiced frame can be approximated as

R(p) = X(p)·X(2p) + X(2p)·X(3p) + … (3)

Other terms will contain less energy and will not contribute significantly to this peak. Note that this parameter contains all the information about the significant harmonics. The next peak occurs at lag '2p' and its amplitude can be approximated as

R(2p) = X(p)·X(3p) + X(2p)·X(4p) + … (4)

By the inherent property of the autocorrelation function, this peak has a smaller amplitude than R(p). If the segment of speech is unvoiced, the spectral autocorrelation will not contain any prominent peaks other than the one at lag zero [2]. The behavior of the spectral autocorrelation under co-channel conditions varies, depending on whether: 1) both the target and interfering speech were voiced,
either one of them were unvoiced or 3.) both of them were unvoiced. When both the speech frames were unvoiced, the spectral autocorrelation did not contain any pulses that were harmonically related to each other. If at least one of the speech frames was voiced, the spectral autocorrelation contained harmonically related pulses as expected. If both the speech frames were voiced, the spectral autocorrelation contained either two distinct trains of pulses that were harmonically related if the speakers pitches were different by approximately 5%, otherwise there was one train of broad pulses One important thing is that the ratio of the first local maximum after the one at lag, to the local minima between this maximum and the next local minimum, is significantly lower than that of the single speaker case. This is due to the fact that there are significant autocorrelation values for lags that are not harmonically related, due to cochannel conditions. This motivates one to define a spectral autocorrelation ratio, which reflects the extent of corruption of a target speech by the interfering speech.
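As an illustrative numerical sketch (not the authors' code), the harmonic-peak behaviour described above can be reproduced directly: for a voiced-like frame whose spectrum contains harmonics spaced p bins apart, the first prominent peak of the spectral auto-correlation after lag zero falls at lag p, while a noise-like (unvoiced) frame shows no comparable peak. The frame length, bin spacing and lag search range below are arbitrary choices for the demonstration.

```python
import numpy as np

def spectral_autocorr(frame):
    """Auto-correlation of the magnitude spectrum |X(k)| (illustrative sketch)."""
    X = np.abs(np.fft.rfft(frame))
    X = X - X.mean()                                  # remove the DC pedestal
    R = np.correlate(X, X, mode='full')[len(X) - 1:]  # lags 0, 1, 2, ...
    return R / R[0]                                   # normalize so R(0) = 1

# Voiced-like frame: harmonics at bins p, 2p, 3p, ... of a 512-point FFT.
n = np.arange(512)
p = 8
voiced = sum(np.sin(2 * np.pi * p * h * n / 512) for h in range(1, 6))
R = spectral_autocorr(voiced)

# The first prominent peak after lag zero sits at the harmonic spacing p,
# as predicted by the adjacent-harmonic products in Eq. (3).
lag = int(np.argmax(R[3:40])) + 3
print(lag)   # 8

# An unvoiced-like frame (white noise) has no comparably strong peak there.
noise = np.random.default_rng(0).standard_normal(512)
print(spectral_autocorr(noise)[3:40].max() < R[lag])
```

Under co-channel conditions, a second speaker raises the values between the harmonic peaks, which is exactly what the ratio defined next is designed to measure.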

The Spectral Autocorrelation Ratio (SAR) parameter is defined as follows:

    SAR = 10·log{ R(p) / R(q) }    (5)

where R(p) is the local maximum of the spectral autocorrelation other than the one at lag 0 (occurring at lag p), and R(q) is the next local maximum that is not harmonically related to the first peak, or the local minimum between p and 2p. The SAR has to be properly interpreted. If the speech of one of the speakers is silent or unvoiced, a peak that is not harmonically related to the peak due to the voicing of the other talker will be substantially lower in amplitude. This is shown in Figs. 7, 8 and 9. This means the SAR will be very high, from which we conclude that the frame of speech is usable. If, however, the target and interfering speech are of comparable magnitude, the SAR approaches zero, which identifies that particular frame as unusable [3]. What if there is a spurious peak of comparable magnitude along with the harmonically related pulses? The SAR will again be low, but the physical interpretation is that a pure tone is mixed with the speech signal; if it is of comparable magnitude, that speech frame is definitely unusable.

II. SIMULATION OF ALGORITHMS

The algorithm for evaluating the Spectral Auto-Correlation Peak-to-Valley Ratio is as follows:
1. Open and load the wave file into memory.
2. Create a vector containing the values of the speech signal x(n), n = 0, 1, 2, ..., N-1.
3. Get the vector length.
4. Create a Hamming window of N points.
5. Evaluate how many windows fit in the vector.
6. For every windowed part of the signal:
   - evaluate the Fourier spectrum;
   - evaluate the spectral autocorrelation
     r(d) = Σ_i [x(i) - m_x]·[x(i-d) - m_x] / ( sqrt(Σ_i [x(i) - m_x]²) · sqrt(Σ_i [x(i-d) - m_x]²) ), d = 0, 1, 2, ..., N-1    (6)
   - estimate the first peak after lag zero and the next peak (or the valley between them);
   - evaluate the SAPVR: SAR = 10·log{ R(p) / R(q) }    (7)
   - set a threshold of 6.3 dB;
   - if the SAPVR is above the threshold, the frame is usable; if not, the frame is unusable, i.e. there is speech overlap and the speaker cannot be identified.

III. QUALITY ESTIMATION AND COMPARISON

For quality estimation purposes:
1. Get an amount of data for which the status of all frames is known (usable and not usable).
2. Run the SAPVR algorithm.
3. With the results of the SAPVR algorithm, evaluate the following formulas:

    DR = (length of truly recognized non-usable segments) / (total length of non-usable segments)    (8)
    FAR = 1 - (length of truly recognized usable segments) / (total length of usable segments)    (9)
    PRC = (1 - FAR) · 100    (10)

DR - Detection Rate; FAR - False Alarm Rate; PRC - Precision of the recognition approach.

TABLE I
DETECTION RATE
Speech of a man      -
Speech of a woman    -
Speech overlap       -

TABLE II
PRECISION OF THE SAPVR ALGORITHM
Speech of a man      .6667
Speech of a woman    .749
Speech overlap       -

In the following figures, simple signals are shown for visualization purposes. The authors have carried out an extensive search with many male and female voices.

Fig. 1. Input signal: speech of a man

Fig. 2. Input signal: speech of a woman
Fig. 3. Input signal: speech overlap
Fig. 4. Spectral auto-correlation: speech of a man
Fig. 5. Spectral auto-correlation: speech of a woman
Fig. 6. Spectral auto-correlation: speech overlap
Fig. 7. Output signal and usable frames: speech of a man

Fig. 8. Output signal and usable frames: speech of a woman
Fig. 9. Output signal and usable frames: speech overlap

IV. CONCLUSION

In their future work, the authors aim to simulate the other methods, devise appropriate algorithms for them, and compare the results of the different methods.

REFERENCES
[1] K. R. Krishnamachari, R. E. Yantorno, D. S. Benincasa, S. J. Wenndt, "Spectral autocorrelation ratio as a usability measure of speech segments under co-channel conditions", IEEE International Symposium on Intelligent Signal Processing and Communication Systems, ISPACS.
[2] R. E. Yantorno, "A study of the spectral autocorrelation peak valley ratio (SAPVR) as a method for identification of usable speech and detection of co-channel speech", AFOSR Rome Labs Summer Report.
[3] M. H. Moattar, M. M. Homayounpour, "Speech Overlap Detection Using Features and Its Applications in Speech Indexing", Information and Communication Technologies, 2006, ICTTA '06, 2nd.

FPGA Implementation of the 2D-DCT/IDCT for Motion Picture Compression

Rastislav J.R. Struharik and Ivan Mezei

Abstract - In this paper, architectures for the 2D DCT/IDCT (Discrete Cosine Transform, Inverse Discrete Cosine Transform) are presented. These architectures were developed for FPGA implementation. First, algorithms for efficient 2D DCT/IDCT calculation are presented. Using these algorithms, micro-architectures for efficient FPGA implementation are developed. These micro-architectures are then coded in VHDL and synthesized using the Xilinx Foundation ISE development system. Finally, the maximum operating frequency and the resources needed for the implementation of these cores are reported for several families of Xilinx FPGA ICs.

Keywords - Image Compression, JPEG, 2D DCT/IDCT, VHDL, FPGA.

I. INTRODUCTION

Compression is the process of reducing the size of the data sent, thereby reducing the bandwidth required for the digital representation of a signal. Many inexpensive video and audio applications are made possible by the compression of signals. Compression technology can result in reduced transmission time due to less data being transmitted. It also decreases storage requirements because there is less data. However, signal quality, implementation complexity, and the introduction of communication delay are potential negative factors that should be considered when choosing a compression technology. Video and audio signals can be compressed because of the spatial, spectral, and temporal correlation inherent in these signals. Spatial correlation is the correlation between neighboring samples in an image frame. Temporal correlation is the correlation between samples in different frames at the same pixel position. Spectral correlation is the correlation between samples of the same source from multiple sensors. There are two categories of compression: lossy and lossless.
In medical system applications, image losses can translate into costly medical mistakes; therefore, lossless compression methods are used. Fortunately, the majority of video and image processing applications do not require the reconstructed data to be identical to the original data. In such applications, lossy compression schemes can be used to achieve higher compression ratios. The Discrete Cosine Transform (DCT) [1] is a lossy compression scheme where an N x N image block is transformed from the spatial domain to the DCT domain. The DCT decomposes the signal into spatial frequency components called DCT coefficients. The lower-frequency DCT coefficients appear toward the upper left-hand corner of the DCT matrix, and the higher-frequency coefficients are in the lower right-hand corner. Because the human visual system is less sensitive to errors in high-frequency coefficients than in low-frequency ones, the higher-frequency components can be more coarsely quantized, or even completely discarded. This operation leads to significant improvement of the compression ratio, thereby reducing the amount of data that needs to be transmitted or stored, with only moderate degradation of the original picture quality. For most image compression standards, N = 8. An 8 x 8 block size does not have significant memory requirements and, furthermore, a block size greater than 8 x 8 does not offer significantly better compression. The DCT is image independent and can be performed with fast algorithms.

Rastislav J.R. Struharik is with the Faculty of Technical Sciences, Trg Dositeja Obradovića 6, Novi Sad, Serbia, rasti@eunet.yu. Ivan Mezei is with the Faculty of Technical Sciences, Trg Dositeja Obradovića 6, Novi Sad, Serbia.
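As a floating-point numerical illustration of the transform just described (a sketch, not the hardware cores presented later), an orthonormal 8-point DCT matrix can be built directly from the cosine definition; for a smooth block the energy indeed collects in the upper-left corner, and the inverse transform restores the block exactly, so loss enters only through quantization:

```python
import numpy as np

N = 8
# Orthonormal 8-point DCT matrix: C[i][j] = K * cos((2j+1)*i*pi / (2N)),
# with K = sqrt(1/N) for row i = 0 and sqrt(2/N) otherwise.
C = np.array([[np.sqrt((1.0 if i == 0 else 2.0) / N)
               * np.cos((2 * j + 1) * i * np.pi / (2 * N))
               for j in range(N)] for i in range(N)])

X = np.outer(np.linspace(50, 200, N), np.ones(N))  # a smooth 8x8 block
Y = C @ X @ C.T                                    # forward 2D DCT

# Energy concentrates in the low-frequency (upper-left) corner.
ratio = np.sum(Y[:2, :2] ** 2) / np.sum(Y ** 2)
print(ratio > 0.99)            # True for this smooth block

X_rec = C.T @ Y @ C            # inverse 2D DCT
print(np.allclose(X, X_rec))   # True: lossless before quantization
```

The hardware cores described below compute the same row-column decomposition with fixed-point, scaled-integer coefficients instead of floating point.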
Examples of standards using the DCT:
- Dolby AC-2 & AC-3: 1-D DCT (and 1-D Discrete Sine Transform)
- JPEG (still images): 2-D DCT spatial compression
- MPEG-1 & MPEG-2: 2-D DCT plus motion compensation
- H.261 and H.263: moving image compression for video conferencing and video telephony

Much of the processing required to encode or decode video using these standards is taken up by calculating the DCT and/or IDCT. An efficient hardware block dedicated to these functions will improve the performance of a digital video system considerably.

II. EFFICIENT ALGORITHMS FOR THE 2D DCT/IDCT CALCULATION

A. Algorithm for the Efficient 2D DCT Calculation

The algorithm used for the calculation of the 2D DCT is based on the following equation:

    Y_pq = (c(p)·c(q)/4) · Σ_{m=0}^{M-1} Σ_{n=0}^{N-1} X_mn · cos( π·(2m+1)·p / 2M ) · cos( π·(2n+1)·q / 2N )    (1)

where:

    c(p) = 1/√2 for p = 0, c(p) = 1 otherwise
    c(q) = 1/√2 for q = 0, c(q) = 1 otherwise    (2)

Efficient implementation of this equation is possible because the 2D DCT can be separated into two 1D DCTs [2]. First, the 1D DCTs of the rows are calculated, and then the 1D DCTs of the columns. The 1D DCT coefficients for the rows and columns can be calculated by separating equation (1) into a row part and a column part. As stated before, for most image compression standards M = N = 8. Using vector processing, the output Y of an 8 x 8 DCT for input X is given by the following equation:

    Y = C · X · C^t    (3)

where C contains the cosine coefficients and C^t is its transpose. This equation can also be written as Y = C·Z, where Z = X·C^t. The coefficients of the C and C^t matrices can be calculated using the following equations:

    C_ij = K · cos( (2j+1)·i·π / 2N ), K = 1/√N for i = 0, K = √(2/N) otherwise
    C^t_ij = K · cos( (2i+1)·j·π / 2M ), K = 1/√M for j = 0, K = √(2/M) otherwise    (4)

Values for the scaled and rounded coefficients are presented in the following equation:

    C = | 23170  23170  23170  23170  23170  23170  23170  23170 |
        | 32138  27246  18205   6393  -6393 -18205 -27246 -32138 |
        | 30274  12540 -12540 -30274 -30274 -12540  12540  30274 |
        | 27246  -6393 -32138 -18205  18205  32138   6393 -27246 |
        | 23170 -23170 -23170  23170  23170 -23170 -23170  23170 |
        | 18205 -32138   6393  27246 -27246  -6393  32138 -18205 |
        | 12540 -30274  30274 -12540 -12540  30274 -30274  12540 |
        |  6393 -18205  27246 -32138  32138 -27246  18205  -6393 |    (5)

and C^t is its transpose. The structure of the 2D DCT core using this decomposition is presented in Fig. 1.

Fig. 1. Efficient 2D DCT Implementation

Let us explain the way the Z matrix is calculated. Each element in the first row of the input matrix X is multiplied by the corresponding element in the first column of matrix C^t, and the products are added together to get the first value Z_00 of the intermediate matrix Z. To get Z_01, each element of row zero of X is multiplied by the corresponding element in the second column of C^t and the products are added, and so on. The calculation can be implemented using eight multipliers, with the coefficients stored in ROMs. But after a closer examination of the coefficients of the C^t matrix, there is a way to halve the number of multipliers. When the equation Z = X·C^t is written in scalar form, we get the following equations:

    Z_k0 = 23170·(x_k0 + x_k1 + x_k2 + x_k3 + x_k4 + x_k5 + x_k6 + x_k7)
    Z_k1 = 32138·(x_k0 - x_k7) + 27246·(x_k1 - x_k6) + 18205·(x_k2 - x_k5) + 6393·(x_k3 - x_k4)
    Z_k2 = 30274·(x_k0 + x_k7) + 12540·(x_k1 + x_k6) - 12540·(x_k2 + x_k5) - 30274·(x_k3 + x_k4)
    Z_k3 = 27246·(x_k0 - x_k7) - 6393·(x_k1 - x_k6) - 32138·(x_k2 - x_k5) - 18205·(x_k3 - x_k4)
    Z_k4 = 23170·(x_k0 + x_k7) - 23170·(x_k1 + x_k6) - 23170·(x_k2 + x_k5) + 23170·(x_k3 + x_k4)
    Z_k5 = 18205·(x_k0 - x_k7) - 32138·(x_k1 - x_k6) + 6393·(x_k2 - x_k5) + 27246·(x_k3 - x_k4)
    Z_k6 = 12540·(x_k0 + x_k7) - 30274·(x_k1 + x_k6) + 30274·(x_k2 + x_k5) - 12540·(x_k3 + x_k4)
    Z_k7 = 6393·(x_k0 - x_k7) - 18205·(x_k1 - x_k6) + 27246·(x_k2 - x_k5) - 32138·(x_k3 - x_k4)
    k = 0, 1, ..., 7    (6)

We can see that, for example, the input values x_k0 and x_k7 are always multiplied by the same coefficient; only the sign can change. This can be efficiently exploited to reduce the number of multipliers, as shown in Fig. 2.

Fig. 2. Efficient 1D DCT Implementation

Using the toggle signal, the adder/subtractor modules can be configured to operate as adders or subtractors depending on the current need. All 64 values of the matrix Z can be calculated in 64 clock cycles. These values are stored in the RAM shown between the two 1D DCT blocks in Fig. 1. Using these stored values as input, the second 1D DCT is performed, resulting in the matrix Y. The structure of this second 1D DCT block is similar to the structure shown in Fig. 2. Matrix Y holds the values of the 2D DCT transform of the input matrix X.

B. Algorithm for the Efficient 2D IDCT Calculation

Now let us examine the problem of calculating the 2D IDCT. Using the 2D DCT matrix Y, the original input matrix X can be calculated in the following way [3]:

    X = C^t · Y · C    (7)

The matrices C and C^t are identical to those of the 2D DCT. We can see that the only difference between Eqs. (3) and (7) is the order in which the matrices C and C^t are applied. Although this seems to be only a minor difference, it turns out to be a significant one, because now we cannot exploit the coefficient symmetries as in the case of the 2D DCT. Once more, Eq. (7) can be split into two simpler equations, X = C^t·Z and Z = Y·C. Because of the different order of the matrix multiplications, we cannot find a similar symmetry between the coefficients during the 1D IDCT operations. This means that every 1D IDCT block now uses eight multipliers. The structure of the 1D IDCT block is presented in Fig. 3.

Fig. 3. Structure of the 1D IDCT Module

The basic structure and operation of the 2D IDCT core are identical to those of the 2D DCT core presented in Fig. 1.

C. Algorithm for the Quantization/Dequantization Operations

As stated before, to improve the compression ratio it is common practice to perform quantization of the DCT components. Quantization is the process of selectively discarding visual information without a significant loss in the visual effect. Quantization reduces the number of bits needed to store an integer value by reducing the precision of the integer. Each DCT component is divided by a separate quantization coefficient and rounded to the nearest integer. The larger the quantization coefficient (i.e., the coefficient weighting), the smaller the resulting value and the fewer the bits needed to express the DCT component. In the reverse process, the fractional bits are "rounded" and recovered as zeros, constituting a precision loss relative to the original number. There are several different recommended procedures for performing the quantization. We have opted for the procedure used in the MPEG-2 compression standard [4]. Since we used a 12-bit representation of the DCT components, the DC value was not quantized. For the AC components, the following formula is used to determine the value of the quantization factor:

    QDCT_ij = (32 · DCT_ij) / (2 · Qmatrix_ij · Qscale)    (8)

Qmatrix is the matrix of the quantization coefficients (9), with values that depend on whether luminance (Qmatrix_luma) or chrominance (Qmatrix_chroma) components are being quantized. The value of the Qscale parameter enables easy modification of the quantization factors used during quantization. Dequantization is performed using the inverse of the expression in Eq. (8).

III. ARCHITECTURE OF THE DEVELOPED 2D DCT/IDCT CORES

After reviewing the efficient algorithms for the 2D DCT/IDCT calculations, we can now present the basic architecture of the two developed cores, together with their interfaces. Fig. 4 presents the interfaces of the 2D DCT and 2D IDCT cores.

Fig. 4. Interface of the 2D DCT and 2D IDCT Cores

As can be seen from Fig. 4, the interface of both cores is the same. Input signal q_tab_i[5:0] is used to define the required

value of the Qscale parameter in the quantization and dequantization operations. Input signals cb_en_i, cr_en_i and y_en_i are used to specify the type of the current 8x8 block. Both cores assume that the picture is represented in the YCbCr format. Signal mb_trig_i is a global synchronizing signal used to indicate the start of the next 8x8 block. Input signal data_i[7:0] (data_i[11:0] in the case of the 2D IDCT core) holds the pixel values (DCT component values in the case of the 2D IDCT core). The meaning of every output signal is identical to that of the input signal with the same name. Fig. 5 presents the basic architecture of the 2D DCT/IDCT cores.

Fig. 5. Architecture of the 2D DCT (top) and 2D IDCT (bottom) Cores

In every core there are three pipeline stages. This enables efficient calculation of the DCT or IDCT values, requiring only 64 clock cycles per 8x8 block. This is the maximum speed at which both cores can operate, but if needed they can work at a slower speed. Fig. 6 illustrates typical waveforms of the characteristic signals.

Fig. 6. Typical waveforms of the interface signals for the 2D DCT/IDCT Cores

The previous figure illustrates the interface signal waveforms for a 256-clock-cycle duration of one 8x8 block.

IV. SYNTHESIS RESULTS

Both cores were coded in VHDL and synthesized using the Xilinx Foundation ISE software. The following tables present the obtained results in terms of core size (number of slices required to implement the core) and maximum operating frequency for several Xilinx FPGA families.

TABLE I
SYNTHESIS RESULTS (OPTIMIZATION GOAL: SIZE)

FPGA Family     2D DCT size (# slices)   Speed    2D IDCT size (# slices)   Speed
Spartan-IIE     -                        - MHz    -                         - MHz
Spartan-3       -                        - MHz    -                         43.4 MHz
Virtex          -                        - MHz    -                         - MHz
Virtex-II Pro   -                        - MHz    -                         - MHz

TABLE II
SYNTHESIS RESULTS (OPTIMIZATION GOAL: SPEED)

FPGA Family     2D DCT size (# slices)   Speed    2D IDCT size (# slices)   Speed
Spartan-IIE     -                        - MHz    -                         - MHz
Spartan-3       -                        - MHz    -                         - MHz
Virtex          -                        - MHz    -                         - MHz
Virtex-II Pro   -                        - MHz    -                         - MHz

Significantly smaller core sizes in the case of the Spartan-3 and Virtex-II Pro FPGA families are due to the fact that these families have dedicated multipliers that can be used to implement all the multiplications required in the DCT/IDCT calculations. In contrast, the Spartan-IIE and Virtex families don't have dedicated multipliers embedded on the chip, so every multiplier has to be implemented using general-purpose logic resources, resulting in larger core sizes.

V. CONCLUSION

In this paper, the hardware implementation of 2D DCT/IDCT cores was investigated. Efficient algorithms for the calculation of the 2D DCT/IDCT values were proposed. These algorithms were implemented in hardware using FPGA technology. Using the Xilinx Foundation ISE software, synthesis results for several available FPGA families were reported.

REFERENCES
[1] M. Popović, Digitalna Obrada Signala, Beograd, Nauka, 1997.
[2] Xilinx Application Note XAPP610, Video Compression Using DCT.
[3] Xilinx Application Note XAPP611, Video Decompression Using IDCT.
[4] Xilinx Application Note XAPP615, Quantization.

Minutiae-based Algorithm for Automatic Fingerprint Identification

Edin H. Mulalić, Stevica S. Cvetković and Saša V. Nikolić

Abstract - This paper describes a complete fingerprint identification algorithm that uses minutiae for the representation and matching of fingerprints. The presented algorithm consists of three main steps: 1) image enhancement, 2) minutiae extraction and 3) minutiae matching. A principal description of each step is given, concluded with a brief description of the developed software with test examples.

Keywords - Fingerprint identification, Image processing.

I. INTRODUCTION

Among all biometric characteristics, fingerprints are among the oldest, the most widely used and highly reliable. The most comprehensive explanation of all aspects of fingerprint identification can be found in [1]. Most of the methods for automatic fingerprint identification presented in the literature have a global structure similar to our algorithm. At the beginning, image enhancement is applied in order to improve the quality of the input fingerprint image. Then the algorithm extracts the local characteristics of the fingerprint: the minutiae. Finally, minutiae matching is performed by comparing the minutiae from the input fingerprint with a set of one or more template fingerprints stored in a database. The rest of the paper gives a principal description of each step of the algorithm.

II. IMAGE ENHANCEMENT

The performance of minutiae extraction algorithms relies heavily on the quality of the input fingerprint images. Due to skin conditions, sensor noise and incorrect finger pressure, a significant percentage of fingerprint images (approximately 10%) are of poor quality. In general, there are two types of degradation that can be observed in fingerprint images:
- the ridges are not strictly continuous, i.e. the ridges have small breaks (gaps);
- due to the presence of noise which links ridges, parallel ridges are not well separated.
The most widely used technique for fingerprint image enhancement is based on contextual filters. In conventional image filtering, only a single filter kernel is used for convolution throughout the image. In contextual filtering, the filter characteristics change according to the local context, where the context is often defined by the local ridge orientation and the local ridge frequency. An appropriate contextual filter should possess the following characteristics:
- It should provide a low-pass (averaging) effect along the ridge direction, with the aim of linking small gaps and filling impurities due to pores or noise.
- It should have a band-pass (differentiating) effect in the direction orthogonal to the ridges, in order to increase the discrimination between ridges and valleys and to separate parallel linked ridges.

One of the first uses of contextual filters for fingerprint image enhancement was by O'Gorman and Nickerson [2] and Mehtre [3]. Since then, numerous filtering methods have been proposed in the literature, both in the spatial and in the frequency domain. Hong et al. [4] proposed an effective method based on Gabor filters, which was reported to achieve good performance. In order to improve the performance of that method, Yang et al. [5] modified the method described in [4]. Although they reported better accuracy, the computational complexity of their algorithm is too high. The method used in this paper is similar in principle to the one described in [4]. The method assumes that parallel ridges and valleys exhibit an ideal sinusoidal plane wave associated with some noise.

Edin H. Mulalić is a student at the Faculty of Electronic Engineering, University of Niš, Aleksandra Medvedeva 14, 18000 Niš, Serbia. Stevica S. Cvetković is a PhD student at the Faculty of Electronic Engineering, University of Niš, Aleksandra Medvedeva 14, 18000 Niš, Serbia. Saša V. Nikolić is with the Faculty of Electronic Engineering, University of Niš, Aleksandra Medvedeva 14, 18000 Niš, Serbia.
In other words, the 1-D signal orthogonal to the local orientation is approximately a digital sinusoidal wave. After convolving the image with the corresponding Gabor filter, tuned to the specific local ridge orientation and frequency, the image can be efficiently enhanced. An even-symmetric two-dimensional Gabor filter has the following form in the spatial domain:

    G(x, y; θ, f, σ_x, σ_y) = exp{ -(1/2)·[ x_θ²/σ_x² + y_θ²/σ_y² ] } · cos(2π·f·x_θ)    (1)
    x_θ = x·cos(90°-θ) + y·sin(90°-θ) = x·sin θ + y·cos θ
    y_θ = -x·sin(90°-θ) + y·cos(90°-θ) = -x·cos θ + y·sin θ

where [x_θ, y_θ] are the coordinates of [x, y] after a clockwise rotation of the Cartesian axes by an angle of (90°-θ). The parameters of the previous equation are:
a) θ is the local orientation of the ridges. It is the angle of the ridge with the x-axis, computed by examining the gradients of the pixel intensities in the x and y directions within a local block (16x16) of the image.
b) f is the local frequency of the sinusoidal plane wave. It corresponds to the reciprocal of the inter-ridge distance in fingerprint images. Although some algorithms in the literature

calculate f for each block of the image, an empirical value can be used [6]. Therefore, the frequency is set to a fixed empirical value.
c) σ_x, σ_y are the standard deviations of the Gaussian envelope along the x and y axes, respectively. Based on empirical data [4], these values were set to σ_x = σ_y = 4.

To apply Gabor filters to an image, the four parameters θ, f, σ_x, σ_y must be specified for each pixel. The values of f, σ_x and σ_y are based on empirical data, so only θ has to be calculated for each block of the image. To make the enhancement faster, instead of computing the best-suited contextual filter for each pixel online, a set of eight filters (a "filter bank") for the eight discrete values θ_k = k·π/8, k = 0, 1, ..., 7, is created and stored a priori. Then, after calculating the local orientation, which is discretized to the closest value θ_k, each pixel of the image is convolved with the corresponding filter from the precomputed Gabor filter bank.

III. MINUTIAE EXTRACTION

Most of the proposed minutiae extraction algorithms operate on binary images. The binary images obtained by the binarization process are usually submitted to a thinning phase which reduces the ridge-line thickness to one pixel. Some authors have proposed minutiae extraction approaches that work directly on the gray-scale images, without binarization and thinning, mainly for the following reasons [1]: a significant amount of information may be lost during the binarization process; binarization and thinning are time consuming; thinning may introduce a large number of spurious minutiae; in the absence of an a priori enhancement step, most of the binarization techniques do not provide satisfactory results when applied to low-quality images. But most of these drawbacks can be avoided by using efficient and effective algorithms for image enhancement, binarization, thinning, false minutiae recognition and matching.
On the other side, binarization and thinning provide numerous advantages in the minutiae recognition phase. An obvious motivation for applying the thinning process is to simplify the recognition of minutiae and of their positions. The method for minutiae extraction presented in this paper is based on images processed by binarization and thinning algorithms. For binarization, a global threshold algorithm is used. Among the many thinning algorithms described in [7], we chose to implement the Nagendraprasad-Wang-Gupta iterative algorithm [8]. The most popular method for minutiae recognition is the Crossing Number (CN) approach; the crossing number in a binary image was defined by Arcelli and Baja in 1984. Mathematically, the crossing number CN(p) of a pixel p can be calculated as half the sum of the absolute differences between pairs of adjacent pixels in the 8-neighborhood of p:

    CN(p) = 0.5 · Σ_{i=0}^{7} | val(p_i) - val(p_((i+1) mod 8)) |    (2)

where p is the central pixel, p_0, p_1, ..., p_7 is the ordered set of pixels describing the 8-neighborhood of p, and val(p_i) ∈ {0, 1}. Using CN(p), each pixel can be classified into one of the five categories described in Table 1.

    CN(p)   Type of pixel
    0       Isolated point
    1       End of ridge
    2       Continuing ridge point
    3       Bifurcation point
    4       Crossing point

Table 1. Classification of pixels according to their crossing number

False Minutiae Rejection

The presence of noise in poor-quality images and imperfect thinning are the two main causes of extraction errors: the dropping of true minutiae and the production of false minutiae. Therefore, additional post-processing is necessary to filter the extracted minutiae and keep only the set of true minutiae. Although the main goal of the filtering process is to remove false minutiae, keeping true minutiae is more important for reliable minutiae matching. There are numerous methods to achieve that goal, and most of them are based on sets of heuristic rules.
Examples of those rules are:
- if the break in a ridge is shorter than a threshold value and no other ridge passes through it, the break is connected;
- if the angle between the branch and the trunk is greater than 70° and less than 110°, the branch is removed;
- short ridges are removed on the basis of the relationship between the ridge length and the average distance between ridges;
- a ridge ending point that is connected to a bifurcation point and is below a certain threshold distance is eliminated;
- two bifurcations are eliminated if the distance between them is less than a threshold value.

One effective and efficient method based on a set of heuristic rules is presented in [9]. The setting of the parameters, mostly thresholds, should be obtained dynamically; parameters adaptive to the image usually perform better than fixed values. The experimental results given in this paper are obtained using an algorithm for filtering based on examining the local neighborhood around each potential minutia point, proposed by Tico and Kuosmanen [10].
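The crossing-number classification of Eq. (2) and Table 1 can be sketched in a few lines (an illustrative sketch, not the authors' implementation; `nbrs` is assumed to list the eight neighborhood pixels of the thinned binary image in circular order):

```python
def crossing_number(nbrs):
    """CN(p): half the sum of absolute differences between adjacent
    pixels of the 8-neighborhood, taken in circular order (Eq. (2))."""
    return sum(abs(nbrs[i] - nbrs[(i + 1) % 8]) for i in range(8)) // 2

# One ridge pixel entering the neighborhood -> ridge ending (CN = 1).
print(crossing_number([0, 0, 0, 1, 0, 0, 0, 0]))  # 1
# Three separate ridge branches meeting at p -> bifurcation (CN = 3).
print(crossing_number([1, 0, 1, 0, 1, 0, 0, 0]))  # 3
```

Pixels with CN = 1 and CN = 3 are the ridge endings and bifurcations that the false-minutiae filtering rules above are then applied to.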

68 IV. MINUTIAE MATCHING Fingerprint matching is a process of comparing an input fingerprint with a set of one or more template fingerprints. The return value is either a degree of similarity (for example, a score between and ) or a binary decision (matched/not matched). There are three main families of fingerprint matching techniques: correlation based matching, minutiae based matching and ridge-feature based matching []. Since it is widely accepted that minutiae set is the most discriminating and reliable feature, in modern systems for automatic fingerprints identification, that approach is the most popular. But, those are not the only reasons for popularity of minutiae based approach. Prior to the matching process, feature information has to be extracted from all the template images. The amount of extracted information from templates has to be low because of saving memory. Also, it has to be possible to efficiently traverse the structure which describes extracted information in order to check for necessary similarities. Those additional demands also favor minutiae based approach. In some applications it is possible to use hybrid approach, combining two or even all three approaches. This paper is focused on minutiae based approach. Each minutia may be described by a number of attributes, including its location in the fingerprint image, orientation, type (e.g., ridge termination or ridge bifurcation), a weight based on the quality of the fingerprint image in the neighborhood of the minutia, and so on. Local minutia structure can be described in many different ways. One simple and most obvious would be a triplet ( x, y,ψ), where x and y are minutia location coordinates and Ψ is the local ridge direction in given coordinate system. Yang and Verbauwhede [] proposed derived local structure as well as an algorithm based on it and experimental results given in this paper are based on implementation of that algorithm. 
The algorithm is described in the rest of this section.

The local structure L_M of a minutia M is described by Eq. (3):

L_M = {(d_1, φ_1, ϑ_1), (d_2, φ_2, ϑ_2), ..., (d_N, φ_N, ϑ_N), Ψ}    (3)

where N is the number of nearest neighbors of minutia M, d_i (i = 1, ..., N) describes the distance between the selected minutia M and its i-th nearest neighbor, φ_i (i = 1, ..., N) is the related radial angle between M and its i-th nearest neighbor, ϑ_i (i = 1, ..., N) represents the related position angle of the i-th nearest neighbor, and Ψ is the local ridge direction of minutia M. An example is illustrated in Fig. 1.

If (x_M, y_M) denotes the x and y coordinates of minutia M, (x_i, y_i) denotes the x and y coordinates of its i-th nearest neighbor, and diff(α, β) calculates the difference between angles α and β and converts the result to the range [0, 2π), then the parameters (d_i, φ_i, ϑ_i) (i = 1, ..., N) can be calculated using the following formulas:

d_i = sqrt((x_i - x_M)^2 + (y_i - y_M)^2),
φ_i = diff(Ψ_i, Ψ),                                          (4)
ϑ_i = diff(arctan((y_i - y_M)/(x_i - x_M)), Ψ).

The proposed matching algorithm is based on calculating a minutia similarity factor and an image similarity factor using a vector of threshold values (Th_d, Th_φ, Th_ϑ, Th_Ψ, Th_MSF, Th_ISF). Let M be a minutia from the input (query) image with local structure L_M = {(d_1, φ_1, ϑ_1), ..., (d_N, φ_N, ϑ_N), Ψ}, and let M' be a minutia from the template (database) image described by its local structure L_M' = {(d'_1, φ'_1, ϑ'_1), ..., (d'_N, φ'_N, ϑ'_N), Ψ'}. The minutia similarity factor (MSF) represents the degree of similarity between the two minutiae M and M'. It can be calculated by comparing their local structures L_M and L_M'. First of all, if |Ψ - Ψ'| > Th_Ψ, then MSF = 0, M and M' are not matched, and another pair of minutiae can be checked. Otherwise, it is necessary to investigate the similarity of the neighborhoods of M and M'. If the i-th neighbor of minutia M and the j-th neighbor of minutia M' satisfy the set of conditions described by Eq. (5):

|d_i - d'_j| < Th_d,
|φ_i - φ'_j| < Th_φ,                                         (5)
|ϑ_i - ϑ'_j| < Th_ϑ,

then those two neighbors can be marked as "matched neighbors". MSF is equal to the total number of "matched neighbors". If MSF > Th_MSF, then minutiae M and M' represent a "matched minutiae pair". After comparing all minutiae from the input image with the minutiae from the template image, the total number of matched minutiae pairs (NUM_MMP) is obtained. The image similarity factor is calculated using Eq. (6):

ISF = NUM_MMP / max(NUM_input, NUM_template)    (6)

Fig. 1. Example of minutiae local structure
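The matching procedure above can be sketched in Python; the data structures and the threshold-dictionary keys are illustrative assumptions, not from the paper:

```python
def msf(Lm, Lm_t, th):
    """Minutia similarity factor: count "matched neighbors" between the
    local structures of a query minutia and a template minutia.
    Lm / Lm_t are (neighbors, psi) pairs, where neighbors is a list of
    (d, phi, theta) triples; th holds the thresholds (hypothetical names)."""
    (nb_q, psi_q), (nb_t, psi_t) = Lm, Lm_t
    # global ridge-direction check: reject early if directions differ too much
    if abs(psi_q - psi_t) > th["psi"]:
        return 0
    matched = 0
    for d_q, p_q, t_q in nb_q:
        for d_t, p_t, t_t in nb_t:
            # the three per-neighbor matching conditions
            if (abs(d_q - d_t) < th["d"] and abs(p_q - p_t) < th["phi"]
                    and abs(t_q - t_t) < th["theta"]):
                matched += 1
                break  # each query neighbor is matched at most once
    return matched

def isf(num_mmp, num_input, num_template):
    # image similarity factor: matched pairs over the larger minutiae count
    return num_mmp / max(num_input, num_template)
```

With MSF computed for every minutiae pair, pairs whose MSF exceeds its threshold are counted as matched, and the image-level decision then compares the resulting ISF against its own threshold.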

Minutiae-based Algorithm for Automatic Fingerprint Identification

where NUM_input and NUM_template are the total numbers of minutiae in the input and the template image, respectively. Two images are considered to represent the same fingerprint if ISF > Th_ISF.

V. EXPERIMENTAL RESULTS

The presented algorithm was implemented in VS.NET (C#). All experiments were done on a Pentium IV (3 GHz, GB RAM) machine. Instead of using images obtained by our own sensors, we used the FVC DB_ database []. We used 4, 5, 6 and 7 nearest neighbors for the minutiae matching step of the algorithm. The FAR (False Acceptance Rate) and FRR (False Rejection Rate) factors had satisfactory values only when 5- and 6-neighborhoods were used. Those results are consistent with the results presented in []. Some results of the created testing software are presented in the accompanying figure.

VI. CONCLUSION AND FUTURE WORK

The created software achieved excellent results with good-quality images, but had some problems in recognizing poor-quality images. Those problems were identified, and some ideas for solving them are: a) using a dynamically calculated frequency improves the image enhancement process; improving this part of the system will allow it to work with lower-quality images; b) grouping images by fingerprint type improves the matching process by reducing the number of images which have to be compared with the input image; this will improve the speed of the software. The mentioned improvements will be the focus of our future work.

REFERENCES

[] D. Maltoni, D. Maio, A. K. Jain and S. Prabhakar, Handbook of Fingerprint Recognition, New York: Springer-Verlag, 3.
[] L. O'Gorman and J. V. Nickerson, "An approach to fingerprint filter design", Pattern Recognition (), 9-38, 989.
[3] B. M. Mehtre, "Fingerprint image analysis for automatic identification", Machine Vision Appl. (6), 4-39, 993.
[4] L. Hong, Y. Wan and A. Jain, "Fingerprint Image Enhancement: Algorithm and Performance Evaluation", IEEE Trans. Pattern Analysis and Machine Intelligence, Vol., No.
8, 998.
[5] J. Yang, L. Liu, T. Jiang and Y. Fan, "A modified Gabor filter design method for fingerprint image enhancement", Pattern Recognition Letters, 4:85-87, 3.
[6] A. Ross, A. K. Jain and J. Reisman, "A Hybrid Fingerprint Matcher", Pattern Recognition, Vol. 36, No. 7, 3.
[7] M. Couprie, "Note on fifteen 2D parallel thinning algorithms", Internal Report, Institut Gaspard Monge, 6.
[8] M. V. Nagendraprasad, P. S. P. Wang and A. Gupta, "Algorithms for thinning and rethickening binary digital patterns", Digital Signal Processing 3, 97, 993.
[9] H. Lu, X. Jiang and Wei-Yun Yau, "Effective and Efficient Fingerprint Image Postprocessing", Proc. ICARCV, Singapore, Dec.
[] M. Tico and P. Kuosmanen, "An algorithm for fingerprint image postprocessing", in Proceedings of the Thirty-Fourth Asilomar Conference on Signals, Systems and Computers, November.
[] S. Yang and I. Verbauwhede, "A secure fingerprint matching technique", Proc. ACM Workshop on Biometrics: Methods and Applications, November 3.
[] FVC - Fingerprint Verification Competition.

Fig. Example of a tested fingerprint image (original, image after enhancement, image after minutiae extraction)

Performances of the Exponential Sinusoidal Audio Model

Zoran N. Milivojevic, Predrag Rajkovic and Sladjana M. Milivojevic

Abstract - In the first part of this paper, TLS and Hankel TLS algorithms for the determination of the parameters of the sinusoidal and the exponential sinusoidal model of audio and speech signals are described. In the second part, the performances of the exponential sinusoidal model are determined, and a comparative analysis of the models is performed for segments with outstanding and with poorly distinguished transience. Tabular data and time and frequency diagrams are used in the analysis.

Keywords - Exponential sinusoidal modeling, TLS algorithm.

I. INTRODUCTION

The sinusoidal model (SM) is suitable for representing the harmonic structure of speech and audio segments. It is especially convenient in speech analysis/synthesis [,], speech modification [3], speech coding [4,5] and audio coding [6,7]. The sinusoidal model for the speech and audio signal s(n) can be presented in the following form:

s(n) = SUM_{k=1..K} a_k(n) sin(2π f_k(n) n + φ_k(n)).    (1)

In the sinusoidal model the signal s(n) is presented as a sum of K components with time-variable amplitude a_k, frequency f_k and phase φ_k. These parameters are often constant or slowly varying over the analysis time (the duration of the analyzed sequence, i.e. segment). Depending on the signal, the length of the quasi-stationary segment varies from several ms to several hundred ms [8]. Speech and audio signals often contain segments with superimposed noise as well as segments with transient sound. In such cases the model described by (1) does not give satisfying results. In [9] a model for presenting the audio signal is shown, created by enlarging the model described by (1) with the noise η(n) and a transient segment τ(n):

s(n) = SUM_{k=1..K} a_k(n) sin(2π f_k(n) n + φ_k(n)) + η(n) + τ(n).    (2)
In standards for audio signal coding, such as MPEG- LI, the use of model (2) is not explicitly foreseen; a subband coding structure [] is used instead. Subband coding is efficient for coding signals with superimposed noise in a wide frequency range. However, when coding signals with transient segments, the efficiency is considerably smaller. Generally, transient sound is difficult to model by means of the sinusoidal model. Better modeling can be achieved by enlarging the number of model parameters, which reduces the coding efficiency. For that reason, some coding schemes first detect transient segments and then select a coding structure with enlarged resolution in the time domain. One way of solving this problem is audio signal modeling and coding using a superposition of sinusoids with slow exponential changes of amplitude in time and quasi-stationary noise η(n):

s(n) = SUM_{k=1..K} a_k(n) e^(d_k(n) n) sin(2π f_k(n) n + φ_k(n)) + η(n),    (3)

where d_k is the damping factor of the k-th component. The exponential sinusoidal model (ESM) is described in []. Its efficiency in modeling transient segments is presented in [,3]. The determination of the model parameters (amplitude a_k, frequency f_k, phase φ_k and damping factor d_k) is numerically complex and demands a lot of calculation time. In this paper, algorithms for determining the parameters of the exponential sinusoidal model are described and their performances are determined.

The organization of this paper is as follows. In Section II the TLS-ESM algorithm is described. In Section III the Hankel TLS algorithm for forming the model parameters is described.

(Zoran N. Milivojevic, Technical College, Aleksandra Medvedeva, Nis, Serbia. Predrag Rajkovic, Faculty of Mechanical Engineering, Aleksandra Medvedeva, Nis, Serbia. Sladjana M. Milivojevic, Technical Faculty, Svetog Save 65, Cacak, Serbia.)
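For illustration, the exponentially damped sinusoidal sum of the ESM can be synthesized in a few lines of Python with numpy (the function and parameter names are ours, not from the paper):

```python
import numpy as np

def esm_signal(n, components, noise=None):
    """Synthesize a sum of exponentially damped sinusoids:
    s(n) = sum_k a_k * exp(d_k * n) * sin(2*pi*f_k*n + phi_k) (+ noise).
    components is a list of (a_k, d_k, f_k, phi_k) tuples."""
    s = np.zeros(len(n), dtype=float)
    for a, d, f, phi in components:
        s += a * np.exp(d * n) * np.sin(2 * np.pi * f * n + phi)
    if noise is not None:
        s += noise
    return s

# one undamped component at a quarter of the sampling rate
print(esm_signal(np.arange(4), [(1.0, 0.0, 0.25, 0.0)]))  # ~[0, 1, 0, -1]
```

Setting d_k = 0 for every component reduces this to the plain sinusoidal model of Eq. (1).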
In Section IV the results of the comparative analysis of the application of the SM and ESM models to transient and non-transient sequences are presented.

II. TLS-ESM ALGORITHM

The TLS (Total Least Squares) algorithm is used for determining the parameters of the exponential sinusoidal model [4]. For the input segment s(n), n = 1, ..., N, the TLS algorithm determines the parameters of the model of order L (b(l), l = 1, ..., L) under the condition of minimizing

SUM_n (s(n) - ŝ(n))^2 = SUM_n (Δs(n))^2,    (4)

where

ŝ(n) = SUM_{l=1..L} b(l) (s(n-l) + Δs(n-l)),  n = (L+1), ..., N.    (5)

Equation (5) can be written in the form

ŝ(n) = SUM_{k=1..K} a_k(n) e^(d_k(n) n) sin(2π f_k(n) n + φ_k(n)),    (6)

where the damping factor d_k can be positive, negative or zero. By comparing equations (3) and (6) it can be seen that it is possible to apply the TLS algorithm for determining the parameters of the ESM model, i.e. for the automatic decomposition of an audio sequence into a certain number of damped sinusoids [5].

III. HANKEL TLS ALGORITHM

Due to its great calculation efficiency, the Hankel TLS (HTLS) algorithm is used for solving TLS problems. The HTLS algorithm has found intensive application in nuclear magnetic resonance spectroscopy. In [5] the HTLS algorithm is described which, for the input parameters a) sequence s(n), n = 1, ..., N, and b) model order K_e, generates the parameters of the estimated sinusoids (amplitudes â_k, frequencies f̂_k, phases ψ̂_k, damping factors d̂_k). The algorithm consists of the following steps:

Step 1: From the sequence elements s(n), the Hankel matrix H with dimensions m x n is formed.

Step 2: The SVD (singular value decomposition) of the matrix H is determined:

H = U S V^H.    (7)

Step 3: Truncated matrices of rank K_e are constructed:

Ĥ = U_Ke S_Ke V_Ke^H,    (8)

where U_Ke contains the first K_e columns of the matrix U, V_Ke contains the first K_e columns of the matrix V, and S_Ke is the (K_e x K_e) submatrix of the matrix S.

Step 4: The TLS solution is calculated for the overdetermined equation system

U_Ke(down) E = U_Ke(up),    (9)

where U_Ke(up) is obtained from the matrix U_Ke by eliminating the first row, and U_Ke(down) is obtained from the matrix U_Ke by eliminating the last row. The K_e eigenvalues of E are used for the estimation of the signal poles:

ẑ_k = e^(d̂_k + j2π f̂_k),  k = 1, ..., K_e.    (10)

Step 5: The equation of the model is formed on the basis of the signal poles:

ŝ(n) = SUM_{k=1..K_e} c_k ẑ_k^n,    (11)

where

c_k = â_k e^(j ψ̂_k).    (12)
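The five steps can be sketched with numpy as follows. This is a simplified illustration: an ordinary least-squares solve stands in for the TLS step, and the matrix size and all names are our choices, not the paper's.

```python
import numpy as np

def htls(s, Ke, m=None):
    """Estimate Ke signal poles (damped complex sinusoids) from samples s."""
    N = len(s)
    m = m if m is not None else N // 2
    # Step 1: Hankel matrix H, H[i, j] = s[i + j]
    H = np.array([[s[i + j] for j in range(N - m + 1)] for i in range(m)])
    # Step 2: singular value decomposition
    U, S, Vh = np.linalg.svd(H)
    # Step 3: keep the first Ke left singular vectors (rank-Ke truncation)
    Uk = U[:, :Ke]
    # Step 4: shift invariance Uk[:-1] @ E = Uk[1:] (LS here instead of TLS)
    E, *_ = np.linalg.lstsq(Uk[:-1], Uk[1:], rcond=None)
    # the eigenvalues of E estimate the poles z_k = exp(d_k + j*2*pi*f_k)
    z = np.linalg.eigvals(E)
    return z, np.angle(z) / (2 * np.pi), np.log(np.abs(z))
```

On a noiseless damped sinusoid e^(-0.01 n) sin(2π·0.1·n) with Ke = 2 this recovers the frequency pair ±0.1 and damping -0.01; the amplitudes and phases c_k would follow from a second least-squares fit against the estimated poles.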
Taking into consideration that the poles come in complex-conjugate pairs, the model described by (11) can be presented in the reduced form

ŝ(n) = SUM_{k=1..K} â_k e^(d̂_k n) sin(2π f̂_k n + φ̂_k),    (13)

where

φ̂_k = ψ̂_k + π/2,  k = 1, ..., K.    (14)

The model described by (13) is equivalent to the ESM model described by (3). A detailed description of the HTLS algorithm can be found in [8, 6].

IV. PERFORMANCES OF THE ESM MODEL

The performance of the ESM model with the implemented HTLS algorithm will be determined by means of the signal-to-noise ratio (SNR), defined as

SNR = 10 log ( SUM_{n=1..N} s^2(n) / SUM_{n=1..N} (s(n) - ŝ(n))^2 ).    (15)

The SNR thus defined represents a measure of the precision of the modeled signal in relation to the original signal. Further analyses were carried out on an archived speech signal whose sampling frequency is F_S = .5 kHz, by means of the mathematical package MATLAB. Comparative analyses are performed in the time and frequency domains on: a) the original speech signal (s), b) the speech signal modeled by means of the sinusoidal model (s_SM) and c) the speech signal modeled by means of the exponential sinusoidal model (s_ESM). The following examples relate to two types of sequences: a) with not so outstanding transience (signal sequences where periodicity is expressed) and b) with outstanding transience.

IV.A. Sequences with not so outstanding transience

Examples of sequences of speech and audio signals with not so outstanding transience are presented in Fig. 1, where the original signal s and the modeled signals s_SM and s_ESM are shown for K_e = 3. In Fig. 2 the same signals are shown for K_e = 8. In these sequences the periodicity of the signal can be seen (pronunciation of vowels, musical signals, etc.).

IV.B. Sequences with outstanding transience

Sequences of the speech signal with an outstanding effect of transience are modeled for several values of the model order K_e (K_e = 4, 8, 6, 3, 64, 8). The time forms of the signals are presented in

Fig. 3 (K_e = 3) and Fig. 5 (K_e = 8). The signal spectra are determined by means of the FFT and presented in Fig. 4 (K_e = 3) and Fig. 6 (K_e = 8).

Fig. 1. The sequence of the speech signal for the word 'five' with not so outstanding transience: a) s, the original signal; b) s_ESM, the signal reconstructed from the estimated parameters of the ESM model; c) s_SM, the signal reconstructed from the estimated parameters of the SM model (F_S = .5 kHz, K_e = 3).

Fig. 2. The sequence of the speech signal for the word 'five' with not so outstanding transience: a) s; b) s_ESM; c) s_SM (F_S = .5 kHz, K_e = 8).

Fig. 3. Transient segment of the speech signal for the word 'five': a) s; b) s_ESM; c) s_SM (F_S = .5 kHz, K_e = 3).

Fig. 4. Spectrum of the transient segment of the speech signal for the word 'five': a) s; b) s_ESM; c) s_SM (F_S = .5 kHz, K_e = 3).

Fig. 5. Transient segment of the speech signal for the word 'five': a) s; b) s_ESM; c) s_SM (F_S = .5 kHz, K_e = 8).

Fig. 6. Spectrum of the transient segment of the speech signal for the word 'five': a) s; b) s_ESM; c) s_SM (F_S = .5 kHz, K_e = 8).

In Table I the SNR results for the sinusoidal and the exponential sinusoidal model, for transient and weakly transient segments, are presented.

TABLE I
SNR FOR A) TRANSIENT AND B) NOT SO OUTSTANDING TRANSIENT SEGMENTS FOR THE SINUSOIDAL AND THE EXPONENTIAL SINUSOIDAL MODEL, DEPENDING ON THE MODEL ORDER K_e
(columns: K_e; SNR_SM and SNR_ESM for the transient segment; SNR_SM and SNR_ESM for the not so outstanding transient segment; mean values)

On the basis of the time and frequency diagrams, as well as of the tabular SNR data, it can be concluded that the ESM model is superior to the SM model. Its special advantage is in the modeling of signals in transient sequences. In the transient sequence the ratio of the mean values is 3.886/.3 = 43.99, whereas in the sequence with not so outstanding transience the ratio is .54.

V. CONCLUSION

In this paper the exponential sinusoidal audio model with the implemented HTLS algorithm is described. In the second part of the paper, the results of testing the application of the sinusoidal and the exponential model in modeling the speech signal are presented. Modeling was performed for various operating parameters of the model. As a measure of success, i.e. of the precision of the modeling, the SNR was used. Analysis of the obtained results points to the greater efficiency of ESM in relation to SM for all values of the model order. In addition, the results demonstrate great efficiency in transient segments, i.e., in the concrete example, times in relation to the SM model. The results testify that the application of the ESM model for speech and audio signal compression is justified in archiving and in transmission over communication channels.

REFERENCES

[] E. B. George, M. J. T. Smith, "Speech analysis/synthesis and modification using an analysis-by-synthesis/overlap-add sinusoidal model", IEEE Trans. Speech Audio Process. 5 (5) (September 997).
[] R. J. McAulay, T. F.
Quatieri, "Speech analysis-synthesis based on a sinusoidal representation", IEEE Trans. Acoustics, Speech and Signal Processing 34 (4) (August 986).
[3] T. F. Quatieri, R. J. McAulay, "Speech transformations based on a sinusoidal representation", IEEE Trans. Acoustics, Speech and Signal Processing 34 (6) (August 986).
[4] R. J. McAulay, T. F. Quatieri, Speech Coding and Synthesis, Elsevier, Amsterdam, 995, pp. 73.
[5] L. B. Almeida, F. M. Silva, "Variable-frequency synthesis: an improved harmonic coding scheme", in: Proceedings of the International Conference on Acoustics, Speech and Signal Processing, San Diego, CA, 984.
[6] J. Jensen, R. Heusdens, "A comparison of differential schemes for low-rate sinusoidal audio coding", in: Proceedings of the IEEE Workshop on Applications of Signal Processing to Audio and Acoustics, New Paltz, NY, October 3.
[7] H. Purnhagen, B. Edler, C. Ferekidis, "Object-based analysis/synthesis audio coder for very low bit rates", in: Proceedings of the 4th AES Convention, Amsterdam, The Netherlands, May 998.
[8] K. Hermus, W. Verhelst, P. Lemmerling, P. Wambacq, S. Huffel, "Perceptual audio modeling with exponentially damped sinusoids", Signal Processing 85 (5).
[9] S. N. Levine, "Audio representations for data compression and compressed domain processing", Ph.D. Thesis, Stanford University, December 998.
[] T. Painter, A. Spanias, "Perceptual coding of digital audio", Proceedings of the IEEE, vol. 88(4), April.
[] K. Hermus, W. Verhelst, P. Wambacq, P. Lemmerling, "Total least squares based subband modelling for scalable speech representations with damped sinusoids", in: Proceedings of the International Conference on Spoken Language Processing, Beijing, China, October.
[] J. Jensen, S. H. Jensen, E. Hansen, "Harmonic exponential modeling of transitional speech segments", in: Proceedings of the International Conference on Acoustics, Speech and Signal Processing, Istanbul, Turkey, June.
[3] P. Lemmerling, I. Dologlou, S.
Van Huffel, "Speech compression based on exact modeling and structured total least norm optimization", in: Proceedings of the International Conference on Acoustics, Speech and Signal Processing, Seattle, WA, May 998.
[4] S. Van Huffel, J. Vandewalle, The Total Least Squares Problem: Computational Aspects and Analysis, Frontiers in Applied Mathematics, vol. 9, SIAM, Philadelphia, PA, 99.
[5] K. Hermus, W. Verhelst, P. Wambacq, "Perceptual audio modeling based on Total Least Squares algorithms", AES Convention, Munich, Germany, May.
[6] S. Van Huffel, H. Chen, C. Decanniere, P. Van Hecke, "Algorithm for time-domain NMR data fitting based on total least squares", J. Magn. Reson. A (994).

Non-uniform Thresholds for Removal of Signal-Dependent Noise in Wavelet Domain

Mitko Kostov, Cvetko Mitrovski and Momcilo Bogdanov

Abstract - This paper presents experimental results obtained using a method that we propose for signal denoising. Noisy signals are processed in the discrete wavelet transform domain with a non-uniform threshold adjusted to the noise level.

Keywords - Denoising, signal-dependent noise, threshold, wavelet domain filtering.

I. INTRODUCTION

There are many methods for noise removal, but very few of them focus on removing varying noise that depends on the local intensity of the signal. This kind of signal-dependent noise is commonly found in nuclear medicine (NM) images. Until now, the offered methods have been based on conventional filtering in the time and frequency domains and, lately, on wavelet transforms. Research to date in wavelet-domain filtering has focused on removing Gaussian noise by using a global threshold that is independent of the signal, or by using multiscale products of the detail coefficients [-3]. These methods are inappropriate for denoising signals that contain signal-dependent noise. One simple fix would be to work with the square root of the image, since this operation is variance stabilizing []. Another method, for Poisson noise removal in the wavelet domain, uses a non-uniform threshold for filtering the noisy wavelet coefficients [4].

In this paper we present results obtained using our method for removal of signal-dependent noise. It is based on generating a non-uniform threshold adjusted to the noise level in the signal. The paper is organized as follows. The standard wavelet shrinkage procedure is outlined in Section II. In Section III we discuss how to estimate the varying threshold. In Section IV we verify the validity of our approach on a 1-D deterministic signal contaminated with artificially added noise proportional to the signal intensity. Finally, Section V concludes the paper.

II.
WAVELET DOMAIN FILTERING

In the series expansion of a discrete-time function f using wavelets,

f(t) = SUM_{j=1..J} SUM_k d_jk ψ_jk(t) + SUM_k a_Jk φ_Jk(t),    (1)

ψ_jk and φ_jk denote the wavelet and scaling function, respectively, the indexes j and k are for dilation and translation, and a_Jk and d_jk are the approximation and detail coefficients. The most popular form of wavelet-based filtering, wavelet shrinkage [], is performed by weighting the corresponding detail wavelet coefficients d_jk by factors h_jk and calculating the inverse wavelet transform. Conventionally, the filtering is performed either by using the hard-threshold nonlinearity

h_jk(hard) = 1, if |d_jk| >= τ_j;  0, if |d_jk| < τ_j,    (2)

or by using the soft-threshold nonlinearity

h_jk(soft) = sgn(d_jk)(|d_jk| - τ_j)/d_jk, if |d_jk| >= τ_j;  0, if |d_jk| < τ_j,    (3)

where τ_j is a user-specified threshold level.

(Mitko Kostov and Cvetko Mitrovski are with the Faculty of Technical Sciences, I.L. Ribar bb, 7 Bitola, Macedonia. Momcilo Bogdanov is with the Faculty of Electrical Engineering and Information Technologies, Karpos II b.b., P.O. Box 574, Skopje, Macedonia.)

III. ESTIMATING THE NON-UNIFORM THRESHOLD

Let y denote a noisy signal that consists of a noise-free signal s and noise n with zero mean value and energy proportional to the local signal intensity:

y = s + n.    (4)

For this signal the wavelet transform (WT) satisfies

WT(y) = WT(s) + WT(n).    (5)

Let A and D denote the approximation and detail wavelet coefficients obtained with the wavelet transform of the signal y. Since the noise is proportional to the local signal intensity, a threshold τ for filtering all the detail wavelet coefficients D should not be uniform; it should follow the local signal intensity. Hence, a non-uniform threshold can be determined as τ = αA, where α is a constant parameter which can be determined by equalizing the energy of the approximation and the detail coefficients:

SUM_i D(i)^2 = α^2 SUM_i A(i)^2.    (6)

In this paper we use eq. (6) for the determination of α, based on a set of two new vectors D_1 and A_1 (with lower dimension than the initial vectors D and A), which are created as follows. The detail coefficients D are wave-like and frequently change their polarity. Therefore, the coefficients between the positive and negative peaks have magnitudes that are close to zero, and we discard their contribution by keeping the local extremes in D and zeroing the other coefficients:

D_1(i) = D(i), if D(i) > max(D(i-1), D(i+1)) or D(i) < min(D(i-1), D(i+1));  0 otherwise.    (7)

Similarly, the vector A_1 is constructed by zeroing the approximation coefficients A for those indices i where D_1(i) = 0:

A_1 = A · sign(|D_1|).    (8)

Since the coefficients D and αA have equal energy and, at the same time, the coefficients D contain narrower and higher peaks compared to the coefficients αA, the coefficients αA will be smaller than the coefficients D_s where the signal portion in (5) is bigger, but bigger than the coefficients D_n where there is no signal. In general, since the noise is proportional to the local signal intensity, the threshold τ can be written as

τ(i) = α_n A^n(i) + ... + α_1 A(i) + α_0,  i = 1, ..., L,    (9)

where L is the length of the vectors A and τ. The coefficients α_0, α_1, ... can be obtained by minimizing the square measure E in the least-squares sense:

E = SUM_i (D(i) - (α_n A^n(i) + ... + α_1 A(i) + α_0))^2.    (10)

For simplicity, the threshold τ can take the form τ = αA from (6), in which case the error function E to be minimized is

E = SUM_i (D(i) - α A(i))^2.    (11)

IV. EXPERIMENTAL RESULTS

In this Section we illustrate our proposal on a deterministic 1-D signal contaminated with artificially generated noise (Fig. 1). The non-decimated wavelet transform [3] is performed using an NPR-QMF prototype filter [5] instead of wavelet filters. We obtained approximation coefficients that follow the signal contour, as illustrated in Fig. 1c. Since the noise is signal dependent, the detail coefficients D (Fig. 2) follow the signal level: the signal intensity in the interval -6 is higher than the signal intensity in the intervals 6-8 and 8-, so the noise is highest in the interval -6 and lowest in the interval 8-. By comparing Fig. 2а and Fig. 2b it can be seen that the coefficients D contain signal details D_s with higher intensities around the positions 6 and 8 (jumps in Fig. 2а, i.e. peaks in Fig. 2b), while in the other regions of the interval there is noise. Also, in Fig. 2b it can be noticed that a significant portion of the detail coefficients have values close to zero as a consequence of the fast changing of their polarity.

We experimented with non-uniform thresholds calculated in two ways: 1) with eq. (6), by energy equalizing of the new vectors A_1 and D_1 in (8); and 2) with eq. (9), for different polynomial orders n, by minimizing (10). The thresholds follow the height of the detail peaks, as illustrated in Fig. 3: where the noise level is higher, the thresholds are higher, and vice versa. If we compare the two thresholds obtained with 1) and 2), the first threshold is closer to the peaks of the coefficients D, which means it is better generated than the second one. This is illustrated in Fig. 3. When a threshold is calculated using (9), the number of terms n does not have a significant impact on the threshold.

Fig. 1. (а) Deterministic signal; (b) noisy signal; (c) first-level approximation coefficients; (d) reconstructed signal using the proposed approach; (e) reconstructed signal using the universal global threshold with db7; (f) reconstructed signal using the multiscale product.
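The shrinkage rules of Section II and the threshold estimation of Eqs. (6)-(8) can be sketched together in a few lines of numpy (a minimal version; tau may be a vector, which is exactly what the non-uniform threshold exploits):

```python
import numpy as np

def hard_threshold(d, tau):
    # keep coefficients whose magnitude reaches tau, zero the rest (Eq. 2)
    d = np.asarray(d, dtype=float)
    return np.where(np.abs(d) >= tau, d, 0.0)

def soft_threshold(d, tau):
    # shrink magnitudes toward zero by tau, zeroing those below it (Eq. 3)
    d = np.asarray(d, dtype=float)
    return np.sign(d) * np.maximum(np.abs(d) - tau, 0.0)

def local_extremes(D):
    """Eq. (7): keep only the local extremes of the detail coefficients."""
    D = np.asarray(D, dtype=float)
    D1 = np.zeros_like(D)
    for i in range(1, len(D) - 1):
        if D[i] > max(D[i - 1], D[i + 1]) or D[i] < min(D[i - 1], D[i + 1]):
            D1[i] = D[i]
    return D1

def estimate_alpha(A, D):
    """Energy-equalizing estimate of alpha for the threshold tau = alpha*A."""
    D1 = local_extremes(D)
    # Eq. (8): zero A wherever the reduced detail coefficient is zero
    A1 = np.asarray(A, dtype=float) * np.sign(np.abs(D1))
    # Eq. (6): alpha^2 * sum(A1^2) = sum(D1^2)
    return np.sqrt(np.sum(D1 ** 2) / np.sum(A1 ** 2))
```

With alpha in hand, the detail coefficients are filtered as `hard_threshold(D, estimate_alpha(A, D) * A)` before the inverse transform.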

Fig. 2. A part of the first-level wavelet coefficients: (а) approximation coefficients A; (b) detail coefficients D.

Fig. 3. Details and different estimated non-uniform thresholds: the threshold obtained through energy equalizing when τ = αA (8), and the thresholds obtained through minimization of (10) for different orders of the polynomial (9); legend: τ = α·sqrt(E_D/E_A), τ = α_1·A + α_0, τ = α_2·A^2 + α_1·A + α_0, τ = α_3·A^3 + ..., τ = α_4·A^4 + ... (for a clearer view, only the values from 0 to 3.5 are shown).

Moreover, the experiment illustrates that although the procedure of energy equalizing is simpler than the minimization procedure, it yields a better estimated threshold. The values of the coefficients α_i in (9) obtained by minimization of (10) in the least-squares sense, and the corresponding error, are given in Table I for different polynomial orders.

Using a non-uniform threshold preserves the signal coefficients while removing the noise coefficients. This is shown in Fig. 4, where the noise-free detail coefficients and the filtered detail coefficients are given. The coefficients are filtered with the threshold τ = αA, where α is estimated through the coefficients A_1 and D_1 in (8). If a global threshold were used, it would not be possible to reduce the noise without removing part of the signal at the same time. Hence, the signals reconstructed using a global threshold or the multiscale product suffer from distortion at the positions of the signal jumps, while there is no distortion in the signal filtered with the proposed approach. This distortion appears as a result of removing signal information contained in the detail coefficients when a global threshold is used. This can be seen in Fig. 1d, e and f.

In order to quantitatively compare the proposed method to some known wavelet-based methods, we use the energy of the noise remaining in the filtered signal ŝ as a measure:

E_n = SUM_i (s(i) - ŝ(i))^2.    (12)

Fig. 4. (а) First-level detail coefficients of the noise-free signal; (b) coefficients filtered using the threshold τ = αA, where α is estimated through the coefficients A_1 and D_1 in (8).

Table II contains the results when the signal is reconstructed from the first-level approximation and the filtered detail coefficients. It can be seen that when the proposed approach is applied, the noise energy is smaller compared to the other methods, which use a global threshold.

V. CONCLUSION

In this paper we give some views and experimental results obtained with our proposed method for denoising signals that contain signal-dependent noise. The experiments give the advantage to the proposed method over the known wavelet-based methods.

REFERENCES

[] D. L. Donoho, "Wavelet Thresholding and W.V.D.: A -Minute Tour", Int. Conf. on Wavelets and Applications, Toulouse, France, June 99.
[] Y. Xu, J. B. Weaver, D. M. Healy, Jr., and J. Lu, "Wavelet Transform Domain Filters: A Spatially Selective Noise Filtration Technique", IEEE Trans. on Image Processing, vol. 3, no. 6, Nov. 994.
[3] E. J. Balster, Y. F. Zheng and R. L. Ewing, "Feature-Based Wavelet Shrinkage Algorithm for Image Denoising", IEEE Trans. on Image Processing, vol. 4, pp. 4-39, Dec. 5.
[4] R. D. Nowak, R. G. Baraniuk, "Wavelet-Domain Filtering for Photon Imaging Systems", IEEE Trans. on Image Processing, vol. 8, iss. 5, May 999.
[5] S. Bogdanova, M. Kostov, and M. Bogdanov, "Design of QMF Banks with Reduced Number of Iterations", IEEE Int. Conf.
on Signal Processing, Applications and Technology, ICSPAT 99, Orlando, USA, Nov.

TABLE I
COEFFICIENTS α_i IN (9) OBTAINED BY MINIMIZATION OF (10), AND THE CORRESPONDING ERROR, FOR DIFFERENT POLYNOMIAL ORDERS n
(columns: polynomial order n; coefficients α_n, ..., α_1, α_0; error E)

TABLE II
THE ENERGY OF THE REMAINING NOISE IN THE SIGNAL FILTERED WITH THE PROPOSED METHOD AND WITH KNOWN METHODS
(known methods: universal global threshold [], the methods of [] and [4], and the square root transform []; wavelets: sym, coif, db)

SESSION CSIT IV

Computer Systems and Internet Technologies IV


GUI to Web Transcoding

Tsvetan Filev, Julian Pankov and Ivan Pankov

Abstract - In this paper a transcoding (transformation) scheme for a graphical application running on a server or desktop computer to a web environment is presented. The main working environments and interfaces are listed, as well as the main graphical environments. A realization scheme is depicted. The main disadvantages of the current approaches are listed, while the advantages of the new approach are shown.

Keywords - transcoding, environments, interfaces, graphical user interface, web interface

I. INTRODUCTION

So far, the following approaches for application transformation have been used: web remote administration (Remote Desktop, VNC); ActiveX controls for IE connecting with remote Office Express components; Firefox add-ons (AutoCAD, SolidWorks, etc.); and Flash animations, which act like ActiveX controls or add-ons. The major drawback of the listed methods is the lack of unification. The software also looks different on different systems. The control is done by JavaScript and Visual Basic scripts, and the development is hard, expensive and insecure. A further disadvantage is that the different components do not work the same way each time, and there is no portability. In web remote administration, control over the system is taken by the new remote user, leaving the current user without control.

II. TYPES OF ENVIRONMENTS FOR USER INTERFACE

The environments for user interface, and an algorithm for their detection, are examined in more detail in []. They are: Tsvetan V.
Filev is with the Faculty of Computer Systems and Control, Technical University, Kliment Ohridski 8, Sofia, Bulgaria. Julian Pankov is with the Faculty of Mechanical Engineering, Technical University, Kliment Ohridski 8, Sofia, Bulgaria. Ivan Pankov is with the Faculty of Computer Systems and Control, Technical University, Kliment Ohridski 8, Sofia, Bulgaria.

shell environment; web environment; SOAP query; XML-RPC query; REST query; pure web query (GET or POST); WAP; graphical environment (GTK); different mobile devices (PDA, smart phone). Of course, the list is subject to change, and more environments can always be added, such as an environment for voice control based on VoiceXML, which is developed, standardized, documented and supported by the W3C [3].

III. TRANSCODING - DEFINITION

Transcoding [] is found in many areas of content adaptation, but here it is presented in the area of adapting content generated by computer systems for PDA or smartphone devices. In the area of mobile devices, data transcoding is obligatory because of the diversity of devices, in order to assure a good display. An example is taking a picture at high resolution and sending it to another phone which displays low resolutions. A transcoding of the image is needed to lower the resolution so that it is displayed properly on the remote device. Besides making the image depiction better, transcoding is sometimes obligatory in order to display the image at all. Transcoding reduces to transforming the data of one system into a format appropriate for another one. Besides data transformation, a transformation of the computing power of the remote terminal is also examined. While showing the result, the terminal takes advantage of the computing power of the computer on which the application was started. In this way the computing power of the remote terminal is effectively increased. One more functionality it adds is the extension of desktop applications into network applications.
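A toy example of the resolution transcoding described above, written here as a nearest-neighbor downscale of a row-major grayscale pixel buffer (purely illustrative; a real device would use a proper image library):

```python
def downscale(pixels, w, h, new_w, new_h):
    """Nearest-neighbor downscaling of a row-major grayscale image buffer:
    each output pixel samples the nearest source pixel."""
    out = []
    for y in range(new_h):
        for x in range(new_w):
            src_x = x * w // new_w   # map output column to source column
            src_y = y * h // new_h   # map output row to source row
            out.append(pixels[src_y * w + src_x])
    return out

# a 4x4 ramp image reduced to 2x2
print(downscale(list(range(16)), 4, 4, 2, 2))  # [0, 2, 8, 10]
```

The same idea, applied to the rendered frames of a graphical application, is what lets a low-resolution remote terminal display output produced on a more capable machine.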
In this way even simple single-user applications without networking support can be transformed into networked ones without problems, with only the help of a web server.

IV. TYPES OF COMPUTER GRAPHICAL INTERFACES

Windows: this operating system is fully graphical and gives the users a very well developed interface, which in turn allows a great number of applications to be developed, such as video games, CAD systems, systems for 3-dimensional modeling, text processing, imaging and video. Unix/Linux/BSD: these are text-based operating systems which have a graphical server, called the X server, allowing the creation of full-featured software applications. Of course, here too exists a full range of

applications for all purposes. Mac OS: this is the first graphical operating system, and as such it accounts for one of the best developed graphical user interfaces. The latest versions are UNIX based and as such have an integrated X server.

V. TYPES OF WEB SERVERS

IIS: Internet Information Server was developed by Microsoft. It began as a part of the later versions of the Windows NT operating system, with major functionality for its time. In the following Windows products the next versions appeared, allowing the creation of dynamic pages using ASP and, later, .NET applications. Apache: this is a web server developed by the Apache Software Foundation and it is fully open source. The first public version was published in 1995. The current generation is a second-generation server with a modular structure and well developed configuration possibilities. Both servers allow the execution of all kinds of applications through the Common Gateway Interface or through a handler loaded in the server.

VI. TRANSCODING SCHEME

The transcoding scheme is depicted in Fig. 1. On a given desktop computer or server, a web server, the transcoder and a given graphical application (e.g. Windows, Apache, MATLAB) are running. A remote web browser executes a POST or GET query to the web server over the HTTP protocol, sending data from the mouse and the keyboard. The web server receives the query and executes the transcoder, passing it the events. The transcoder communicates through an API with the graphical application, passing it the events. Next the transcoder generates an image of the current state of the application and returns the result through the web server to the web browser.

Fig. 2. A modification of the transcoding scheme allowing the execution of graphical applications hosted remotely from the server.
Fig. 1. A transcoding scheme of a graphical application from the hosting operating system to a web page on the Internet.

In Fig. 2 a modification of the scheme from Fig. 1 is depicted. Here the transcoder and the web server are located on different hosts, communicating with each other through a helping server application and web services such as SOAP, XML-RPC or Windows services. The advantage is that applications from many hosts in one local area network can be executed with only one web server. This modification is appropriate for the development of administration software.

Fig. 3. A modification of the transcoder eliminating the need for a web server.

In Fig. 3 a modification of the scheme is depicted which eliminates the web server: the transcoder listens directly on port 80 or some other port. In this way remote or desktop PDA applications communicate directly with the graphical applications. This speeds up the transcoding and makes it simpler.

VII. .NET REALIZATION

In Fig. 4 the .NET realization is depicted; Windows services are used to realize the processes of data exchange. In Fig. 5 a realization with gzip compression included is depicted.

Fig. 4. UML diagram of the realization.
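For illustration, the mouse and keyboard events arriving in the browser's GET or POST query could be decoded with a helper along these lines. This is a hedged sketch only: the function name and the field names (`event`, `x`, `y`) are hypothetical, not taken from the described realization.

```cpp
#include <cassert>
#include <map>
#include <sstream>
#include <string>

// Decode a "key=value&key=value" query string of the kind the remote
// browser could send to the transcoder (illustrative, not the paper's code).
std::map<std::string, std::string> ParseQuery(const std::string& query) {
    std::map<std::string, std::string> fields;
    std::istringstream in(query);
    std::string pair;
    while (std::getline(in, pair, '&')) {
        std::string::size_type eq = pair.find('=');
        if (eq != std::string::npos)
            fields[pair.substr(0, eq)] = pair.substr(eq + 1);
    }
    return fields;
}
```

The transcoder would map such decoded fields onto API calls against the running graphical application.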

VIII. CONCLUSIONS

The major disadvantages of the current methods for transcoding were shown, and a unified and universal method was described. The development can be turned into middleware for transcoding. In the current realization the graphical interface started on a desktop computer works in the background and does not interfere with the current user using the resources. One big area of application of the method is remote and mobile training.

Fig. 5. Scheme with gzip compression.

REFERENCES
[1] T. Filev, Convergence of environments, TU-Sofia, 2006.
[2]
[3]
[4] A wiki developers, 2007.
[5] Newman M., A Controllable Notifying Thread Queue with Generics, 2006, e.asp?df=&forumid=3369&exp=&select=
[6] Libraries. Pocket PC Developer Network, 2006.
[7] Nikitas, M., Improve XML Web Services' Performance by Compressing SOAP, 2003.
[8] .NET Zip Library #ziplib (SharpZipLib), 2007.


Classification of Classifiers

G. Gluhchev, M. Savov and O. Boumbarov

Abstract: In this paper the relationship between three classifiers broadly used in practice (Mahalanobis distance, K nearest neighbors and majority voting) and the optimal classifier in terms of minimum average losses is outlined. Their performance is experimentally tested on the real problem of signature recognition.

Keywords: Mahalanobis distance, K nearest neighbors, Majority voting.

I. INTRODUCTION

In this paper three of the most popular classifiers are discussed, namely the Mahalanobis distance based classifier, the K nearest neighbors classifier and majority voting. Using the theoretical set-up of the optimal classifier in terms of minimum average losses, we attempt to show how the different classifiers relate to the optimal (Bayesian) one. Statistical pattern recognition theory assumes that some a priori information is available, including the prior probabilities P(Ω1) and P(Ω2) of the classes, the feature density functions f1(x) and f2(x), and the losses c12 and c21 incurred by wrongly classifying into Ω1 and Ω2, respectively. The optimal classifier minimizing the average losses is defined as [1]

  x ∈ Ω1 if P(Ω1|x) / P(Ω2|x) ≥ c12 / c21, x ∈ Ω2 otherwise      (1)

or

  x ∈ Ω1 if f1(x) / f2(x) ≥ P(Ω2) c12 / (P(Ω1) c21), x ∈ Ω2 otherwise      (2)

Condition (1) involves the a posteriori probabilities of the classes, while condition (2) is based on the maximum likelihood ratio. If P(Ω1) = P(Ω2) and c12 = c21, the decision is made according to the maximal a posteriori probability or likelihood ratio. Since these factors are constant, we will assume they are equal.

Georgi Gluhchev and Mladen Savov are with the Institute of Information Technologies of the Bulgarian Academy of Sciences, Acad. G. Bonchev Str., Sofia, Bulgaria. Ognyan Boumbarov is with the Technical University in Sofia, 8 Kl. Ohridski Str., Sofia, Bulgaria.

II.
MAHALANOBIS DISTANCE

In the case of normal distributions, inequality (2) will look as follows:

  e^(-(1/2)(x - m1)^t S1^(-1) (x - m1)) ≥ e^(-(1/2)(x - m2)^t S2^(-1) (x - m2))      (3)

or, after taking a logarithm,

  (x - m1)^t S1^(-1) (x - m1) ≤ (x - m2)^t S2^(-1) (x - m2)      (4)

Eq. (4) is actually a comparison of the Mahalanobis distances of x to the centers m1 and m2 of the classes, where S1 and S2 are their covariance matrices.

III. K NEAREST NEIGHBORS

When no justified assumptions can be made about the priors and class-conditional distributions, non-parametric classifiers are used. One of the most popular among them is the K nearest neighbors classifier, where a point x is attached to the class Ωi for which the ratio ki/K of its ki representatives among the K nearest neighbors of x is maximal [1]. It is worth noting that, without resorting to statistical estimation, this empirical classifier evaluates the average a posteriori probability P(Ωi|x) in a neighborhood of x. Thus one could conclude that the K nearest neighbors classifier is an empirical approach to the optimal Bayesian classifier. However, one has to keep in mind that this classifier implicitly assumes that the quantities of training samples correspond to the prior probabilities of the classes. If this is not the case, the classification error may be too high.

IV. PARZEN WINDOWS

The Parzen window is used for the evaluation of the feature density function in a neighborhood of a point [1]. Therefore, according to inequality (2), a classifier based on Parzen windows could be optimal as well. The accuracy of the evaluation depends on the quantity of samples, on the one hand, and on the volume of the neighborhood, on the other. This approach resembles to a large extent the K nearest neighbors one. The difference is that instead of the number K, here the volume of the neighborhood is predefined.
The advantage consists in its independence from the prior probabilities of the classes, i.e., different sizes of the training sequences for different classes will not affect the evaluation. Using Parzen windows for classification actually means that equal prior probabilities are assumed.
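As a small numeric illustration of the comparison of Mahalanobis distances from Section II, the sketch below handles the simple case of diagonal covariance matrices; the class parameters used in it are made up for illustration and are not the paper's signature data.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Squared Mahalanobis distance for a diagonal covariance matrix,
// given as the vector of per-feature variances (illustrative sketch).
double MahalanobisSq(const std::vector<double>& x,
                     const std::vector<double>& m,
                     const std::vector<double>& var) {
    double d = 0.0;
    for (std::size_t i = 0; i < x.size(); ++i)
        d += (x[i] - m[i]) * (x[i] - m[i]) / var[i];
    return d;
}

// Attribute x to the class with the smaller Mahalanobis distance:
// returns 1 for class 1, 2 for class 2.
int Classify(const std::vector<double>& x,
             const std::vector<double>& m1, const std::vector<double>& v1,
             const std::vector<double>& m2, const std::vector<double>& v2) {
    return MahalanobisSq(x, m1, v1) <= MahalanobisSq(x, m2, v2) ? 1 : 2;
}
```

With full covariance matrices the same rule applies after inverting S1 and S2.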

V. MAJORITY VOTING

Often in practice a decision is made depending on the number of votes. Such an approach is applicable to the classification problem, provided all the features are treated as independent voters of equal importance, in the following way. The interval [xmin_ij, xmax_ij] is determined for the i-th feature and the j-th class. All of the feature values of an unknown object x are tested for belonging to the corresponding interval of each class. If the test result is positive for a particular class, its score is increased by 1. The winner is determined by the maximal score. This classifier could be treated as a relative of the above-mentioned ones, provided a Parzen window of size equal to the interval of the corresponding feature is determined for each class. A maximal number of votes could be assigned to more than one class when this approach is used. For some specific problems, like signature verification, additional samples from the same class may solve the problem.

VI. EXPERIMENTAL COMPARISON

To test how the above reasoning is supported by practice, the classifiers have been applied to the real problem of signature authentication. For this, the signings of 4 volunteers were captured by a TV camera. Every volunteer submitted signatures that were used for training. The following 8 features were measured from each signing: 1) d, signing length as a number of frames; 2) α, hand orientation; 3) β, pen azimuth; 4) γ, pen tilt; 5) δ = α − β; 6) r1/r2, ratio of the distances between the pen center and the hand contour; 7) P, perimeter of the polygon defined by the characteristic points of the upper hand contour; 8) A, area of the polygon [3]. The classifiers' authentication performance was evaluated in terms of mean, minimal and maximal error. To do this, signatures of every volunteer were simulated using Matlab's random number generator and the assumption of statistically independent features [2].
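For concreteness, the interval-vote classifier of Section V, one of the classifiers applied here, can be sketched as follows; the intervals in the usage below are toy values, not the measured signature features.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Per-feature training interval [lo, hi] for one class.
struct Interval { double lo, hi; };

// Each feature votes for every class whose interval contains its value;
// the class with the maximal score wins (sketch of Section V).
int MajorityVote(const std::vector<double>& x,
                 const std::vector<std::vector<Interval> >& classIntervals) {
    int best = -1, bestScore = -1;
    for (std::size_t j = 0; j < classIntervals.size(); ++j) {
        int score = 0;
        for (std::size_t i = 0; i < x.size(); ++i)
            if (x[i] >= classIntervals[j][i].lo && x[i] <= classIntervals[j][i].hi)
                ++score;                      // this feature votes for class j
        if (score > bestScore) { bestScore = score; best = (int)j; }
    }
    return best;   // index of the winning class
}
```

Ties (equal maximal scores) are resolved here in favor of the first class; as the paper notes, additional samples may be needed to break them in practice.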
The classification results are shown in Table 1. For the Mahalanobis distance a low average classification error was obtained (Table 1, line 1). An absolute (error-free) result was obtained for 6 of the volunteers, while the maximal error was obtained by one of them. For the K nearest neighbors classifier a low average error was obtained when one neighbor was used. For three or five neighbors the average error was slightly higher (Table 1, lines 3 and 4). For the majority vote about 6% of wrong classifications and a maximal error of 9.6% were observed (Table 1, line 5).

Table 1. Results from the experimental comparison of the classifiers

  Classifier     | Average error % | Minimal error % | Maximal error %
  Mahalanobis    |                 |                 |
  1 neighbor     |                 |                 |
  3 neighbors    |                 |                 |
  5 neighbors    |                 |                 |
  Majority vote  |                 |                 |

A separate investigation with Parzen windows has not been carried out, due to the small amount of training data.

VII. CONCLUSION

In this paper three of the most popular classifiers have been analyzed and compared. The relationship between them was outlined, stemming from the assumptions about the available a priori information. It was shown that the Mahalanobis based classifier is quasi-optimal in the sense of minimal losses and normal distribution of the features. Empirical estimates of the a posteriori probabilities of the classes are obtained when the K nearest neighbors classifier is applied, provided the volume of the training sequences is proportional to the prior probabilities. Similar behavior can be expected if the class density functions are evaluated using Parzen windows. The majority vote can be thought of as a degenerate variant of the above classifiers. The experimental comparison carried out with real data confirmed the theoretical analysis. These results can be taken into account when practical classification problems have to be solved.
ACKNOWLEDGEMENTS

This investigation was supported partly by the Ministry of Education and Sciences in Bulgaria, Contract No BY TH-/6, and the BioSecure Network of Excellence, Contract No 57634/4.

REFERENCES
[1] R. O. Duda, P. E. Hart, Pattern Classification and Scene Analysis, John Wiley & Sons, N. Y., London, Sydney, Toronto, 1999.
[2] M. Savov, G. Gluhchev, O. Bumbarov, "Experiments on Signature Identification with Hand-pen System Features", Proc. of the Int. Conference Automatics and Informatics, Sofia, 2006.
[3] M. Savov, G. Gluhchev, "Signature Verification via Hand-Pen Motion Investigation", Proc. Int. Conf. Recent Advances in Soft Computing, Canterbury, 2006.

Methods of Graphic Representation of Curves in CAD Systems in the Knitting Industry

Elena Iv. Zaharieva-Stoyanova

Abstract: This paper treats the problems related to curve generation and its application in knitting industry CAD/CAM systems. The methods can be applied to curve generation in knitting object pattern design. The most commonly used methods are reviewed and some functions for curve generation are created. They can be used as programming modules in a CAD system for knitting pattern design.

Keywords: Computer graphics, CAD/CAM systems, parametric cubic curves, Bezier curves, B-splines, FF knitting method.

I. INTRODUCTION

The development of CAD/CAM systems needs a solution of some problems related to the graphic representation of designed objects in digital format. Generally, the application of computer graphics in knitting industry CAD/CAM systems has two aspects:
- knitting structure design;
- design of knitting pattern shapes.

The second one is related to the knitting machine's capacity to make products by the fully-fashioned (FF) method. It means that the machine knits the cuts of the products, or the whole products. This method makes it possible to avoid cutting the materials and reduces the number of operations, which reduces the waste products to a minimum. Usually, the designed objects, the patterns of knitting products, are represented with straight-line polygons [1],[2]. This manner of pattern representation is used for the following reasons: it is possible for the knitting product's size to have a small tolerance within ± cm; it is easier to process straight lines than curves; knitting machines are not so precise in material production. This precision depends on a knitting machine characteristic called fineness or gauge, which is related to the number of needles per inch or cm; it determines the loop size and the thread thickness, too. All these features distinguish the presentation of knitting patterns from that of cloth patterns.
Because of these reasons the patterns of knitting products are described as straight-line polygons without curves. If there is a curved section, the workers form it additionally by cutting material.

Elena Ivanova Zaharieva-Stoyanova is with the Technical University of Gabrovo, 4 H. Dimitar str., Gabrovo, Bulgaria.

This paper treats the problems related to curve representation as an object of computer graphics. These methods can be applied to curve generation in knitting object pattern design. The most commonly used methods are reviewed. The methods of curve generation used in representing cloth patterns are similarly given in [3], but, as was already mentioned, knitting pattern shapes are more flexible. What is aimed at here is to apply more methods in the design of knitting pattern objects. Functions drawing curves are created by these methods. Said functions will be used for the graphic representation of knitting products' patterns. Their development is strongly related to the requirements of knitting product realization, i.e. to obtain a form of the knitting pattern which matches the designed one. This is the main problem of the FF knitting method, hence our effort to show some possibilities of its usage in CAD/CAM systems for knitting industry automation.

II. CUBIC CURVE REPRESENTATION

Cubic curves are commonly used in computer graphics because they are quite flexible and not too complex [4],[5],[6]. These features determined the way the paper reviews their application.

A. Parametric Cubic Curves

The parametric form of curve representation is the more usable one:

  x(t) = a3*t^3 + a2*t^2 + a1*t + a0
  y(t) = b3*t^3 + b2*t^2 + b1*t + b0      (1)

Using a vector form, the first equation can be represented as follows:

  x(t) = [t^3 t^2 t 1] [a3 a2 a1 a0]^t      (2)

so that equation (1) is transformed into

  x(t) = T·A,  y(t) = T·B      (3)

where T, A and B are vectors.
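As a minimal illustration of the parametric cubic x(t) = a3*t^3 + a2*t^2 + a1*t + a0 above, one coordinate can be evaluated in Horner form; this is an illustrative helper, not part of the paper's code.

```cpp
#include <cassert>

// Evaluate a3*t^3 + a2*t^2 + a1*t + a0 in Horner form
// (one coordinate of a parametric cubic curve).
double Cubic(double a3, double a2, double a1, double a0, double t) {
    return ((a3 * t + a2) * t + a1) * t + a0;
}
```

The same helper evaluates y(t) with the b-coefficients; sampling t over [0, 1] traces the curve segment.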

The derivatives of the curve with respect to t can be expressed as follows:

  x'(t) = [3t^2 2t 1 0]·A
  y'(t) = [3t^2 2t 1 0]·B      (4)

The definition of such a cubic curve needs 3 points (two endpoints and one middle point) as well as the tangent vector at the middle point. Let the points P1, P3 fix the endpoints, P2 the midpoint, and T2 the tangent at P2. The parametric curve can then be determined by the formula

  x(t) = T·M·X,  y(t) = T·M·Y      (5)

where M is the basis matrix and X = [x1 x2 x3 x2']^t, Y = [y1 y2 y3 y2']^t. Written out for x, the full formula is

  x(t) = [t^3 t^2 t 1] · M · [x1 x2 x3 x2']^t      (6)

The product of multiplying T by M gives the so-called basis (or blending) functions:

  [f1(t) f2(t) f3(t) f4(t)] = T·M      (7)

The basis functions are the weighting functions for the points P1, P2, P3 and the tangent T2, because they determine how these points exert influence on the curve. The curve will be represented as follows:

  x(t) = f1(t)x1 + f2(t)x2 + f3(t)x3 + f4(t)x2'
  y(t) = f1(t)y1 + f2(t)y2 + f3(t)y3 + f4(t)y2'      (8)

B. Bezier Curves

A Bezier curve in its most common form is a simple cubic equation that can be used in any number of useful ways. Originally it was developed by Pierre Bezier for CAD/CAM operations. Bezier curves are also a variation of the parametric cubic curves, specified by four control points P0, P1, P2, P3. The matrix corresponding to formula (6) is

       | -1  3 -3  1 |
  Mb = |  3 -6  3  0 |      (9)
       | -3  3  0  0 |
       |  1  0  0  0 |

The Bezier basis functions are as follows:

  f1(t) = 1 - 3t + 3t^2 - t^3
  f2(t) = 3t - 6t^2 + 3t^3
  f3(t) = 3t^2 - 3t^3
  f4(t) = t^3      (10)

or, in factored form:

  f1(t) = (1 - t)^3
  f2(t) = 3t(1 - t)^2
  f3(t) = 3t^2(1 - t)
  f4(t) = t^3      (11)

A Bezier curve of degree m is given in parametric format as

  P(t) = Σ_{i=0}^{m} C(m,i) t^i (1 - t)^(m-i) P_i      (12)

For better graphic interpretation let us transform this formula into

  P(t) = (1 - t)^m P_0 + Σ_{i=1}^{m-1} C(m,i) t^i (1 - t)^(m-i) P_i + t^m P_m      (13)

C.
B-splines

A B-spline is a spline function (a piecewise-polynomial function) whose basis function equals zero in all sub-segments except the interval [x_i, x_{i+1}] ([5],[6]). For sub-segment i the following formula is used:

  N_{i,0}(x) = 1 if x_i ≤ x < x_{i+1}, and 0 otherwise      (14)

An m-degree B-spline basis function on [x_i, x_{i+m+1}] can be represented recursively as

  N_{i,m}(x) = ((x - x_i)/(x_{i+m} - x_i)) N_{i,m-1}(x) + ((x_{i+m+1} - x)/(x_{i+m+1} - x_{i+1})) N_{i+1,m-1}(x)      (15)

Normally, quadratic and cubic B-splines are used. Although B-splines can be of higher order, almost always cubic B-splines are applied. The cubic B-spline basis functions work over four control points. The basis functions appear below. Note that special basis functions are needed for the first and last two sections of the B-spline so that it passes through the first and last points.
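The B-spline recursion above (the Cox-de Boor form) can be written directly as a small recursive function. This is a didactic sketch; production CAD code would normally use a faster iterative evaluation over the few non-zero basis functions.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Recursive B-spline basis function N_{i,m}(x) over the knot vector u.
// Degenerate (zero-length) knot intervals contribute nothing.
double N(const std::vector<double>& u, int i, int m, double x) {
    if (m == 0)
        return (u[i] <= x && x < u[i + 1]) ? 1.0 : 0.0;
    double left = 0.0, right = 0.0;
    if (u[i + m] != u[i])
        left = (x - u[i]) / (u[i + m] - u[i]) * N(u, i, m - 1, x);
    if (u[i + m + 1] != u[i + 1])
        right = (u[i + m + 1] - x) / (u[i + m + 1] - u[i + 1]) * N(u, i + 1, m - 1, x);
    return left + right;
}
```

A B-spline curve point is then the sum of control points weighted by these basis functions; over the valid parameter range the cubic basis functions sum to 1.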

The basis functions for the first section are:

  B1(t) = (1 - t)^3
  B2(t) = (7/4)t^3 - (9/2)t^2 + 3t
  B3(t) = -(11/12)t^3 + (3/2)t^2
  B4(t) = t^3/6      (16)

The basis functions for the second section are:

  B1(t) = (1 - t)^3/4
  B2(t) = (7t^3 - 15t^2 + 3t + 7)/12
  B3(t) = (-3t^3 + 3t^2 + 3t + 1)/6
  B4(t) = t^3/6      (17)

The basis functions for the middle sections are:

  B1(t) = (1 - t)^3/6
  B2(t) = (3t^3 - 6t^2 + 4)/6
  B3(t) = (-3t^3 + 3t^2 + 3t + 1)/6
  B4(t) = t^3/6      (18)

Fig. 1. Curve generation to form a sleeve in a knitting pattern using control points

A function generating a Bezier curve by n control points is created, too. These functions are applied in knitting pattern design. Figs. 1 and 2 are examples of their application.

III. FUNCTIONS FOR CURVE GENERATION

Using the formulae given in the previous section, some functions for curve generation are created. Some of them have already been presented in [7], [8]. This paper gives their development and the creation of some new functions. The following function creates a Bezier curve using four control points P[0]..P[3]:

void CBezierCurvesView::BezierCurves3(CDC *pDC, CPoint *P)
{
    double t, dt, x, y;
    pDC->MoveTo(P[0]);
    dt = 0.01;   // parameter step along the curve
    t = 0;
    while (t <= 1) {
        x = pow(1-t,3)*P[0].x + 3*pow(1-t,2)*t*P[1].x
          + 3*t*t*(1-t)*P[2].x + t*t*t*P[3].x;
        y = pow(1-t,3)*P[0].y + 3*pow(1-t,2)*t*P[1].y
          + 3*t*t*(1-t)*P[2].y + t*t*t*P[3].y;
        pDC->LineTo(CPoint((long)x, (long)y));
        t += dt;
    }
}

Fig. 2. Curve generation to form a sleeve in a knitting pattern using 3 control points

To generate a Bezier curve by n control points the following function is created:

void CBezierCurvesView::BezierCurvesDraw(CDC *pDC)
{
    double t, dt, x, y;
    double C, xk, yk;
    int i;
    pDC->MoveTo(P[0]);
    dt = 0.01;
    t = 0;
    while (t <= 1) {
        xk = 0; yk = 0;
        for (i = 1; i < n-1; i++) {
            C = (double)Facturiel(n)/(Facturiel(i)*Facturiel(n-i));
            xk += C*pow(t,i)*pow(1-t,n-i)*P[i].x;
            yk += C*pow(t,i)*pow(1-t,n-i)*P[i].y;
        }

        x = pow(1-t,n)*P[0].x + xk + pow(t,n)*P[n].x;
        y = pow(1-t,n)*P[0].y + yk + pow(t,n)*P[n].y;
        pDC->LineTo(CPoint((long)x, (long)y));
        t += dt;
    }
}

This function can be applied for curve generation of a knitting pattern, but it has a disadvantage: the curve does not go through the inner control points. To avoid this, the curve generation function splits the control points into several groups and applies the previous function to each of them. The result is given in Fig. 3.

Fig. 3. Curve generation to form a sleeve in a knitting pattern using the Bezier method

The function for curve generation using the B-spline method is created, too. It uses equations (15)-(18). The result of its application is represented in Fig. 4.

IV. CONCLUSION

This paper treats the problems related to curve generation and its application in knitting industry CAD/CAM systems. The methods can be applied to curve generation in knitting object pattern design. The most commonly used methods are reviewed, especially parametric cubic curve generation, generation of Bezier curves and the B-spline method. The paper traces out some methods for curve generation, and some functions for curve generation are created. They are applied as a part of a CAD system for knitting pattern design. The development of the curve generation functions is strongly related to the algorithm for accuracy of the knitting pattern shape form suggested in [9]. The project is realized as an application in MS Visual C++ 6.0 using MFC.

REFERENCES
[1] Zaharieva E., J. Angelova, "Methodics of Fully Fashioned Knitting on Cotton", Textil & Obleklo, 5/1998.
[2] Angelova, J., E. Zaharieva-Stoyanova, "Shape Precision at Set Outline Knitting", Scientific conference EMF 2005, September 2005, Varna.
[3] Rakova R., Hr. Petrov, "Methods for Curve Generation of Cloths Patterns by Polynomial Interpolation", Textil & Obleklo.
[4] Bourke P., Bézier curves, December 1996.
[5] Lukipudis E., Computer Graphics and Geometric Modeling, Sofia, Tehnika, 1996.
[6] Pavlidis T., Algorithms for Graphics and Image Processing, Bell Laboratories, Computer Science Press, 1982.
[7] Zaharieva-Stoyanova E., "Application of Bezier Curves in Knitting Industry CAD/CAM Systems", International conference ICEST 2005, Niš, Serbia and Montenegro.
[8] Zaharieva-Stoyanova E., "Application of Spline Functions in Knitting Products Graphic Design", International scientific conference UNITECH 2005, November 2005, Gabrovo.
[9] Zaharieva-Stoyanova E., "Algorithm for Computer Aided Design Curve Shape Form Generation of Knitting Patterns", International scientific conference on Automation, Quality and Testing, Robotics, AQTR 2006 (THETA), May 2006, Cluj-Napoca, Romania.

Fig. 4. Knitted pattern generated by using a cubic B-spline curve

Cognitive Model Extension for HCI

Nebojša D. Đorđević and Dejan D. Rančić

Abstract: This paper presents research results on Human-Computer Interaction (HCI) methodologies. We present an extension of a cognitive model for HCI (XUAN/t), based on the decomposition of the user dialogue into elementary actions (GOMS). Using this model, descriptions of the elementary actions performed by the user and the system are introduced sequentially, as they will happen. Based on the described model and psychometric concepts, we developed a software tool for testing the sensomotor abilities of a user in HCI. The software tool arranges tests into test groups for psychosensomotor and memory capabilities. User test results are persistently stored in a database and available for further statistical analysis.

Keywords: HCI, User interface.

I. INTRODUCTION

Research results from the past several years indicate a significant influence of human-computer interaction (HCI) on computer system development, which, combined with technological development, enabled the application of computer systems in almost every branch of human activity. HCI can be defined as a field of study related to the design, evaluation and implementation of interactive computer systems used by humans, which also includes research of the main phenomena that surround them [1]. The multidisciplinary nature of human-computer interaction requires contributions from different scientific disciplines; especially from computer science, cognitive psychology, social and organizational psychology, ergonomics and human factors, computer-aided design and engineering, artificial intelligence, linguistics, philosophy, sociology and anthropology. The main goal of HCI is to improve the interaction between the user and the computer, in order to make computers more user friendly and the designed systems more usable. Determining the degree of usability is a process in which systems are evaluated in order to determine product success, using the methods available to the evaluator.

II.
USER INTERFACE

The most important element in HCI is the user interface. The user articulates his requests to the system via a dialogue with the interface.

Nebojša D. Đorđević is with the Faculty of Electronic Engineering, Aleksandra Medvedeva 4, Niš, Serbia. Dejan D. Rančić is with the Faculty of Electronic Engineering, Aleksandra Medvedeva 4, Niš, Serbia.

The interface is the point at which human-computer interaction occurs. Physical interaction with the end user is provided using hardware (input and output devices) and software interaction elements. The user interface (UI), as the interaction medium of the system, represents the software component of the application which transforms user actions into one or more requests to the functional application component, and which provides the user with feedback about the results of his actions [2]. The key concepts of graphic interfaces were established in the early 1970s. They were based on the WIMP metaphor, which includes the key elements of the interface: Window, Icon, Menu and Pointer. Direct manipulation of graphic objects provides object manipulation on the computer screen via pointing devices as standard input devices of modern computer systems.

III. HCI METHODOLOGIES

The importance of human-computer interaction was noticed in the late 1970s. This caused the development of an independent research group, which later formed HCI as a special discipline. The subject of HCI research is the human being and everything related to a human being: work, environment and technology. The classification of HCI methodologies is made based on the method by which the end user is incorporated into system development [3]: User-centered development provides system development FOR the user, based on feedback information from the user during the entire process of system development. System development WITH users: development with user participation, which promotes system development in the user environment (manufacturing facilities, offices, etc.)
rather than within software companies. System development that takes the user into account: this approach uses cognitive modeling of end users in order to understand user behavior in a certain situation and why one system is better than another.

IV. COGNITIVE MODEL OF HCI

Cognitive modeling provides a description of the user in interaction with the computer system; it provides a model of the user's knowledge, understanding, intentions and mental processing. The description level differs from technique to

technique, and ranges from high-level goals and the results of thinking about a problem all the way to the level of motor activities of the user, such as pressing a key on a keyboard or a mouse click. Research on these techniques is done by psychologists as well as computer science specialists. The classification of cognitive models is based on whether the focus is on the user and his task, or on the transformation of the task into an interaction language [2]: hierarchical presentation of the user's tasks and goals (GOMS); linguistic and grammar levels; models of the physical level.

The GOMS (Goals, Operators, Methods and Selection) model [4] consists of the following elements: Goals are the results of the user's task and describe what the user is trying to accomplish. Operators are the basic actions which the user must make while working with a computer system. Operators can act on the system (pressing a key) or on the mental state of the user (reading a message). The detail level of the operators is flexible and varies based on the task, the user and the designer. Methods are the step sequences which need to be performed in order to reach a given goal; a step in a method consists of operators. Selection rules predict which method will be used to reach a given goal in case there are different possible methods to reach it.

Models of the physical level relate to human motor skills and describe the user's goals which are realizable in a short time period. An example is the KLM (Keystroke-Level Model) [5], used for determining the user's performance with a given interface. In this model, the task of accomplishing a goal is given in two stages: task acquisition, during which the user makes a mental picture of how to reach a given goal, and task performance using the system. Task acquisition closely connects KLM with the GOMS level, which gives an overview of the tasks for a given goal.
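In KLM, a task estimate is obtained by summing fixed per-operator times. The sketch below uses commonly cited operator times from the KLM literature; the paper itself quotes no concrete values, so these constants, and the example task, are assumptions for illustration only.

```cpp
#include <cassert>
#include <cmath>

// Commonly cited KLM operator times in seconds (assumed values):
// K = keystroke, P = point with the mouse, H = home hands between
// devices, M = mental preparation.
const double K = 0.28, P = 1.10, H = 0.40, M = 1.35;

// Hypothetical task: move hand to the mouse, think, point at a field,
// home back to the keyboard, type four characters.
double ExampleTaskTime() {
    return H + M + P + H + 4 * K;
}
```

The estimate is only as good as the designer's decomposition of the task into operators, which is exactly the precision caveat raised in the text.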
KLM decomposes the phase of task performance into five different physical operators (pressing a key on a keyboard, pressing a mouse button, moving a cursor to a desired position, moving a hand from keyboard to mouse and back, and drawing lines using a mouse), one mental operator (the mental preparation of the user for a physical action) and one system response operator (the user can ignore this operator unless he is required to wait for the system response). Each operator is assigned a time period for its action. By summing these time periods we get an estimated time for completion of the tasks for a given goal. The precision of the KLM model depends on the experience of the designer, because he is required to make a realistic judgment about the abilities of the end user. Obviously, the development of a high quality user interface is impossible without cognitive models and techniques. In HCI practice there is no separate cognitive methodology; rather, some cognitive models and techniques are used within other methodologies, usually during evaluation. Cognitive models and techniques significantly contribute to determining (rationalizing) how acceptable the designed solution is.

V. INTERACTION MODEL

Interaction models are descriptions of user inputs, application actions and result displays. They are based on formalisms which ensure their implementation within interface development tools. One of the oldest and most general interaction models is the PIE model [], which describes user inputs (from keyboard or mouse) and output to the user (on a screen or a printer). The UAN (User Action Notation) model [6] was developed by system designers in order to understand the complexity of interactions with regard to the system, rather than the user. The UAN model efficiently describes (and identifies) four elements of interaction in a way understandable to all participants in software development. Also, it does not differentiate between text and graphic interfaces, thus supporting every interaction technique.
A drawback of this model is that it approaches interactions by regarding the system only, without taking the other participant, the human being, into account. This problem was overcome in the XUAN (Extended User Action Notation) model, which treats the system and the user equally. The XUAN model describes the user and the system in terms of their visible (in the case of the user, articulated) and internal actions. Its advantage is that it includes human mental actions; its drawback is that it excludes the state of the interface, which can lead to inconsistency.

VI. EXTENDING THE XUAN INTERACTION MODEL

In order to evaluate user performance as realistically as possible, we extend the mentioned interaction models (UAN, XUAN). The extended model (XUAN/t, extended user action notation per time) treats the complexity of interactions equally from the side of the system and of the user. The model is given in table form (Fig. ), divided into two parts. The first part contains two rows describing the mental or sensory and the articulated or motor activities of the user. The second part contains three rows describing the interface (visible actions and interface conditions) and the internal system actions (kernel). The separation line dividing the two parts is highlighted in red because it represents the point at which human-computer interaction occurs, and it also represents a time scale. In addition to textual descriptions, activities are presented graphically on the time scale in proportion to their duration, which provides a visual interpretation of the position, order and duration of activities. In order to estimate the number of actions and the duration of the entire task efficiently, a complex dialogue is decomposed into elementary actions using the GOMS model. Descriptions of elementary actions by the user and by the

Nebojša D. Đorđević and Dejan D. Rančić

system are entered sequentially in order of occurrence. Each activity is given the time needed for its completion.

[Fig. : XUAN/t model of a click-on-a-program-field of the interface — a two-part table with rows for the user's mental/sensory activities (locate button, follow mouse cursor) and articulated/motor activities (move mouse cursor to the button, press mouse button, release mouse button), and for the interface's visible actions and conditions (button is selected, set focus to the button, release focus) and the kernel's internal actions (execute button action), laid out against time marks t1 to t5.]

Estimated time is determined by summing the times required for the individual activities (see Eq. 1). In this way the proposed model provides an interpretation of action descriptions with empirical variables which can be evaluated.

$T = \sum_{k=1}^{n} t_k$   (1)

In this model the time component is based on the duration of the individual elementary actions; it is delimited by given events as reference points. These events are initiated by the user, but they occur in the system, which can register them precisely in order to determine the beginning and the end of an activity. The model is intuitive and can easily be supported with available software tools.

[Fig. : User description input form (screenshot omitted).]

VII. TESTING COGNITIVE CHARACTERISTICS

Evaluation of the user's cognitive characteristics is done by tests designed to evaluate certain characteristics and obtain the user profile. Test construction is based on recognition of activities in user-computer interaction, prominent user characteristics and the method of measuring individual results. There are several steps in the user-computer dialogue, which we grouped into sensory, intellectual and articulatory activities. Within sensory activities, we isolated the processes by which a human being gains knowledge about surrounding phenomena and events, such as: the impact of physical and chemical processes from the environment on the human senses; the initiation of certain physiological processes in nerve cells of the sensory organs; the transmission of nerve excitation by neurons to the primary sensory zone in the cortex; and the initiation of a psychological response which enables the human to become aware of the stimuli acting on the sensory organ.

In order to articulate his demands, the user utilizes interaction elements of the user interface (hardware and software) which enable physical interaction with the computer. In physical interaction with a hardware device, the user makes a voluntary activity coordinated with the visual senses (from the primary sensory zone) and kinesthetic senses (from the motor cortex). Kinesthetic senses provide muscle coordination and the development of skills for performing complex movements during work. The classification into sensory, intellectual and motor activities is provisional, because they intermix during task performance. In order to investigate senso-motor abilities, based on the described model and on psychometric concepts, we developed a software CASE tool for evaluation of human cognitive characteristics in interaction with the computer.

[Fig. 3: Form for determining the list of tests and defining general and particular test conditions (screenshot omitted).]

The software tool provides input of user identification data and user characteristics (Fig. ), determination of a test list, and definition of general and particular test conditions (Fig. 3). In order to test all subjects under the same conditions it is necessary to define the general conditions (screen resolution, mouse speed, etc.) and determine the particular conditions of the micro-environment (noise, light, temperature, etc.). At the beginning of each test the subject is given a test task. During

testing, tests are given in a predetermined order and within designed time limits. Testing depends on the choice of tests given on the list. Test groups related to reception, information processing and motor activities include tests of memory, sensory and psychomotor abilities.

The goal of the sensory ability (perception) tests is to determine the subject's reaction times to auditory and visual stimuli. The subject's abilities in the domains of sight, hearing and kinesthetic senses are tested. The test lasts a given number of seconds, during which the subject is presented with a series of stochastic visual and auditory stimuli. The subject's task is to react as quickly as possible by pressing a certain key (LIGHT-OFF, RINGER-OFF), confirming registration of the tested stimulus. The system registers the time lapse between presentation of the stimulus and the subject's response as an evaluation parameter.

The goal of the psychomotor tests is to determine precision in object manipulation, psychomotor orientation, reaction time, manipulative aptness and the ability to make visual-motor guesses. The first group of tests, called CLICK-A-FIELD, probes psychomotor orientation, visual-motor guessing ability and coordinated manipulation of user-computer interaction tools, i.e. coordination of individual senses and body parts. The tests last a given number of seconds, and the subject's task is to click a field ( cm) which appears cyclically on the screen at randomly generated coordinates. During the test, the system continually registers on-line the times of certain events (PRESS-MOUSE-BUTTON, RELEASE-MOUSE-BUTTON) and connects them in the database with the user and the test. After the RELEASE-MOUSE-BUTTON event, the field is erased from the screen and appears at newly generated random coordinates. In order to determine the influence of different factors on the user's psychomotor characteristics, we developed four different tests.
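The on-line event registration described above can be sketched as follows. The FIELD-SHOWN event name and all timestamps are hypothetical illustrations; only the mouse-button event names come from the paper:

```python
# Hedged sketch: per-attempt reaction times from an event log of the kind the
# CLICK-A-FIELD tests register on-line. FIELD-SHOWN and the timestamps are
# hypothetical; PRESS/RELEASE-MOUSE-BUTTON follow the paper's event names.
event_log = [
    ("FIELD-SHOWN", 0.00),
    ("PRESS-MOUSE-BUTTON", 0.82),
    ("RELEASE-MOUSE-BUTTON", 0.90),
    ("FIELD-SHOWN", 1.40),
    ("PRESS-MOUSE-BUTTON", 2.05),
    ("RELEASE-MOUSE-BUTTON", 2.16),
]

def reaction_times(log):
    """Time from each field appearing to the press that dismisses it."""
    shown = None
    times = []
    for name, t in log:
        if name == "FIELD-SHOWN":
            shown = t
        elif name == "PRESS-MOUSE-BUTTON" and shown is not None:
            times.append(t - shown)
            shown = None
    return times

print([round(t, 2) for t in reaction_times(event_log)])  # [0.82, 0.65]
```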
The goals of these tests are the same; however: in test PM 1 the field on the interface is a darker shade of gray than the background; in test PM 2 the field is highlighted red on the interface; in test PM 3 the field is 3 cm; and in test PM 4, after the RELEASE-MOUSE-BUTTON event a beep sound is emitted to provide an auditory stimulus.

For determining precision and the ability of fast, easy, correct and coordinated manipulation of visual objects with the drag interaction technique, we developed test PM 5 (called DRAG-ME). The test lasts a given number of seconds, and the subject's task is to click a red rectangular object on the screen and drag it into a rectangular window with blue borders. After each attempt the object appears at different randomly generated coordinates. The system registers successful attempts on-line.

The main goal of the memory tests (TM 1) is to investigate memory span through the ability of immediate reproduction of a series of elements after only one viewing. This test is not time limited; it lasts until the first unsuccessful reproduction. The subject is presented, for a certain time interval, with a series of randomly generated numerical signs of a given length; the presentation time of the series is inversely proportional to its length. The subject's task is to reproduce the entire series correctly; the step is then repeated with a series one sign longer. We developed two more tests with the same scenario as TM 1, with a difference: in TM 2 the series are made of letter signs, and in TM 3 of alphanumeric signs. The system registers the longest successfully reproduced series length as the memory span parameter.

VIII. CONCLUSION

In order to evaluate user performance in interaction with the interface, we extended the concepts of existing interaction models. Based on the described model and on psychometric concepts, we developed a software tool for testing senso-motor abilities of the user in human-computer interaction.
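The memory-span procedure can be sketched as follows. The alphabet, base presentation time and starting length are hypothetical parameters, and the subject's reproduction step is simulated here rather than read from a real person:

```python
import random
import string

# Hedged sketch of the TM memory-span test: show ever-longer random series,
# with presentation time inversely proportional to series length, until the
# subject fails to reproduce one. All parameters are hypothetical.
def memory_span(reproduce, alphabet=string.digits, start_len=3, time_budget=6.0):
    length = start_len
    longest = 0
    while True:
        series = "".join(random.choice(alphabet) for _ in range(length))
        presentation_time = time_budget / length  # inversely proportional
        answer = reproduce(series, presentation_time)
        if answer != series:
            return longest  # longest successfully reproduced length
        longest = length
        length += 1        # next series is one sign longer

# Simulated subject who can hold at most 7 signs.
perfect_up_to_7 = lambda s, t: s if len(s) <= 7 else ""
print(memory_span(perfect_up_to_7))  # 7
```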
The test concept allows program-led testing of the intent-group and precisely quantifies user performance. In this study we obtained an efficient tool for building user profiles. Differentiation of test users is utilized to determine the compatibility of individual interaction models with given intent-groups. Qualitative result analysis provides recommendations for the design of individual interface parts that are useful for the intent-group for which the interface is designed.

REFERENCES

[1] A. Dix, J. Finlay, G. Abowd and R. Beale, Human-Computer Interaction, 2nd ed., Prentice Hall Europe, 1998.
[2] B. A. Myers and M. B. Rosson, "Survey on user interface programming", in P. Bauersfeld, J. Bennett and G. Lynch, editors, CHI '92 Conference Proceedings on Human Factors in Computing Systems, pp. 95-, ACM Press, New York, 1992.
[3] J. Brown, "HCI and Requirements Engineering - Exploring Human-Computer Interaction and Software Engineering Methodologies for the Creation of Interactive Software", SIGCHI Bulletin, vol. 9(), 1997.
[4] D. E. Kieras, "Towards a Practical GOMS Model Methodology for User Interface Design", in M. Helander, editor, Handbook of Human-Computer Interaction, Elsevier Science Publishers B. V. (North Holland), pp. 35-, 1988.
[5] S. K. Card, T. P. Moran and A. Newell, "The Keystroke-Level Model for user performance with interactive systems", Communications of the ACM, vol. 23, pp. 396-410, 1980.
[6] M. D. Harrison and D. J. Duke, "A review of formalisms for describing interactive behaviour", in Software Engineering and Human-Computer Interaction, Lecture Notes in Computer Science, vol. 896, Springer-Verlag, pp. ,


94 Performance Analysis of a Suboptimal Multiuser Detection Algorithm

Ilia Georgiev Iliev and Marin Nedelchev

Abstract - The paper presents research into the possibilities of an algorithm for MUD which uses a discrete successive search. The suboptimal methods for MUD in synchronous CDMA systems are a promising class compared to the optimal methods, because of their reduced number of calculations and computational complexity. A closed-form formula is derived for the number of iterations of the algorithm needed to obtain a constant error probability. This allows rational use of the computational power of the machine and a reduction of the number of computations in MUD with the successive search algorithm.

Keywords - Synchronous CDMA, Multi User Detection, Suboptimal algorithm.

I. INTRODUCTION

CDMA is an effective method for multiple access used in mobile communications. In CDMA systems multiple users transmit signals in the same bandwidth simultaneously. To separate the received signal of a given user, certain conditions must be observed; in practice they are not fulfilled. Consequently, the signals from the other users become multi-user interference (MUI). As is well known from information theory, the correlation receiver is optimal when MUI is absent. If MUI exists, it may be considered as noise with its own probability and energy characteristics; consequently, the existence of MUI degrades the noise performance of a common correlation receiver. To minimize this drawback, Multi User Detection (MUD) is used []. MUI carries useful information from the other users that can be processed in a proper way to improve the quality of the communication system. The optimal receiver for MUD is based on the Maximum Likelihood (ML) criterion []. The estimation of the optimal decision requires verifying all possible combinations of transmitted symbols.
Therefore the great number of computations is the main drawback of ML MUD. The computations increase exponentially with the number of active users. This seriously complicates its application in conventional mobile communication systems, notwithstanding the high speed and huge computational power of modern digital signal processors. Many methods and algorithms for suboptimal reception have been proposed that decrease the number of computations necessary for detection. In most cases they are a compromise between computational complexity and receiver quality [2,3]. There also exists MUD with parametric optimization, in which case the Maximum A posteriori Probability (MAP) criterion [] is used as the objective function. In the MUD case the objective function is discrete, discontinuous, non-differentiable and non-unimodal, therefore the most appropriate optimization methods are the random search method, genetic algorithms, evolutionary strategies etc. [4,5,6]. Compared to the optimal ML receiver for MUD, their application decreases the number of computations [6]. The paper presents research into the possibilities of an algorithm for MUD which uses a discrete successive search. The initial start point is derived from single correlation detection. The obtained results show that increasing the number of iterations approaches the theoretical curve for full compensation of the MUI, i.e. single-user detection. A closed-form formula is derived for the number of iterations of the algorithm needed to obtain a constant error probability.

Ilia Georgiev Iliev - Assoc. Professor, PhD, Dept. of Radiotechnics, Faculty of Communications and Communication Technologies, TU Sofia. Marin Veselinov Nedelchev - Assistant, PhD, Dept. of Radiotechnics, Faculty of Communications and Communication Technologies, TU Sofia.
This allows rational use of the computational power of the machine and a reduction of the number of computations in MUD with the successive search algorithm.

II. OPTIMAL MUD IN A SYNCHRONOUS CDMA SYSTEM

The system model is shown in Fig. . Signal processing is done in baseband. Let the number of users be K. They transmit synchronously direct-sequence spread-spectrum (DSS) PSK-modulated signals. The signal at the receiver input during a bit interval T_b is:

$r(t) = \sum_{k=1}^{K} \sqrt{E_k}\, d_k c_k(t)\, \alpha_k e^{j\theta_k} + n(t)$   (1)

r(t) is represented in discrete-time matrix form as:

$r(t) = cAEd + n(t)$   (2)

where d = [d_1, d_2, ..., d_K]^T is a column vector containing the values of the symbols with duration T_b transmitted by the K users. The symbols are bipolar NRZ coded, d_k ∈ {-1, +1};

[Fig. : System model of MUD — K users' symbols d_k are spread by sequences c_k(t), scaled by √E_k, pass through a flat Rayleigh fading channel α_k e^{jθ_k}, are summed with AWGN n(t); the received signal r(t) feeds a bank of K correlators whose outputs z_k enter the MUD algorithm producing the estimates d̂_k.]

$A = \mathrm{diag}[\alpha_1 e^{j\theta_1}, \alpha_2 e^{j\theta_2}, \ldots, \alpha_K e^{j\theta_K}]$ is a diagonal matrix whose main-diagonal elements are the complex channel transmission coefficients of the corresponding users. The amplitudes are Rayleigh distributed and the phases are uniformly distributed in the interval [0, 2π). It is assumed that the channels for different users are statistically independent. $E = \mathrm{diag}[\sqrt{E_1}, \sqrt{E_2}, \ldots, \sqrt{E_K}]$ is a diagonal matrix, where E_k is the symbol energy of the k-th user. c = [c_1(t), c_2(t), ..., c_K(t)]^T is a matrix each row of which consists of the elements of the spreading sequence of the corresponding user, c_k^(n) ∈ {-1, +1}. The sequence length is N = T_b / T_c, where T_c is the chip duration. n(t) is a realization of complex additive white Gaussian noise (AWGN) with independent real and imaginary components, each with variance σ² = N_o/2 [W/Hz].

MUD involves parallel reception of the symbols of the K users. The receiver consists of K single correlation receivers - K receiving channels (Fig. ). Assume ideal synchronization and that the complex channel transmission coefficients [α_1 e^{jθ_1}, α_2 e^{jθ_2}, ..., α_K e^{jθ_K}] are known. The vector z at the correlator outputs consists of the elements:

$z = [z_1, z_2, \ldots, z_K]^T = RAEd + n$   (3)

R is the K×K cross-correlation matrix whose coefficients are the normalized cross-correlation functions of the spreading sequences:

$R_{ij} = \frac{1}{N}\int_0^{T_b} c_i(t)\, c_j(t)\, dt$   (4)

n is the AWGN after the correlators, a column vector n = [n_1, n_2, ..., n_K]^T with covariance matrix R_n = 0.5 N_o R; its k-th element is $n_k = \int_0^{T_b} n(t)\, c_k(t)\, dt$.

A. Single detection
In the scheme shown in Fig. , the decision is made according to the ML criterion separately for each channel - single detection. The received symbols are then used for MUD. By assumption, symbol transmission from the multiple active users is synchronized. Represented in matrix form, the decision device output vector in the single detection case is:

$\hat{d}_F = [\hat{d}_{F1}, \hat{d}_{F2}, \ldots, \hat{d}_{FK}]^T = \mathrm{sign}\{\mathrm{Re}(A^{*} z)\}$   (5)

The variable z at the decision device input depends on the current transmitted symbols, the MUI and the AWGN. This allows determining the noise performance of single detection. For the k-th user, with a Gaussian approximation of the MUI distribution in a flat Rayleigh fading channel, the error probability is:

$Pe_k = 0.5\left(1 - \sqrt{\frac{E_k}{N_o + \frac{1}{N}\sum_{i=1, i\ne k}^{K} E_i + E_k}}\right)$   (6)

Eq. 6 considers the more difficult case of using random instead of pseudorandom spreading sequences; the multiplier 1/N is connected to the variance of the cross-correlation functions of the random sequences. When the power at the receiver input from all users is equal and random sequences are used, the error probability is:

$Pe = 0.5\left(1 - \sqrt{\frac{1}{N_o/E + (K-1)/N + 1}}\right)$   (7)

If the signal-to-noise ratio (SNR) is much larger than the signal-to-MUI ratio, the lower bound of the error probability with MUI from K users and single detection is:

$Pe_{fl} = 0.5\left(1 - \sqrt{\frac{1}{1 + (K-1)/N}}\right)$   (8)

B. Optimal MUD

Optimal MUD is obtained when the MAP criterion is applied: the maximum of the correlation of the received signal with all possible transmitted signals is sought. The logarithmic likelihood function in matrix form is []:

$\Psi(d) = 2\,\mathrm{Re}(d^T E A^{*} z) - d^T E A R A^{*} E d$   (9)

The symbol ()* denotes the complex conjugate and ()^T the transposed matrix. The decision on the transmitted symbols is:

$\hat{d} = \arg\max_{d} [\Psi(d)]$   (10)

The optimal MUD algorithm eliminates the influence of the MUI. The error probability per bit for a given user then equals the error probability of correlation single detection without MUI:

$Pe_k = 0.5\left(1 - \sqrt{\frac{E_k}{N_o + E_k}}\right)$   (11)

III. SUBOPTIMAL DETECTION WITH A SUCCESSIVE SEARCH ALGORITHM

A. Algorithm for successive search

Finding the optimal decision for MUD is considered as an optimization task whose objective function (9) is multiparameter, discrete and non-unimodal. One of the possible variants, proposed in [7], is to apply a successive search. The vector to optimize consists of the elements of the user symbol vector d. The number of computations in the algorithm decreases if the starting point of the optimization is the data from the correlation receiver outputs. Based on these data, combined with the decision criterion (5), a packet of symbols d̂_F is obtained. These criteria do not minimize the errors caused by MUI; on the contrary, in the approximation of the error probability by (6), (7), (8), (11), the total interference is treated as a Gaussian random value independent of the noise.
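The single-detection starting point d̂_F of Eq. (5) can be sketched with NumPy as follows. All sizes and parameter values (number of users, sequence length, noise level) are hypothetical:

```python
import numpy as np

# Hedged sketch: synchronous CDMA correlator outputs and single detection,
# d_F = sign(Re(A* z)) as in Eq. (5). All parameter values are hypothetical.
rng = np.random.default_rng(0)
K, N = 4, 31                                # users, spreading-sequence length
c = rng.choice([-1.0, 1.0], size=(K, N))    # random spreading sequences
R = (c @ c.T) / N                           # normalized cross-correlations
A = np.diag(rng.rayleigh(1.0, K) * np.exp(1j * rng.uniform(0, 2 * np.pi, K)))
E = np.diag(np.sqrt(np.full(K, 1.0)))       # sqrt of the symbol energies
d = rng.choice([-1, 1], size=K)             # transmitted symbols
noise = rng.normal(0, 0.05, K) + 1j * rng.normal(0, 0.05, K)
z = R @ A @ E @ d + noise                   # correlator outputs, Eq. (3)
d_hat = np.sign((A.conj() @ z).real)        # single detection, Eq. (5)
print(d_hat.astype(int))
```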
The decision criterion (10) for MUD compensates the influence of the MUI, and the error probability then depends only on the power of the AWGN. If one bit in the packet is mistaken, it can be corrected using the decision criterion (10). It is necessary to make only (K+1) computations of the objective function (9), over the vectors of the set M_d that have Hamming distance one from the vector d̂_F:

$M_d = \{\, d : H(\hat{d}_F, d) = 1 \,\}$   (12)

One iteration of the algorithm is thus a search in the vicinity of a point of the K-dimensional space at Hamming distance one. This is realized as follows: each step of the optimization algorithm consecutively changes one element of the vector d (flips one bit) and evaluates the objective function; then the bit which maximizes (9) is chosen. The decision criterion transforms into:

$\hat{d} = \arg\max_{d \in M_d} [\Psi(d)]$   (13)

The search removes errors as many times as there are errors in one vector d.

B. Parameters of the successive search algorithm

The receiver output symbols are parallel, i.e. the received vector d̂ is a packet consisting of K symbols. Let the channels of the different users be independent and the requirements for the use of Eq. (8) be fulfilled. The number of symbol errors m in a given packet is then binomially distributed with probability:

$P_K(m) = C_K^m\, (Pe_{fl})^m (1 - Pe_{fl})^{K-m}$   (14)

Let us assume that the objective function (9) decreases with increasing Hamming distance from the optimal decision of (10). If there is more than one error in the packet, the procedure described above may be repeated as many times as there are errors; each iteration of the successive search removes only one error from the packet. With this reduction of the error probability, the lower bound of the error probability per bit for a given user is:

$Pe_L = \sum_{m=L+1}^{K} \frac{m}{K}\, C_K^m\, (Pe_{fl})^m (1 - Pe_{fl})^{K-m}$   (15)

where L is the number of corrected errors.
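The one-flip successive search of Eqs. (12)-(13) can be sketched as follows, assuming NumPy and writing Ψ as 2Re(dᵀEA*z) − dᵀEARA*Ed; the matrices passed in are hypothetical examples, not the paper's simulation:

```python
import numpy as np

# Hedged sketch: one-bit-flip successive search maximizing the likelihood
# Psi(d) = 2Re(d^T E A* z) - d^T E A R A* E d, started from the single-
# detection packet d_F. Inputs and sizes are hypothetical.
def psi(d, R, A, E, z):
    v = E @ A.conj() @ z
    return 2 * np.real(d @ v) - np.real(d @ (E @ A @ R @ A.conj() @ E) @ d)

def successive_search(d_F, R, A, E, z, L):
    d = d_F.copy()
    for _ in range(L):                      # L iterations, one flip each
        best, best_val = None, psi(d, R, A, E, z)
        for k in range(len(d)):             # K candidates at Hamming dist. 1
            trial = d.copy()
            trial[k] = -trial[k]            # flip one bit
            val = psi(trial, R, A, E, z)
            if val > best_val:
                best, best_val = trial, val
        if best is None:                    # no flip improves: stop early
            break
        d = best
    return d
```

Each iteration evaluates the objective at the current point plus its K single-flip neighbours, i.e. (K+1) evaluations, as stated in the text.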
In fact, L defines the minimal number of iterations of the successive search. Fig. shows the dependence of the error probability on the number of active users (the size of one packet) for a constant number of corrected errors (the number of iterations of criterion (13)). The dashed line shows the lower bound of the error probability in the case of single correlation detection, Eq. (8). It is clearly seen that increasing the number of iterations decreases the error probability, due to compensation of the MUI. Eq. (14) shows that multiple errors are less probable. The obtained dependence (15) and Fig. make it possible, for a given error probability and number of active users, to compute the minimal number of iterations L of the algorithm. In this way one may adaptively change the number of calculations depending on the mobile network load. It is clear that the successive search algorithm significantly reduces the number of evaluations of the objective function.
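The minimal number of iterations L for a target error probability can be computed directly from Eqs. (8) and (15); a sketch, in which K, N and the target value are hypothetical:

```python
from math import comb

# Hedged sketch: lower bound (15) on the bit error probability after L
# corrected errors, with the single-detection floor Pe_fl from Eq. (8).
# K, N and the target error probability are hypothetical values.
def pe_floor(K, N):
    return 0.5 * (1 - (1 / (1 + (K - 1) / N)) ** 0.5)    # Eq. (8)

def pe_lower_bound(K, N, L):
    p = pe_floor(K, N)
    return sum(m / K * comb(K, m) * p**m * (1 - p)**(K - m)
               for m in range(L + 1, K + 1))             # Eq. (15)

def minimal_iterations(K, N, target_pe):
    """Smallest L whose bound (15) meets the target error probability."""
    for L in range(K + 1):
        if pe_lower_bound(K, N, L) <= target_pe:
            return L
    return K

print(minimal_iterations(K=10, N=31, target_pe=1e-5))
```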

[Fig. : Error probability Pe versus the number of active users K for increasing numbers of corrected errors L; the dashed line is the single-detection lower bound.]

If the optimal algorithm is applied, the number of computations J of the objective function is J = 2^K. If L is used as the termination criterion of the search, the number of computations depends on the number of iterations and of active users; furthermore, L may be used as a limit for terminating the search.

IV. SIMULATION RESULTS

The algorithm was simulated in MATLAB. The spectrum is spread with a random sequence for each user with length N=3. The channel is AWGN with slow Rayleigh fading. Fig. 3 and Fig. 4 show the measured error probability as a function of the mean value of Eb/No for K active users and different numbers of iterations L=1,2,3,4. The results show that increasing the number of iterations approaches the theoretical curve for full compensation of the MUI, i.e. single-user detection, Eq. (11). It is clearly seen from Fig. 3 that for L=4 the lowest error probability Pe=10^-5 is reached. In the other cases Pe is bounded below, and the bound coincides with the value obtained by Eq. (15).

[Fig. 3: Error probability versus E_b/N_o for K active users and several values of L, with the theoretical and single-detection curves.]

[Fig. 4: Error probability versus E_b/N_o for K active users and L=6, with the theoretical and single-detection curves.]

V. CONCLUSION

The suboptimal methods for MUD in synchronous CDMA systems are a promising class compared to the optimal methods, because of their reduced number of calculations and computational complexity. The results presented in this paper allow adaptively changing the number of iterations, and hence the number of calculations, of the MUD algorithm as a function of the number of active users for a constant error probability.
Additionally, the number of calculations is reduced because the initial starting point of the optimization is chosen after single detection. The obtained results show that increasing the number of iterations approaches the theoretical curve for full compensation of the MUI, i.e. single-user detection.

REFERENCES

[1] S. Verdú, Multiuser Detection. New York: Cambridge Univ. Press, 1998.
[2] R. Lupas and S. Verdú, "Linear multiuser detectors for synchronous code-division multiple-access channels", IEEE Trans. Inform. Theory, vol. 35, pp. 123-136, Jan. 1989.
[3] Z. Xie, R. Short and C. Rushforth, "A family of suboptimum detectors for coherent multiuser communications", IEEE J. on Selected Areas in Commun., vol. 8, pp. 683-690, May 1990.
[4] C. Ergun and K. Hacioglu, "Multiuser detection using a genetic algorithm in CDMA communications systems", IEEE Trans. Commun., vol. 48, Aug. 2000.
[5] M. J. Juntti, T. Schlösser and J. O. Lilleberg, "Genetic algorithms for multiuser detection in synchronous CDMA", IEEE ISIT '97, Ulm, Germany, p. 49, 1997.
[6] K. Yen and L. Hanzo, "Hybrid genetic algorithm based multi-user detection schemes for synchronous CDMA systems", IEEE Vehicular Techn. Conference, Tokyo, May 2000.
[7] Peng Hui Tan and Lars K. Rasmussen, "Multiuser Detection in CDMA - A Comparison of Relaxations, Exact, and Heuristic Search Methods", IEEE Trans. Wireless Comm., vol. 3, no. 5, 2004.
[8] Rumen Arnaudov and Rossen Miletiev, "Analysis of the irregularly sampled signals above the Nyquist limit", Metrology and Measurement Systems, vol. XIII, no. 3, 2006, pp. 3-36, Poland.

98 Realization of Train Rescheduling Software System Snežana Mladenović and Slavko Vesković Abstract - During the past decades numerous methods have been developed, resolving more or less successfully the NP-hard (re)scheduling problems. If, however, there is an ambition to bring into practice one of the scheduling methods, the research must focus on both the development of algorithms and of corresponding software systems. The paper presents the most interesting realization details of the first prototype of train rescheduling system. Keywords - Train rescheduling, Software system, OPL, CP I. INTRODUCTION The train scheduling problem belongs to a category of NPhard problems of combinatorial optimization, and hence is complex for both modeling and solving. On the other hand, that problem must be solved as a part of tactical planning process in real railway systems. The assignment of train rescheduling is that on a smaller fragment of railway network, over a shorter planning period an operational reconstruction of timetable is made, in respond to disturbances that have arisen. The rescheduling in general may be considered to be a more difficult problem than an initial scheduling because additional requirements are imposed to it [, ]: to find a solution in a given real time; to have a recovered schedule which will deviate from the initial one as little as possible; the solution if not optimal, to be at least "good enough" with respect to the assigned objective function, etc. Authors have been agreed that train rescheduling is quite a difficult work. According Norio et al. [3], major reasons of this are as follows: it is difficult to decide an objective criterion of rescheduling which is uniformly applicable; train rescheduling is a large size combinatorial problem; a high immediacy is required; no necessary information can be always obtained. 
Since train rescheduling is such a difficult job, the assistance of software systems has long been desired, and nowadays train rescheduling systems are in practical use. However, only a few published papers deal with train rescheduling software in real time. In fact, the current rescheduling systems mostly test whether the solution proposed by the user is feasible; they do not perform full schedule regeneration. It can also be noted that authors simplify the scheduling problem in two ways: by simplifying the network structure, and by omitting and/or approximating constraints that govern train movement. Thus, the model used by Isaai and Singh in [4] does not allow sequencing on a single-track line between two consecutive stations: the first train must complete its arrival at the next station before the next train departs from the previous station. This rule relaxes the problem but leads to a poorer solution, as slower trains can hold up faster trains while traversing from one station to another. The design, development and implementation of a train rescheduling system is a specific assignment of software engineering. The paper presents recommendations to be followed during design, development and implementation of the rescheduling system. In accordance with these recommendations, the first prototype of a train rescheduling system has been realized. The rest of the paper is organized as follows: Section 2 points to some particularities of the rescheduling software life cycle. Section 3 records the expected design requirements. Section 4 deals with implementation issues. Special attention must be paid to testing in incremental software development, which is discussed in Section 5.

Authors are with the Faculty of Transport and Traffic Engineering, University of Belgrade, Vojvode Stepe 35, Belgrade, Serbia.
An analysis of the first prototype results is a prerequisite for specification of new requirements and development of a new, more refined prototype, which is discussed in Section 6. Finally, Section 7 presents conclusions.

II. RESCHEDULING SOFTWARE LIFE CYCLE

The spiral model is, in the authors' opinion, the model of first choice for development of rescheduling applications. The basic assumption is that the specification of user requirements is not completely finalized before the stages of design and implementation. The software system is developed incrementally, through a series of prototypes, each being verified and validated in light of the new user requirements. A particularity of the proposed spiral model is a risk analysis that must be carried out before the design of any new prototype. The risk in developing rescheduling software is not low; a prototype may simply "fail" if too tight time limits have been imposed. Herein, only the details of the train rescheduling software life cycle that differ from other software systems will be highlighted.

III. DETERMINATION OF REQUIREMENTS

The essential requirements put to train rescheduling software are: interaction with other software systems, existence of a graphical user interface, and respect of time limits. The rescheduling system must receive information from hierarchically higher planning levels; thus, the initial train timetable and the network topology must be accessible to the train rescheduling software. The rescheduling system also must

Realization of Train Rescheduling Software System

receive the latest information regarding resource availability, job progress, etc. from the process monitoring system. The general idea is that all available data reside in the information system database (DB), from which they are accessible to the rescheduling software. Usually, a significant effort is required to adapt the real system's database for the rescheduling system's input. The requirement that the database be correct, consistent and complete often implies designing a series of tests that the data must pass before being used. Since a real database from a railway company was not accessible during the research, a demo MS Access database was created, assumed to be correct, consistent and complete.

Within the database, static and dynamic data can be distinguished. The static data are all data on jobs and resources that do not depend on scheduling; e.g., data on train categories are static. Data on resources are relatively static, as are data on the regular timetable. The job priorities are also static data, not depending on scheduling either. A priority may be based on the planner's assessment or may result from a procedure that takes into account other data from the information system database. A job priority change may depend on the scheduling period, but also on some external, hardly predictable events; thus suburban and urban trains may temporarily, during the peak-hour period, get a higher priority than international trains. In our model, the priority change procedure takes into consideration expert assessments stored in the database for individual sections and individual periods of the day. The dynamic base consists of all schedule-dependent data: job start and completion times, current job positions, the number of delayed jobs, etc. Some data may be considered both static and dynamic, e.g. resource setup time.
The occurrence of unexpected dynamic data in the DB is actually the trigger for the train rescheduling procedure. The scheduling model base (MB) also holds an important place in our rescheduling system. It collects the models that optimize one or another objective function, imposing or relaxing certain constraints. A special procedure selects a model from the MB, taking into account the objective function the user wants to optimize. The architecture of a hypothetical information system incorporating a train rescheduling (schedule recovery) module is presented in Fig. 1 in the form of a data-flow diagram between the processes. The schedule recovery module should enable model management: choice, combination, sequencing, running, etc.

The user interfaces can determine the scheduling system's usability. Obviously, the schedule visualization must resemble the one the users have been accustomed to in their work for many years. Also, since the inference engines and decision-making are hidden from the planners (users), the presentation of scheduling results must quickly and easily assure them of the validity of the "good-enough" solution found. The scheduling software must have an interface based on the WIMP paradigm (Windows, Icons, Menus, Pointer). The user interfaces for the database modules have a standardized form determined by the database management package used. The realized schedule recovery module contains seven models corresponding to different objective functions (maximum tardiness, maximum weighted tardiness, total tardiness, total weighted tardiness, makespan, maximum slack of trains in stations, number of late trains). If we wish to offer the planner (user) the possibility of a direct model choice, we must furnish a user interface that enables such manipulation. No other modes of interactive manipulation are necessary, because the very objective of the rescheduling system is full automation that eliminates the user's slow actions.
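The selection of a model from the MB according to the objective function the user chooses can be sketched as a simple dispatch table. This is only an illustration; the function names, train names and numbers below are assumptions, not taken from the actual system.

```python
# Sketch of a model base (MB): each entry maps an objective-function
# name to a routine that evaluates that objective for a candidate
# schedule (job -> completion time), given due dates and weights.
# All names and values are illustrative.

def max_tardiness(schedule, due, weights):
    return max(max(0, c - due[j]) for j, c in schedule.items())

def total_weighted_tardiness(schedule, due, weights):
    return sum(weights[j] * max(0, c - due[j]) for j, c in schedule.items())

def makespan(schedule, due, weights):
    return max(schedule.values())

MODEL_BASE = {
    "max_tardiness": max_tardiness,
    "total_weighted_tardiness": total_weighted_tardiness,
    "makespan": makespan,
    # ... one entry per objective function held in the MB
}

def select_model(objective):
    """Select a model from the MB for the user's chosen objective."""
    try:
        return MODEL_BASE[objective]
    except KeyError:
        raise ValueError(f"no model in MB for objective {objective!r}")

# Evaluate a candidate schedule under one of the objectives:
schedule = {"train1": 95, "train2": 130}    # completion times [min]
due      = {"train1": 100, "train2": 120}   # timetable times [min]
weights  = {"train1": 1, "train2": 3}       # train priorities

model = select_model("total_weighted_tardiness")
print(model(schedule, due, weights))        # 3 * (130 - 120) = 30
```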
The literature describes numerous standard user interfaces for the presentation of scheduling information [5]. It is interesting that none of the standard graphic interfaces was fully suitable for schedule visualization in the train rescheduling problem, for both objective and subjective reasons. Objectively, none of the interfaces presents transparently enough the sequencing, overtaking, crossing, waiting and movement of trains. On the other hand, the planners (users) are used to the visualization known as the train diagram, which is actually a slightly modified Gantt chart. The modification consists of the resources "touching" on the y-axis (in fact, the real infrastructure objects touch each other) and of the replacement of bars by their diagonals, which symbolizes the train movement on a resource. Fig. 2 shows the realized graphic presentation of a schedule found by the train rescheduling system. The modified Gantt chart is generated automatically by MS Excel. In the graphic presentation, the "background" is always a tabular presentation of the train diagram, giving the start and end times of each job activity, along with information on the resource on which it will be accomplished.

IV. IMPLEMENTATION

The implementation of the scheduling system was accompanied from the very beginning by a dilemma. Dozens of software companies claim to have developed automatic schedule generators applicable in different real systems, so should one choose a commercially available scheduling generator, develop a system "from zero", or take a combined approach? In attempts to implement the train rescheduling system solely with the available scheduling generators, a series of problems arose: an immensely oversized scheduling problem, difficulties in establishing a link between the scheduling system and the schedule implementation monitoring system, difficulties in coding special cases (e.g.
station tracks that under specific conditions behave as a single resource, and as separate resources otherwise), timing requirements, etc. On the other hand, the idea was to speed up the implementation stage by using the available scheduling generators. The Integrated Development Environment (IDE) OPL Studio [7] enables us to create and modify Constraint Programming (CP) and scheduling models using the Optimization Programming Language (OPL), to compose and control models using the procedural language OPL Script, to run models with ILOG Solver and ILOG Scheduler, and to present results (schedules) in tabular and graphic form. The OPL Studio trial version is available from the ILOG site

and it was used for the implementation of the first prototype of the train rescheduling system.

Snežana Mladenović and Slavko Vesković

In order to improve the time performance, the available software tools were extended with modules implementing heuristics specific to the train rescheduling problem. Separation heuristics split the global problem into a series of smaller subproblems that the ILOG Scheduler is able to solve to optimality. Bound heuristics initially limited the domains of the decision variables and of the objective functions, while search heuristics directed the search into regions of the space promising good solutions. These heuristics are described in detail in [6]. The declarative nature of OPL allowed a simple formulation of the scheduling model. The schedule recovery module itself is in fact a procedure, formulated in OPL Script, that controls the optimization models in a suitable way and constructs a "good enough" schedule within the limited time.

V. PROTOTYPE TESTING

Software system testing is traditionally carried out through verification and validation. Prototype verification and validation must be carried out according to the standard quality criteria for software: efficiency, reliability, usability, modifiability, portability, testability, reusability, maintainability, interoperability, and correctness. The first train rescheduling system prototype was assessed against all of these criteria. The specificity of the train rescheduling application makes it worth clarifying, in particular:

efficiency — the software must work within the envisaged time limit. This is a key property of the rescheduling system. Testing systems that must operate within a given time limit is extremely difficult. The idea is to reiterate each test example numerous times, and automatic testing is a good way to support this idea.

Fig. 1. Position of the schedule recovery module in a real information system
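The repeated-run idea for timing tests can be sketched as follows; `dummy_solver` is a hypothetical stand-in for the real schedule recovery procedure, and the helper names are illustrative.

```python
import statistics
import time

def mean_cpu_time(solve, instance, runs=10):
    """Run a rescheduling test instance several times and report the
    mean CPU time and its maximum deviation, as done when assessing
    the prototype's efficiency. `solve` stands in for the schedule
    recovery procedure."""
    times = []
    for _ in range(runs):
        t0 = time.process_time()          # CPU time, not wall-clock
        solve(instance)
        times.append(time.process_time() - t0)
    mean = statistics.mean(times)
    spread = max(abs(t - mean) for t in times)
    return mean, spread

# Usage with a dummy solver that just burns some CPU:
def dummy_solver(instance):
    sum(i * i for i in range(instance))

mean, spread = mean_cpu_time(dummy_solver, 50_000, runs=5)
print(f"mean CPU time {mean:.4f}s, max deviation {spread:.4f}s")
```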
The testing of the first train rescheduling system prototype was done manually, on a large number of input timetables with existing disturbances; each test example, however, was run repeatedly to determine the mean CPU time. It was found during testing that the variation of CPU time is not drastic (it stays within a small percentage of the mean value);

reusability — the property of the software that, as a whole or in parts, it may be used for the development of similar systems, thus increasing the development productivity of related systems. E.g., it is known that train rescheduling is at the core of railway-traffic systems dealing with: timetable preparation, determination of economically acceptable utilization of capacity intervals, estimation of the costs of stops and train tardiness due to operational reasons, forecasting the effects of investments, identification of infrastructure bottlenecks, choice of possible solutions of a conflict point, testing the arrangement and layout of block sections and signals along the railway line, etc. Standardization of software modules and of their communication is a good way to improve reuse;

modifiability — expresses how easily the software can be modified when user requirements change. E.g., it is reasonable to expect the set of constraints to be subject to change. The declarative nature of the CP approach, the separation between the constraint

component and the search component, and the available CP tools offer the programmer excellent possibilities for achieving the highest level of modifiability.

Fig. 2. Realized window of the user interface of the first train rescheduling system prototype: a graphic presentation of the recovered train schedule in the form of a modified Gantt chart (resources: Beograd Dunav, Pancevacki most, Krnjaca, Sebes; time t [s])

VI. ANALYSIS OF THE PROTOTYPE RESULTS

The testing of the first train rescheduling system prototype was carried out on a large number of input timetables with existing conflicts. Experiments were carried out on a fragment of the real railway network (a part of the Belgrade Railway Junction), with the actual train categories operating there, but with a traffic frequency immensely exceeding the real one. The jobs (trains) were "piled up" on purpose to test the endurance of the method. All seven relevant objective functions participated in the experiment. Each set of jobs suffering disturbances included trains of different categories and different movement directions. All experiments were run on a personal computer with an Intel(R) Pentium(R) 4 CPU. For example, the CPU time needed to generate the schedule shown in Fig. 2, with maximum tardiness as the objective function, was on the order of seconds. From the analysis of the experimental results the following conclusions may be drawn: the CPU time of schedule recovery depends on the number of activities and the number of conflicts; solving the initial conflicts may bring up additional conflicts; in most cases the time performance is satisfactory; the heuristic nature of the approach was demonstrated (in an insignificant number of cases the best-known solution for the given objective function was not found).
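A modified Gantt chart of the kind described above — resources touching on the y-axis and bars replaced by their diagonals — can be sketched with matplotlib. The station, track and train names below are illustrative, not the actual network data.

```python
import matplotlib
matplotlib.use("Agg")                     # render off-screen
import matplotlib.pyplot as plt

# Resources "touch" on the y-axis: resource i occupies the band [i, i+1].
# An activity of a train on resource i between times t1 and t2 is drawn
# as the diagonal of the corresponding bar; the slope (up/down) encodes
# the movement direction of the train.
resources = ["station A", "open line A-B", "station B"]   # illustrative
activities = [
    # (train, resource index, t1, t2, direction: +1 up / -1 down)
    ("train#1", 0,   0,  60, +1), ("train#1", 1,  60, 300, +1),
    ("train#1", 2, 300, 360, +1),
    ("train#2", 2, 100, 160, -1), ("train#2", 1, 160, 420, -1),
    ("train#2", 0, 420, 480, -1),
]

fig, ax = plt.subplots()
for train, r, t1, t2, d in activities:
    y = (r, r + 1) if d > 0 else (r + 1, r)
    ax.plot([t1, t2], y, label=train)     # diagonal instead of a bar
ax.set_yticks([i + 0.5 for i in range(len(resources))])
ax.set_yticklabels(resources)
ax.set_xlabel("time t [s]")
ax.set_title("Train diagram (modified Gantt chart)")
fig.savefig("train_diagram.png")
```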
Based on this analysis, we can formulate proposals whose acceptance should lead to an improved prototype.

VII. CONCLUSION

A major part of the theoretical research carried out during the past decades in the field of scheduling has limited application in real systems. Therefore, research must concentrate not only on the development of algorithms but also on the development of software systems. This paper is one such attempt. The rescheduling software system should enable the planner to produce a better-quality schedule faster. There are also other reasons for introducing automatic scheduling systems: the scheduling system requires advanced "discipline" from the other subsystems of the real information system, and it obliges and helps ensure the real process to develop according to the schedule. Experiments performed with the first train rescheduling system prototype, on a real railway network fragment, with a real traffic structure and possible disturbances, make us believe that the approach proposed in this paper may offer full support to railway operational management.

REFERENCES

[1] Cowling, P. and M. Johansson, "Using real time information for effective dynamic scheduling", European Journal of Operational Research, 139(2), pp. 230-244, 2002.
[2] Vieira, G. E., J. W. Herrmann and E. Lin, "Rescheduling manufacturing systems: a framework of strategies, policies and methods", Journal of Scheduling, 6, pp. 39-62, 2003.
[3] Norio, T., Y. Tashiro, T. Noriyuki, H. Chikara and M. Kunimitsu, "Train rescheduling algorithm which minimizes passengers' dissatisfaction", in Innovations in Applied Artificial Intelligence, Lecture Notes in Artificial Intelligence 3533, Springer Verlag, 2005.
[4] Isaai, M. T. and M. G. Singh, "An object-oriented, constraint-based heuristic for a class of passenger-train scheduling problems", IEEE Transactions on Systems, Man and Cybernetics, Part C: Applications and Reviews.
[5] Pinedo, M., Scheduling: Theory, Algorithms and Systems, Prentice Hall, 1995.
[6] Mladenović, S., M. Čangalović, D. Bečejski-Vujaklija and M. Marković, "Constraint programming approach to train scheduling on railway network supported by heuristics", 10th World Conference on Transport Research, CD of Selected and Revised Papers, Paper number 87, Abstract Book I, Istanbul, Turkey, 2004.
[7] ILOG OPL Studio.

Throughput Analysis of EDCA in Multimedia Environment

Blagoj Ilievski, Pero Latkoski and Borislav Popovski

Abstract — This paper addresses issues that arise when end-to-end QoS has to be guaranteed in today's pervasive heterogeneous wired-cum-wireless networks. The basic IEEE 802.11 standard for local area networks cannot cope with emerging multimedia services such as voice, data and video. Moreover, the wireless medium is very specific: there are no performance guarantees as in the wired medium, especially in the unlicensed spectrum. The new 802.11e MAC, based on both centrally controlled and contention-based channel access, provides the means for the needed QoS in such conditions. Here we analyze 802.11e's throughput performance and packet loss for different traffic types, compared with the basic standard, and their dependence on network conditions.

Keywords — IEEE 802.11e, Network Simulator, performance evaluation.

I. INTRODUCTION

IEEE 802.11 [1] introduces the DCF (Distributed Coordination Function) and the PCF (Point Coordination Function) at the MAC layer. DCF does not support a priority mechanism [2]; all packets are treated on a first-come-first-served basis. On the other hand, PCF supports several resource reservation methods. Although it can support some kinds of time-critical traffic, many inadequacies have been identified, such as the unknown transmission duration of a polled station, the difficulty of predicting the amount of frames a station wants to send, and the lack of a management interface for setting up and controlling PCF operation. We can summarize that DCF cannot provide QoS, and PCF is not capable enough [3]. The 802.11e standard is the response to the WLAN QoS issue. Due to the dynamic nature of these networks, it is impossible to apply QoS management techniques that negotiate quality between users and the network. Nevertheless, it is possible to increase the probability that certain classes of traffic get appropriate QoS.
There are two kinds of QoS:

Parameterized QoS — a strict QoS requirement expressed quantitatively in terms of data rate, delay bound, etc.

Prioritized QoS — a loose QoS requirement expressed in terms of relative delivery priority.

Different types of traffic have different requirements, but in the case of WLANs the most common are two: bounded delay (for real-time traffic) and jitter. On the other hand, a given traffic stream is described by its transmission rate (peak and average), service interval (minimum and maximum) and the burst size at the peak rate.

All authors are with the Faculty of Electrical Engineering and Information Technologies, Skopje, Macedonia.

The term QoS in a WLAN refers only to the MAC level. The new standard (IEEE 802.11e) defines stations with QoS support (QSTAs) and access points with QoS support (QAPs), different from the stations and access points defined in the original standard (IEEE 802.11). A new method is introduced to support the QoS requirements: the Hybrid Coordination Function (HCF). HCF has two main parts. One is HCF Controlled Channel Access (HCCA), for the Integrated Services requirement. The other is Enhanced Distributed Channel Access (EDCA/EDCF), for the Differentiated Services requirement. In other words, EDCF is responsible for the contention regime, while HCF handles the contention-free regime. While EDCF is appropriate for asynchronous data services, HCCA provides the means for time-bounded services. In the new IEEE 802.11e standard, the acknowledgement (ACK) of frames successfully sent by stations becomes non-obligatory; the MAC layer need not send an ACK frame after successfully receiving a data frame. This approach decreases reliability, but the overall traffic transmission efficiency (e.g., for VoIP) is improved.

II. THE EDCA FUNCTION

The EDCA function improves the basic DCF function by implementing priorities for different traffic classes.
EDCA defines four access categories (ACs), into which the traffic is further classified as 8 different traffic classes/user priorities (UPs). Traffic in the same class is considered to be of equal priority. Table 1 shows the mapping between access categories and user priorities.

Table 1. Traffic classes (TCs) in IEEE 802.11e

Priority   UP   802.1D designation   AC      Designation
Lowest     1    BK                   AC_BK   Background
           2    --                   AC_BK   Background
           0    BE                   AC_BE   Best Effort
           3    EE                   AC_BE   Best Effort
           4    CL                   AC_VI   Video
           5    VI                   AC_VI   Video
           6    VO                   AC_VO   Voice
Highest    7    NC                   AC_VO   Voice

Every AC differs in its parameter set and has its own queue. The parameter values determine the AC and the type of traffic. Three of the parameters are crucial in this standard:

CW (Contention Window) — a random number is drawn from this interval, or window, for the backoff mechanism;

AIFS (Arbitration Inter-Frame Space) — equal to DIFS plus a number of time slots. The value of AIFS differs for every traffic class, to enhance the differentiation based on class priority;

TXOP Limit — the maximum allowed transmission time of one QSTA. During this period, the medium belongs to that station.

A station may implement up to eight transmission queues, realized as virtual stations inside the station, with different QoS parameters that determine their priorities [4]. When two or more TCs in a single station start transmitting at the same time, a scheduler inside the station avoids the virtual collision. Before a station starts to transmit, the MAC layer classifies the traffic into the appropriate AC; every new MSDU frame is placed into the adequate AC queue. The frames from different categories compete for the EDCF-TXOP. The classes differ in the minimum contention window (CWmin) and the interframe space used for data transmission. A class with a smaller default contention window will generate shorter backoff intervals and, as a result, gains priority over a station with a larger CWmin [5].

The problem of traffic differentiation is solved by adding a field to the MAC header that describes the characteristics of the traffic (Table 3).

Table 3. IEEE 802.11e MAC header

octets:  2              2             6          6          6          2                 6          2             0-n         4
         Frame Control  Duration/ID   Address 1  Address 2  Address 3  Sequence Control  Address 4  QoS Control   Frame Body  FCS

There is an option in the IEEE 802.11e standard called packet bursting, or CFB (Contention Free Bursting). This feature improves the performance of smaller packets (time-bounded services) in WLANs [6]. CFB decreases the overhead, and in this way the delay is decreased and the throughput is increased.
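The classification of each MSDU into its access category by user priority, as given in Table 1, can be sketched as a small routine; the function name and the string labels for the queues are illustrative assumptions.

```python
# IEEE 802.11e mapping from 802.1D user priority (UP, 0..7) to the
# four EDCA access categories, as in Table 1.
UP_TO_AC = {
    1: "AC_BK", 2: "AC_BK",   # Background (lowest priority)
    0: "AC_BE", 3: "AC_BE",   # Best Effort
    4: "AC_VI", 5: "AC_VI",   # Video
    6: "AC_VO", 7: "AC_VO",   # Voice (highest priority)
}

def classify_msdu(user_priority):
    """Place an MSDU into the queue of its access category."""
    if user_priority not in UP_TO_AC:
        raise ValueError("user priority must be in 0..7")
    return UP_TO_AC[user_priority]

print(classify_msdu(6))   # AC_VO
print(classify_msdu(0))   # AC_BE
```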
A station with CFB enabled sends multiple small packets as a burst, without intermediate contention, as soon as it gains access to the medium. It is possible to send packets to different destinations in one burst. Between an ACK and the following packet only a time interval of SIFS (Short IFS) is required; therefore the station maintains control over the medium for the whole burst (not longer than the TXOP). Sending multiple small packets in a burst avoids contention for each single packet and increases efficiency. However, the medium access time might be increased, because packet bursts occupy the medium for a longer period; therefore the overall network jitter and delay may increase. By adjusting the parameters, especially the TXOP Limit, one can optimize the network's functioning.

Fig. 1. The EDCF function: per-station virtual stations (AC1-AC4) resolve virtual collisions internally before contending for the medium and communicating with the AP

In IEEE 802.11e EDCA, different ACs use different AIFS values and contention window sizes when contending for channel access. The value of AIFS depends on the AC, and the value of the aSlotTime parameter depends on the PHY layer used (in our case 802.11b). The number of backoff slots is a uniformly distributed random variable between 0 and CW-1. CW is the contention window, whose value lies between CWmin and CWmax. After each successful transmission, CW is reset to CWmin, and on each failed packet transmission the backoff procedure doubles the CW value until it reaches CWmax. In IEEE 802.11e the values of CWmin and CWmax are different for each AC and PHY layer (Table 2).

Table 2. Parameters for the three traffic types in IEEE 802.11e (columns: Type, AC, AIFS, CWmin, CWmax, TXOP Limit; rows: Voice, Video, Data)

The virtual station that wins the internal competition has the right to compete with the winners from the other stations for transmission over the medium.

III. SIMULATION RESULTS

The network is analyzed using NS (Network Simulator) [7].
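The contention-window rules just described — reset to CWmin after a success, doubling up to CWmax after a failure, and a uniform backoff draw — can be sketched as follows. The numeric CWmin/CWmax/AIFSN values below are the common 802.11b-based defaults, assumed here only for illustration.

```python
import random

# Per-AC EDCA parameters. The concrete numbers are the usual
# 802.11b-based defaults, given here as an illustrative assumption.
EDCA = {
    "AC_VO": {"aifsn": 2, "cwmin": 7,  "cwmax": 15},
    "AC_VI": {"aifsn": 2, "cwmin": 15, "cwmax": 31},
    "AC_BE": {"aifsn": 3, "cwmin": 31, "cwmax": 1023},
}

def next_cw(cw, ac, success):
    """Reset CW to CWmin after a success; double it (capped at CWmax)
    after a failed transmission. CW values have the form 2**k - 1,
    so 'doubling' is 2*cw + 1."""
    p = EDCA[ac]
    return p["cwmin"] if success else min(2 * cw + 1, p["cwmax"])

def backoff_slots(cw):
    """Uniformly distributed number of backoff slots in [0, CW-1]."""
    return random.randrange(cw)

# Two failures then a success for a best-effort frame:
cw = EDCA["AC_BE"]["cwmin"]                # 31
cw = next_cw(cw, "AC_BE", success=False)   # 63
cw = next_cw(cw, "AC_BE", success=False)   # 127
cw = next_cw(cw, "AC_BE", success=True)    # back to 31
print(cw, backoff_slots(cw))
```

A high-priority AC starts with a smaller CWmin, so it tends to draw shorter backoffs and win the medium, exactly the differentiation mechanism described above.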
In fact, this simulator is the most widely used simulator for the analysis of wireless networks. The scenario consists of one access point (AP) connected to a host via a switch and surrounded by six wireless stations (WS) (Fig. 2). We assume two directions of the traffic stream: from the station towards the server (uplink) and from the server towards the station (downlink). We shall analyze the uplink throughput performance.

Fig. 2. Simulated WLAN scenario

Table 4 contains the traffic types used and their parameters. The impact of the 802.11e standard on these three types of traffic will be discussed. All stations send packets with the CFB option enabled. In this scenario, two stations (5 and 6) generate and receive voice traffic (the highest priority, AC4), two stations (3 and 4) generate and receive video traffic, and stations 1 and 2 generate and receive data traffic. The AP transmits three types of traffic generated by the host towards the wireless nodes (downlink) and also receives the three categories of traffic from the wireless nodes (uplink). Voice and video traffic is assumed to be constant bit rate (CBR); data traffic is assumed to be FTP traffic.

Table 4. The three traffic types used in the simulation (Type / Agent-Application / Frame Size (bytes) / Data Rate (Mbps)): Voice — UDP/CBR, 3.3; Video — UDP/CBR, 8; Data — TCP/FTP, 536.

We measure the uplink throughput performance and its dependence on frame size, load, and the number of active stations in the network. The first parameter varied is the packet size. The network is loaded to 40% of its capacity.

Fig. 3. Voice traffic throughput vs. packet size

Fig. 5. Lost data packets vs. packet size

The simulation results show opposite behavior for voice and video traffic. The reason is the TXOP, which for voice is smaller than the TXOP for video traffic. Data traffic with the QoS option enabled is very sensitive to packet-size changes (Fig. 5). The next three figures give the dependence on the traffic-load parameter.

Fig. 6. Voice traffic throughput vs. load

Fig. 4.
Video traffic throughput vs. packet size

Fig. 7. Video traffic throughput vs. load

Fig. 8. Lost data packets vs. traffic load

When we discuss the load in the network, the 802.11e standard shows an improvement compared to the basic standard. For voice, the throughput does not change significantly (only a small difference in throughput between 50% and 90% load, Fig. 6). But the other two categories are very sensitive and deteriorate rapidly compared to the basic standard (Fig. 7, Fig. 8). The number of stations in the network is another important parameter: when the number of active stations is large, the probability of collision increases as well.

Fig. 9. VoIP traffic throughput vs. number of stations

Fig. 10. Video traffic throughput vs. number of stations

Fig. 11. Lost data packets vs. number of active stations

When the number of wireless stations in the network is increased, the 802.11e standard increases the probability that voice traffic wins the medium. Video and data traffic have no significant chance when the medium is shared by a very large number of stations (Fig. 9, Fig. 10, Fig. 11).

IV. CONCLUSION

Our simulator implements the new EDCF function. This function is an upgrade of the wireless stations and enables QoS support. By simulation we find that EDCF shows weakness for the low-priority categories of traffic: evidently, the high-priority categories dominantly occupy the medium. The throughput of the different services is very sensitive to changes in the network (number of stations, traffic load) and to the size of the packets. Also, the characteristics of the data traffic deteriorate compared to the basic standard. This implies a requirement for further improvements of the 802.11 MAC, to increase the quality of data traffic relative to the basic-standard performance.
REFERENCES

[1] IEEE Standard for Wireless LAN Medium Access Control and Physical Layer Specification, IEEE 802.11.
[2] P. Latkoski, Z. Hadzi-Velkov, B. Popovski, "Extended Model for Performance Analysis of Non-Saturated IEEE 802.11 DCF in Erroneous Channel", The Third IEEE International Conference on Mobile Ad-hoc and Sensor Systems (MASS 2006), Vancouver, Canada, October 2006.
[3] Qiang Ni, Lamia Romdhani, Thierry Turletti, "A Survey of QoS Enhancements for IEEE 802.11 Wireless LAN", Journal of Wireless Communications and Mobile Computing, Wiley, 2004.
[4] Sameer Mehta and Amit Goel, "Enhancing the 802.11 MAC to incorporate VoWLAN for seamless convergence", National Symposium on Electronics Technology, IEEE, Kurukshetra University, March 2004.
[5] Xiang Fan, "Quality of Service Control in Wireless Local Area Networks", M.Sc. thesis, University of Twente, November 2004.
[6] S. Wietholter, C. Hoene, A. Wolisz, "Perceptual Quality of Internet Telephony over IEEE 802.11e Supporting Enhanced DCF and Contention Free Bursting", Berlin, September 2004.
[7] The Network Simulator ns-2.

Software Tools and Technologies in Steganography

Julijana Mirčevski, Biljana Djokić, Milesa Srećković and Nikola Popović

Abstract — The paper presents a performance estimation of several available program tools for steganographic applications and of message embedding methods. The results are classified and illustrated by the tested procedures. Message creation, embedding and detection are demonstrated in our own program tool, which operates in the Visual Basic environment.

Keywords — steganography, invisible communication, message embedding, steganographic file detection, wavelet transform

I. INTRODUCTION

Steganography, as a way of invisible communication, is very present in current Internet communication. A number of steganographic applications are available, with various performance in the domain of message embedding, system requirements, encoding security and execution reliability. Databases of digital content require more complex software tools for searching, analysis, compression and reproduction. Wavelet theory provides fast discrete algorithms suitable for computer implementation. The clear mathematical theory is a good base for creating programming environments and a number of program packages. Wavelet transform applications are unthinkable without a software package; the most used are WaveLab, LastWave, MegaWave and the Rice Wavelet Toolbox. Because the mentioned software products belong to the freeware category, they may be used after legal registration.

Steganography hides the covert message, but not the fact that two parties are communicating with each other [1]. The steganography process generally involves placing a hidden message in some transport medium, called the carrier. The secret message is embedded in the carrier to form the steganography medium. A steganography key may be employed for encryption of the hidden message and/or for randomization in the steganography scheme.
In summary:

steganography_medium = hidden_message + carrier + steganography_key

The following figure presents a scheme of steganography systematization according to [2]; only the main items are explained in the text.

Julijana Mirčevski, Informatička škola Educon, Beograd; Biljana Djokić, Informatička škola Educon, Beograd, djokicb@eunet.yu; Milesa Srećković, Elektrotehnički fakultet, Beograd; Nikola Popović, Ministarstvo inostranih poslova, Beograd

Fig. 1. Classification of Steganography Techniques

- Technical steganography uses scientific methods to hide a message, such as the use of invisible ink or microdots and other size-reduction methods.
- Linguistic steganography hides the message in the carrier in some nonobvious way and is further categorized as semagrams or open codes.
- Semagrams hide information by the use of symbols or signs. A visual semagram uses innocent-looking or everyday physical objects to convey a message, such as doodles or the positioning of items on a desk or Web site. A text semagram hides a message by modifying the appearance of the carrier text, such as subtle changes in font size or type, adding extra spaces, or different flourishes in letters or handwritten text.
- Open codes hide a message in a legitimate carrier message in ways that are not obvious to an unsuspecting observer. The carrier message is sometimes called the overt communication, whereas the hidden message is the covert communication. This category is subdivided into jargon codes and covered ciphers.
- Jargon code, as the name suggests, uses language that is understood by a group of people but is meaningless to others. Jargon codes include warchalking (symbols used to indicate the presence and type of wireless network signal [3]), underground terminology, or an innocent conversation that conveys special meaning because of facts known only to the speakers. A subset of jargon codes is cue codes, where certain prearranged phrases convey meaning.
- Covered or concealment ciphers hide a message openly in the carrier medium, so that it can be recovered by anyone who knows the secret of how it was concealed. A grille cipher employs a template that is used to cover the carrier message.
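As a toy illustration of the grille idea (this example is the editor's own, not from the paper): the template is simply the set of character positions that the grille's holes expose when it is laid over the carrier text.

```python
def grille_read(carrier: str, template: list) -> str:
    # The template lists which character positions the grille's holes expose.
    return "".join(carrier[i] for i in template)

# Hypothetical carrier text and hole positions, chosen for illustration only.
carrier = "meet annie tomorrow at the old oak noon"
holes = [0, 1, 2, 3, 4, 11, 12, 13, 14, 15, 16, 17, 18, 34, 35, 36, 37, 38]
print(grille_read(carrier, holes))  # -> meet tomorrow noon
```

Anyone holding the same template recovers the message; without it, the carrier reads as an innocent note.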

II. FEATURES AND APPLICATIONS

Data-hiding techniques should be capable of embedding data in a host signal [3] with the following restrictions and features:
1. The host signal should not be objectionably degraded and the embedded data should be minimally perceptible. (The goal is for the data to remain hidden. As any magician will tell you, it is possible for something to be hidden while it remains in plain sight; you merely keep the person from looking at it. We use the words hidden, inaudible, imperceivable and invisible to mean that an observer does not notice the presence of the data, even if they are perceptible.)
2. The embedded data should be directly encoded into the media, rather than into a header or wrapper, so that the data remain intact across varying data file formats.
3. The embedded data should be immune to modifications ranging from intentional and intelligent attempts at removal to anticipated manipulations such as channel noise, filtering, resampling, cropping, encoding, lossy compression, printing and scanning, digital-to-analog (D/A) conversion, analog-to-digital (A/D) conversion and others.
4. Asymmetrical coding of the embedded data is desirable, since the purpose of data hiding is to keep the data in the host signal, but not necessarily to make the data difficult to access.
5. Error-correction coding should be used to ensure data integrity. It is inevitable that there will be some degradation of the embedded data when the host signal is modified.
6. The embedded data should be self-clocking or arbitrarily re-entrant [3], [4]. This ensures that the embedded data can be recovered when only fragments of the host signal are available; e.g., if a sound bite is extracted from an interview, data embedded in the audio segment can be recovered. This feature also facilitates automatic decoding of the hidden data, since there is no need to refer to the original host signal.
Trade-offs exist between the quantity of embedded data and the degree of immunity to host-signal modification. By constraining the degree of host-signal degradation, a data-hiding method can operate with either a high embedded data rate or high resistance to modification, but not both: as one increases, the other must decrease. While this can be shown mathematically for some data-hiding systems, such as spread spectrum, it seems to hold true for all of them. In any system, bandwidth can be traded for robustness by exploiting redundancy. The quantity of embedded data and the degree of host-signal modification vary from application to application; consequently, different techniques are employed for different applications [5]. Several prospective applications of data hiding are discussed in this section. An application that requires a minimal amount of embedded data is the placement of a digital watermark. The embedded data are used to place an indication of ownership in the host signal, serving the same purpose as an author's signature or a company logo. Since the information is of a critical nature and the signal may face intelligent and intentional attempts to destroy or remove it, the coding techniques used must be immune to a wide variety of possible modifications. A second application for data hiding is tamper-proofing, used to indicate that the host signal has been modified from its authored state: modification of the embedded data indicates that the host signal has been changed in some way. A third application, feature location, requires more data to be embedded. In this application the embedded data are hidden in specific locations within an image, enabling one to identify individual content features, e.g., the name of the person on the left versus the right side of an image. Typically, feature-location data are not subject to intentional removal.
However, it is expected that the host signal might be subjected to a certain degree of modification; e.g., images are routinely modified by scaling, cropping and tone-scale enhancement. As a result, feature-location data-hiding techniques must be immune to geometrical and other signal modifications.

III. DIGITAL CARRIER METHODS

There are many ways to hide messages in digital media. The most common steganography method in audio and image files employs some type of least-significant-bit substitution or overwriting. The term least significant bit comes from the numeric significance of the bits in a byte. The high-order or most significant bit is the one with the highest arithmetic value (i.e., 2^7 = 128), whereas the low-order or least significant bit is the one with the lowest arithmetic value (i.e., 2^0 = 1). As a simple example of least-significant-bit substitution, imagine "hiding" the character 'G' across eight bytes of a carrier file. A 'G' is represented in the American Standard Code for Information Interchange (ASCII) as the binary string 01000111. These eight bits can be "written" to the least significant bit of each of the eight carrier bytes. In such an example, typically only about half of the least significant bits actually change, which makes sense when one set of zeros and ones is being substituted for another. Least-significant-bit substitution can be used to overwrite legitimate RGB color encodings or palette pointers in GIF and BMP files, coefficients in JPEG files, and pulse-code-modulation levels in audio files. By overwriting the least significant bit, the numeric value of the byte changes very little and the change is least likely to be detected by the human eye or ear. Least-significant-bit substitution is a simple, albeit common, technique for steganography. Its use, however, is not necessarily as simplistic as the method sounds.
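The scheme just described can be sketched in a few lines. The following Python sketch is illustrative only and is not taken from any of the tools discussed in the paper; the function names are the editor's own, and the key-seeded shuffle of byte positions stands in for the randomization of modified bits that most real tools apply.

```python
import random

def embed_lsb(carrier: bytearray, message: bytes, key: int) -> None:
    """Hide `message` in the least significant bits of `carrier` (a sketch)."""
    bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
    positions = list(range(len(carrier)))
    random.Random(key).shuffle(positions)           # stego "key" picks the order
    if len(bits) > len(positions):
        raise ValueError("carrier too small")
    for bit, pos in zip(bits, positions):
        carrier[pos] = (carrier[pos] & 0xFE) | bit  # overwrite only the LSB

def extract_lsb(carrier: bytes, nbytes: int, key: int) -> bytes:
    positions = list(range(len(carrier)))
    random.Random(key).shuffle(positions)           # same key, same order
    bits = [carrier[pos] & 1 for pos in positions[:nbytes * 8]]
    out = bytearray()
    for i in range(0, len(bits), 8):
        byte = 0
        for b in bits[i:i + 8]:
            byte = (byte << 1) | b
        out.append(byte)
    return bytes(out)

carrier = bytearray(range(64))       # 64 dummy carrier bytes
embed_lsb(carrier, b"G", key=1234)
assert extract_lsb(bytes(carrier), 1, key=1234) == b"G"
```

Each carrier byte changes by at most 1, which is why the alteration is imperceptible in 8-bit image or audio samples.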
Only the most naive steganography software would merely overwrite every least significant bit with hidden data. Almost all use

some sort of means to randomize which bits in the carrier file are actually modified. This is one of the factors that make steganography detection so difficult. All steganographic methods try to keep the amount of modification minimal, so as to minimize visible changes and avoid introducing detectable artifacts. However, if the cover image was initially stored in JPEG format (as is frequently the case), message embedding in the spatial domain will disturb, but not erase, the characteristic structure created by JPEG compression. It is possible to recover the JPEG quantization table from the stego image by carefully analyzing the values of the DCT coefficients in all blocks. After message embedding, the cover image will, with high probability, differ from a pure image and no longer be fully compatible with the JPEG format; this can indicate a steganographic file.

IV. STEGANOGRAPHIC SOFTWARE TOOLS

There are a number of steganographic software tools and techniques available on the internet today [2]. They range from freeware to high-priced commercial products. Most of them are creations of amateur enthusiasts, available for free, while others are products of private companies and can be purchased for a small fee. A very important function of steganography detection software is to find possible carrier files. Ideally, the detection software would also provide some clues as to the steganography algorithm used to hide information in the suspect file, so that the analyst might be able to attempt recovery of the hidden information. The detection of steganography software continues to become harder for another reason: the small size of the software coupled with the increasing storage capacity of removable media.
S-Tools, for example, requires less than 600 KB of disk space and can be executed directly, without additional installation, from a floppy disk or USB memory key. Under those circumstances, no remnants of the program would be found on the hard drive. Below, some small and powerful software tools are presented.

ChinCrypt is a small, easy-to-use text-mode program for encrypting data. It can encrypt a text file, an executable file, an image file, etc., and works under Windows, Linux and Unix.

Gifshuffle is a command-line-only program for Windows which conceals messages in GIF images by shuffling the colour map. The picture remains visibly intact; only the order of colors within the palette changes. It works with all GIF images, including those with transparency and animation, and in addition provides compression and encryption of the concealed message. Gifshuffle is freeware and requires only 33 kB.

JPegX is an encryption program that hides important information inside standard JPEG image files. The image is left visually unchanged, and messages are encrypted and password-protected. To decrypt the message, one needs to open the JPEG file that holds it and enter the password if prompted. JPegX is 8 kB in size and freeware.

Shadow is a powerful data encoder/decoder with the ability to encode/decode anything that fits on a computer hard disk drive: texts, pictures, movies, music, applications and so on. Shadow requires Windows, and the program size is about 56 kB.

Hide4PGP is a command-line steganographic program for Windows, DOS, OS/2 and Linux that hides data within BMP, WAV and VOC files. It is designed to be used with both PGP and Stealth, but also works well as a standalone program. The current version
has several new features, including a new stego format which is much more robust against format conversions: only lossy compression will destroy the hidden data. The source code is also included and should compile on any platform without major problems. Hide4PGP is 4 kB in size and freeware.

V. THE HIDDEN-MESSAGE EMBEDDING APPROACH IN AN IMAGE

The practical steganography implementation was carried out with a program realized in the Visual Basic 6 (VB6) environment. The program is a very simple application, with no intention of demonstrating high programming efficiency, and in that sense it is not comparable with the well-known steganographic tools. Its significant characteristics are simple implementation, wide possibilities for modification and very simple use [6]. The program was written as a practical approach to understanding the fundamental process of writing a hidden message into an image. The following image contains the hidden message: "Breza do hramot, osvetlena od sveki livčinja raga."

Fig. 2. The JPEG image file containing the hidden message

Every pixel of the image has an assigned color value. The text is embedded in the image by changing these color values, using the VB6 functions SetPixel and GetPixel. The approach begins by opening a new form in the VB6 environment [6], [7]. A text box and a list box are placed on the form and named Textprvi and Listaprva. A picture box named Prvaslika is then added; its ScaleMode property must be set to pixels and its AutoRedraw property to True. A Common Dialog (ActiveX) control is also added to the program. Image reading is handled by the authors' own procedure, and commands are defined for image decoding, encoding, writing and reading. The image is added at design time through the Properties window. Both JPEG and bitmap files were used during the testing procedures.
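The paper does not reproduce the VB6 code itself, so the following Python sketch only illustrates the same idea: reading a pixel value, replacing the least significant bit of one color channel with a message bit, and writing it back. A list of RGB tuples stands in for the picture box, and the two helper functions stand in for the GetPixel/SetPixel calls; all names are the editor's own.

```python
def get_pixel(image, x, y):
    return image[y][x]          # stands in for the VB6 GetPixel call

def set_pixel(image, x, y, rgb):
    image[y][x] = rgb           # stands in for the VB6 SetPixel call

def embed_text(image, width, text):
    """Write the bits of `text` into the blue-channel LSB, pixel by pixel."""
    bits = [(ch >> i) & 1 for ch in text.encode("utf-8") for i in range(7, -1, -1)]
    for n, bit in enumerate(bits):
        x, y = n % width, n // width
        r, g, b = get_pixel(image, x, y)
        set_pixel(image, x, y, (r, g, (b & 0xFE) | bit))

def extract_text(image, width, nchars):
    bits = []
    for n in range(nchars * 8):
        x, y = n % width, n // width
        bits.append(get_pixel(image, x, y)[2] & 1)
    data = bytearray()
    for i in range(0, len(bits), 8):
        byte = 0
        for b in bits[i:i + 8]:
            byte = (byte << 1) | b
        data.append(byte)
    return data.decode("utf-8")

# A 16x16 all-gray "image" used as the carrier.
img = [[(128, 128, 128) for _ in range(16)] for _ in range(16)]
embed_text(img, 16, "Breza")
assert extract_text(img, 16, 5) == "Breza"
```

Visually the carrier is unchanged, but a byte-level comparison of the pure and stego images reveals the differences, which is exactly what the hexadecimal-file comparison described next demonstrates.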
A separate program module was written to convert the picture file into a hexadecimal data file. Such hexadecimal files are produced for both the pure image and the steganographic image; comparing the two files clearly shows their differences, because the hexadecimal code of the steganographic image contains the secret information [7]. Although even a trained eye would not notice the difference visually, embedding changes the statistical properties of the pixel values between the before and after images, and the hexadecimal data files differ accordingly.

Table 1. The data hexadecimal file contents for the steganographic image and the pure image

The program code is written in the code window. The user interface is designed to be very simple, providing only the necessary functions; it is shown in Fig. 3.

VI. CONCLUSION

Invisible communication is very much present in today's internet virtual communication.
Two important uses of data hiding in digital media are to provide proof of copyright and assurance of content integrity. Other applications of data hiding, such as the inclusion of augmentation data, need not be invariant to detection or removal, since these data are there for the benefit of both the author and the content consumer. Thus, the techniques used for data hiding vary depending on the quantity of data being hidden and the required invariance of those data to manipulation [8]. Since no one method is capable of achieving all these goals, a class of processes is needed to span the range of possible applications. The technical challenges of data hiding are formidable. Any holes to fill with data in a host signal, either statistical or perceptual, are likely targets for removal by lossy signal compression. The key to successful data hiding is finding the holes that are not suitable for exploitation by compression algorithms.

Fig. 3. User interface during message embedding

REFERENCES
[1] Ames Laboratory, "Finding Computer Files Hidden in Plain Sight", May 4, 2006.
[2] G. C. Kessler, "An Overview of Steganography for the Computer Forensics Examiner", Forensic Science Communications, July 2004, Vol. 6, No. 3.
[3] F. L. Bauer, Decrypted Secrets: Methods and Maxims of Cryptology, 3rd ed., Springer-Verlag, New York, 2002.
[4] D. Artz, "Digital Steganography: Hiding Data within Data", IEEE Internet Computing, Vol. 5, No. 3, 2001.
[5] H. Farid, "Detecting Steganographic Messages in Digital Images", Technical Report TR2001-412, Dartmouth College, Computer Science Department, 2001.
[6] J. Mirčevski, B. Djokić, N. Popović, "Moderne softverske tehnike u prepoznavanju proskribovanih kompjuterskih sadržaja" (Modern software techniques for recognizing proscribed computer content), II Konferencija ZITEH, Tara, November 2006.
[7] J. Mirčevski, B. Djokić, N. Popović, "The wavelet transform based software suitable for a digital contents analyze", V naučno-stručni skup "Nove tehnologije i standardi: digitalizacija nacionalne baštine",
June, Beograd.
[8] Steganography, hiding text in images.

SESSION CS&R: Control Systems & Robotics


Cascade Synchronization of Chaotic Systems on the Basis of Linear-Nonlinear Decomposition
Dragomir P. Chantov

Abstract: In this paper a method for cascade synchronization of three or more chaotic systems is proposed. The method is based on the so-called Linear-Nonlinear decomposition of the systems. The advantage of this approach lies in the possibility of exact analysis of the stability of the synchronization manifold, because the error systems are always linear. Results of applying the method to a well-known continuous chaotic system (the Chua system) are presented.

Keywords: Chaotic synchronization, Cascade synchronization, Linear-Nonlinear decomposition, One-way coupling.

I. INTRODUCTION

One of the most specific and at the same time most interesting fields of nonlinear dynamics is chaos theory. During the last 17 years (after 1990) a great effort has been devoted to two main trends in chaos theory: the synchronization of chaotic systems and the control of these systems [1]. The primary practical benefit of chaotic synchronization is that this phenomenon can be used in secure communications to protect (hide) the transmitted information from unauthorized access [2]. Cascade synchronization can be considered a subfield of chaos synchronization, in which three or more chaotic systems have to be synchronized in such a way that they evolve identically, yet chaotically, in the phase space. Different methods for synchronization of chaotic systems, and correspondingly for cascade synchronization of such systems, exist [3]. Since all chaotic systems are nonlinear, there is no universal synchronization method that can always be applied and that guarantees synchronization for a particular system.
Therefore the search for new approaches, or for modifications of existing ones that overcome some of their drawbacks or limitations, has continued in recent years. In this paper the author proposes a modification of the linear-nonlinear decomposition (LND) method for synchronization of chaotic systems, in which an auxiliary driving signal, proportional to the difference function, is introduced into the Slave systems. Thus the main limitation of the LND method (only one coupling variant, which for most known chaotic systems does not guarantee stable synchronization) is overcome by a great variety of auxiliary couplings, which increase the chance of achieving stable synchronization. At the same time the proposed approach, called the modified linear-nonlinear decomposition (MLND) method, retains the main advantage of the LND method: the possibility of exact analysis of the stability of the synchronization manifold. This is due to the fact that the difference system(s), in contrast to all other synchronization methods, is always linear. The proposed method was previously tested on simple synchronization of two chaotic systems. In this paper the MLND method is applied to achieve serial cascade synchronization of three or more chaotic systems; results for the Chua chaotic system are presented.

II. MODIFIED LINEAR-NONLINEAR DECOMPOSITION METHOD FOR CASCADE CHAOTIC SYNCHRONIZATION

A. Cascade synchronization of chaotic systems

Chaotic synchronization is a phenomenon in which two (most frequently identical) chaotic systems tune their dynamics to each other and evolve identically in the phase space. This phenomenon can be used in communications to secure the transmitted information [2]. For more complex communication systems the receiver and transmitter can be arrays of three or more cascade-coupled chaotic systems, or there can be one or more mediator chaotic systems between the transmitter and receiver.
This fact motivates the development of the subfield of cascade chaotic synchronization. In the most frequent type of cascade synchronization, three or more identical chaotic systems are coupled sequentially, and a proper coupling is sought that achieves stable synchronization between the systems. The first system is called the Master system; the second one is a Slave (Slave1), but also a Master for the next system in the chain; the third system is Slave2, and so on. For the most common case of three chaotic systems, the systems can be defined as follows:

Master: $\dot{x} = f(x,t)$, (1)

Slave1: $\dot{\tilde{x}}_1 = \tilde{f}_1(\tilde{x}_1, x, t)$, (2)

Slave2: $\dot{\tilde{x}}_2 = \tilde{f}_2(\tilde{x}_2, \tilde{x}_1, t)$, (3)

Dragomir P. Chantov is with the Faculty of Electrical Engineering, Department of Automation, Information and Control Technics, 4 Hadji Dimitar Str., 5300 Gabrovo, Bulgaria.

where $x \in R^{n_1}$, $\tilde{x}_1 \in R^{n_2}$, $\tilde{x}_2 \in R^{n_3}$, and the initial conditions are $x(t_0) \neq \tilde{x}_1(t_0) \neq \tilde{x}_2(t_0)$. For $n_1 = n_2 = n_3$ and

$f = \tilde{f}_1 = \tilde{f}_2$ the systems (1)-(3) are identical, which is the most common case. System (2) is in fact an intermediary system for the synchronization of systems (1) and (3). For identical systems, which are considered in this paper, the three systems are said to be synchronized if

$\lim_{t\to\infty} e_a(t) = \lim_{t\to\infty} e_b(t) = 0$, (4)

where

$e_a(t) = x(t, t_0, x(t_0)) - \tilde{x}_1(t, t_0, \tilde{x}_1(t_0))$, (5)

$e_b(t) = \tilde{x}_1(t, t_0, \tilde{x}_1(t_0)) - \tilde{x}_2(t, t_0, \tilde{x}_2(t_0))$ (6)

are the difference functions between the solutions of the first and second (5), and of the second and third (6) systems. The eventual synchronization can also be illustrated directly by observing the difference function between the first and third systems:

$e_c(t) = x(t, t_0, x(t_0)) - \tilde{x}_2(t, t_0, \tilde{x}_2(t_0))$. (7)

The systems (1)-(3) will be synchronized if

$\lim_{t\to\infty} e_c(t) = 0$, (8)

but in general it is possible for systems (1)-(2) to achieve marginal synchronization, and for systems (2)-(3) a marginal synchronization reciprocal to the first, so that condition (8) is fulfilled without the fulfillment of Eq. (4).

B. Modified linear-nonlinear decomposition (MLND) method for cascade chaotic synchronization

One little-known decomposition method for the synchronization of two identical chaotic systems is the linear-nonlinear decomposition method [4]. The essence of the method is the formal decomposition of the Master system into linear and nonlinear parts:

Master: $\dot{x}(t) = f(x,t) = Ax(t) + h(x(t), t)$, (9)

where $Ax(t)$ is the linear part and $h(x(t),t)$ the nonlinear part of $f(x,t)$. The Slave system is then constructed so that it is driven by the nonlinear part of Eq. (9):

Slave: $\dot{\tilde{x}}(t) = \tilde{f}(\tilde{x}, x, t) = A\tilde{x}(t) + h(x(t), t)$. (10)

Subtracting Eq. (10) from Eq. (9) one gets the error (difference) system:

$\dot{e}(t) = \dot{x}(t) - \dot{\tilde{x}}(t) = A(x(t) - \tilde{x}(t)) = Ae(t)$.
(11)

The eventual synchronization between systems (9) and (10) will be stable if $\lim_{t\to\infty} e(t) = 0$, i.e. if the point $e = 0$ of the error system (11) is stable. Since Eq. (11) is a linear system, this analysis is easy to carry out (stability follows from the signs of the real parts of the eigenvalues of A), which is the main advantage of the LND method. However, this method has one major limitation: it offers only one coupling variant for systems (9) and (10), and there is no guarantee that this variant will give stable synchronization. The more coupling variants a synchronization method makes available, the greater the chance of obtaining stable synchronization. The author suggests the addition of a second coupling, proportional to the error function:

Master: $\dot{x}(t) = Ax(t) + h(x(t), t)$, (12)

Slave: $\dot{\tilde{x}}(t) = A\tilde{x}(t) + h(x(t), t) + \alpha E(x(t) - \tilde{x}(t))$, (13)

where $\alpha$ and $E$ are the coupling gain and the coupling matrix, which define the exact form of the coupling. Without loss of generality one can choose the so-called standard one-way coupling, in which the connecting nonzero element lies on the main diagonal of $E$. The error system

$\dot{e}(t) = \dot{x}(t) - \dot{\tilde{x}}(t) = A(x(t) - \tilde{x}(t)) - \alpha E(x(t) - \tilde{x}(t)) = (A - \alpha E)e(t)$ (14)

is again linear; thus, retaining the advantage of the LND method, one can now choose among a great number of coupling variants. This concept can be applied to cascade chaos synchronization. In this paper, without loss of generality, cascade synchronization of three identical chaotic systems is considered. When the modified linear-nonlinear decomposition coupling is applied, the Master, Slave1 and Slave2 systems are:

Master: $\dot{x}(t) = Ax(t) + h(x(t), t)$, (15)

Slave1: $\dot{\tilde{x}}_1(t) = A\tilde{x}_1(t) + h(x(t), t) + \alpha_1 E_1(x(t) - \tilde{x}_1(t))$, (16)

Slave2: $\dot{\tilde{x}}_2(t) = A\tilde{x}_2(t) + h(x(t), t) + \alpha_2 E_2(\tilde{x}_1(t) - \tilde{x}_2(t))$, (17)

where in general $\alpha_1 \neq \alpha_2$ and/or $E_1 \neq E_2$. The two error systems are:

$\dot{e}_a(t) = \dot{x}(t) - \dot{\tilde{x}}_1(t) = (A - \alpha_1 E_1)e_a(t)$, (18)

$\dot{e}_b(t) = \dot{\tilde{x}}_1(t) - \dot{\tilde{x}}_2(t) = (A - \alpha_2 E_2)e_b(t)$. (19)

Both systems (18) and (19) are linear, so when designing the two couplings one can easily prove the stability of each of them.

C. Application of the MLND method to particular chaotic systems

Since most of the known chaotic systems are continuous, some 75% of them being of third order, here the results of applying the modified linear-nonlinear decomposition method

for cascade synchronization of one of the well-known third-order systems are presented. The model of Chua's chaotic electronic circuit is described by the following equations:

$\dot{x}_1 = \sigma[x_2 - (1+b)x_1 - f_{nl}(x_1)]$,
$\dot{x}_2 = x_1 - x_2 + x_3$, (20)
$\dot{x}_3 = -\beta x_2$,

where $\beta = 14.87$ and $b = -0.68$. The only nonlinearity is $f_{nl}(x_1) = \frac{a-b}{2}(|x_1+1| - |x_1-1|)$ with $a = -1.27$. The typical chaotic attractor of the system, for the initial conditions used, is shown in Fig. 1.

Fig. 1. Chua's attractor

If Eq. (20) is considered as a Master system, it can be decomposed in the form of Eq. (9), where:

$A = \begin{bmatrix} -\sigma(1+b) & \sigma & 0 \\ 1 & -1 & 1 \\ 0 & -\beta & 0 \end{bmatrix}$, $h(x(t),t) = \begin{bmatrix} -\sigma f_{nl}(x_1) \\ 0 \\ 0 \end{bmatrix}$. (21)

Two of the eigenvalues of $A$ have positive real parts, so the synchronization manifold will be unstable, i.e. the basic LND method can be applied neither for plain nor for cascade synchronization. One of the variants of the MLND method (15)-(17) will be shown. Let the Master system be described by Eq. (20) and the two Slave systems be constructed as follows:

Slave1:
$\dot{\tilde{x}}_{11} = \sigma[\tilde{x}_{12} - (1+b)\tilde{x}_{11} - f_{nl}(x_1)]$,
$\dot{\tilde{x}}_{12} = \tilde{x}_{11} - \tilde{x}_{12} + \tilde{x}_{13} + \alpha_1(x_2 - \tilde{x}_{12})$, (22)
$\dot{\tilde{x}}_{13} = -\beta\tilde{x}_{12}$,

Slave2:
$\dot{\tilde{x}}_{21} = \sigma[\tilde{x}_{22} - (1+b)\tilde{x}_{21} - f_{nl}(x_1)] + \alpha_2(\tilde{x}_{11} - \tilde{x}_{21})$,
$\dot{\tilde{x}}_{22} = \tilde{x}_{21} - \tilde{x}_{22} + \tilde{x}_{23}$, (23)
$\dot{\tilde{x}}_{23} = -\beta\tilde{x}_{22}$.

The additional coupling between the Master and Slave1 systems is obtained by applying the second variant of the standard one-way coupling (OW2) with gain $\alpha_1$; the additional coupling between the Slave1 and Slave2 systems is OW1 with gain $\alpha_2$. The matrices of the linear error systems (18) and (19) for the chosen coupling schemes are:

$A - \alpha_1 E_1 = \begin{bmatrix} -\sigma(1+b) & \sigma & 0 \\ 1 & -1-\alpha_1 & 1 \\ 0 & -\beta & 0 \end{bmatrix}$, (24)

$A - \alpha_2 E_2 = \begin{bmatrix} -\sigma(1+b)-\alpha_2 & \sigma & 0 \\ 1 & -1 & 1 \\ 0 & -\beta & 0 \end{bmatrix}$, (25)

with the corresponding eigenvalues (26) and (27). Since all real parts of the eigenvalues (26) and (27) are negative, the necessary conditions for synchronization stability between each pair of systems are fulfilled. The simulation with Matlab/Simulink confirms the synchronization. The errors $e_{ia} = x_i - \tilde{x}_{1i}$ and $e_{ib} = \tilde{x}_{1i} - \tilde{x}_{2i}$ are shown in Fig. 2.
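The construction (20)-(23) can be checked numerically without Simulink. The Python sketch below integrates a Master and two cascaded Slave Chua systems with a hand-written RK4 step. It is illustrative only: the printed value of $\sigma$ and the gains did not survive in the source, so $\sigma = 10$ and $\alpha_1 = \alpha_2 = 2$ are assumed here, and for simplicity the OW2-type coupling is used on both links rather than the paper's OW2/OW1 pair. Because the slaves are driven by the Master's nonlinearity, the error $e_a$ obeys the linear system (18) exactly and decays regardless of the chaotic trajectory.

```python
# Cascade MLND synchronization of three Chua systems (illustrative sketch).
# ASSUMED parameters: sigma = 10 (not legible in the source), alpha1 = alpha2 = 2,
# and OW2 coupling (second state variable) on BOTH links for simplicity.
SIGMA, BETA, A_SLOPE, B_SLOPE = 10.0, 14.87, -1.27, -0.68
ALPHA = 2.0

def f_nl(x1):
    # Chua-diode nonlinearity; its linear part is absorbed into the system matrix
    return 0.5 * (A_SLOPE - B_SLOPE) * (abs(x1 + 1) - abs(x1 - 1))

def chua(x, drive_x1, coupling):
    # drive_x1: master state feeding the nonlinearity; coupling: alpha*(ref - x2)
    return [SIGMA * (x[1] - (1 + B_SLOPE) * x[0] - f_nl(drive_x1)),
            x[0] - x[1] + x[2] + coupling,
            -BETA * x[1]]

def deriv(s):
    x, y, z = s[0:3], s[3:6], s[6:9]           # master, slave1, slave2
    dx = chua(x, x[0], 0.0)
    dy = chua(y, x[0], ALPHA * (x[1] - y[1]))  # OW2 link: master -> slave1
    dz = chua(z, x[0], ALPHA * (y[1] - z[1]))  # OW2 link: slave1 -> slave2
    return dx + dy + dz

def rk4_step(s, dt):
    k1 = deriv(s)
    k2 = deriv([si + 0.5 * dt * ki for si, ki in zip(s, k1)])
    k3 = deriv([si + 0.5 * dt * ki for si, ki in zip(s, k2)])
    k4 = deriv([si + dt * ki for si, ki in zip(s, k3)])
    return [si + dt / 6.0 * (a + 2 * b + 2 * c + d)
            for si, a, b, c, d in zip(s, k1, k2, k3, k4)]

state = [0.1, 0.0, -0.1,  0.6, 0.5, 0.4,  -0.4, -0.5, -0.6]
err0 = max(abs(state[i] - state[i + 6]) for i in range(3))   # |e_c(0)|
for _ in range(15000):                                       # 30 s at dt = 0.002
    state = rk4_step(state, 0.002)
err = max(abs(state[i] - state[i + 6]) for i in range(3))
print(err0, err)   # the final cascade error is many orders of magnitude smaller
```

With these assumed gains the error matrix $A - \alpha E$ is Hurwitz, so all three trajectories collapse onto one chaotic orbit; the printed transient times in Table I refer to the paper's own gains, not to this sketch.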
After a period of approximately 3 seconds the three systems, each started from different initial conditions, are completely synchronized, while the chaotic nature of their evolution is retained. One can confirm the cascade synchronization also by observing the error functions between the Master and Slave2 systems, $e_{ic} = x_i - \tilde{x}_{2i}$, or by viewing the evolution in the state space $(e_{1c}, e_{2c}, e_{3c})$; the latter is shown in Fig. 3. After the transient period the error system between the first and third systems is stabilized at the origin (0,0,0), i.e. there is identical synchronization between the first and the third system. Generally it is arguable whether this automatically means that there is also identical synchronization between the first and second, and between the second and third chaotic systems. In some cases it is possible for the first pair of systems to exhibit marginal synchronization, where the error after achieving synchronization is a nonzero constant depending on the initial conditions. Very unlikely, but not impossible in general, the second pair of chaotic systems can also exhibit marginal synchronization, where the error stabilizes at the same constant with opposite sign. Then the Master and Slave2 systems will exhibit identical synchronization without the presence of identical synchronization between the Master and Slave1, and between the Slave1 and Slave2 systems. The influence of the coupling gain $\alpha_i$ was also investigated. For the OW1 and OW2 couplings of the Chua system, increasing $\alpha_i$ shortens the transient; e.g., for increased $\alpha_1 = \alpha_2$ the cascade synchronization is almost two times faster. However, no general conclusion about the influence of the coupling gain can be drawn. For other

chaotic systems, or even for the OW3 coupling of the Chua system, increasing $\alpha_i$ leads to loss of synchronization. It is also not advisable to choose a very large gain constant, because this increases the influence of the noise which is always present in the synchronization channel and is therefore injected by the couplings into the Slave systems.

Fig. 2. Error functions: a - $e_{1a}$, $e_{1b}$; b - $e_{2a}$, $e_{2b}$; c - $e_{3a}$, $e_{3b}$

Fig. 3. State space $(e_{1c}, e_{2c}, e_{3c})$

The results for the other possible variants of the MLND method for the Chua system are summarized in Table I. The columns show the additional coupling between the Master and Slave1 systems; the rows show the additional coupling between the Slave1 and Slave2 systems. The same coupling constant $\alpha_i$ was used for all OW1 and OW2 couplings; a smaller constant was used for the OW3 couplings, because for greater values the error system becomes unstable. The table gives the length of the transient before complete identical synchronization of the three systems, in simulation seconds.

TABLE I
RESULTS FOR DIFFERENT COUPLINGS

S1-S2 \ M-S1:   OW1    OW2    OW3
OW1:            45s    3s     8s
OW2:            3s     5s     9s
OW3:            3s     7s     s

One can see that, while the basic LND method does not work for the Chua system at all, all nine possible couplings of the proposed MLND method yield stable synchronization. This conclusion, however, cannot be generalized to all chaotic systems, since each such system, being nonlinear, has its own properties, and until now no universal method for chaotic synchronization has been proposed. The MLND method was also tested on other chaotic systems. For the Rossler system, for example, the basic LND method again does not yield stable synchronization, while the OW1 and OW2 additional couplings of the MLND method with properly chosen gains stabilize the error systems, and the three Rossler systems exhibit identical synchronization. The OW3 additional coupling for the Rossler system, however, cannot stabilize the error system for any coupling gain, so synchronization is not possible.

III. CONCLUSION

In this paper a new modification of the linear-nonlinear decomposition method was presented, in which the standard method is combined with an additional coupling proportional to the error function. Retaining the main advantage of the LND method, the possibility of exact stability analysis, one can now choose among different types of couplings, so the chance of finding a proper synchronization scheme increases.

REFERENCES
[1] S. Boccaletti, J. Kurths, G. Osipov, D. Valladares, C. Zhou, "The synchronization of chaotic systems", Physics Reports 366 (2002), pp. 1-101.
[2] T. Carroll, "Noise-robust synchronized chaotic communications", IEEE Transactions on Circuits and Systems I, Vol. 48, 2001.
[3] L. Pecora, T. Carroll, G. Johnson, D. Mar, J. Heagy, "Fundamentals of synchronization in chaotic systems, concepts, and applications", Chaos 7(4), 1997, pp. 520-543.
[4] H. Yu, L. Yanzhu, "Chaotic synchronization based on stability criterion of linear systems", Physics Letters A, preprint (2003).

Sensorless Vector Control of Induction Motors

Emil Y. Marinov, Kosta D. Lutskanov and Zhivko S. Zhekov

Abstract - In this paper, two estimator structures based on a partial induction motor model and optimization procedures for estimating the rotor speed and flux of the motor are presented. Simulation studies of sensorless direct vector control of induction motors using the proposed estimators are carried out. The studies demonstrate sufficient performance under motor parameter (rotor resistance) variations, disturbances and wide-range variable reference input control signals.

Keywords - Sensorless vector control, Induction motor, Flux estimation, Speed estimation.

I. INTRODUCTION

The interest in sensorless induction motor drives has been constantly rising during the last decade. They fill the middle ground between high-performance closed-loop control and simpler open-loop (V/Hz) control of the induction motor (IM). The advantages of these systems are: reduced hardware complexity, reduced size, no sensor cable, increased reliability, less maintenance, lower cost and better noise immunity. Most sensorless AC drives are based on flux vector methods and are hence loosely called sensorless vector (SV) control. Many sensorless speed controllers are described in the literature, based on extended Kalman filter theory [1], [2], the linear state Luenberger observer [2], [3], neural networks [3] and others [4-7]. A basic problem in the area of sensorless control is estimation at low speed. Here, two structures based on a partial induction motor model and optimization procedures for estimating the speed and the rotor flux of the motor are proposed.

The paper is organized as follows: Section II describes the induction motor model, Section III describes the proposed estimators, Section IV presents the simulation testing of the sensorless vector drive, and Section V summarizes the conclusions. II.
INDUCTION MOTOR MODEL

The equations for the electrical equilibrium of the voltages of the IM with cage rotor, written in the stationary α-β frame in complex form, are:

u_s = R_s i_s + dΨ_s/dt                                (1)
0 = R_r i_r + dΨ_r/dt - j p_p ω_r Ψ_r                  (2)
Ψ_s = L_s i_s + L_m i_r                                (3)
Ψ_r = L_r i_r + L_m i_s                                (4)

where u_s, i_s, Ψ_s, i_r, Ψ_r are the representation vectors of the stator variables (voltage, current and flux) and of the rotor variables (current and flux); ω_r is the rotor angular speed; R_s, R_r are the stator and rotor phase resistances; L_s, L_r are the stator and rotor phase inductances; L_m is the mutual inductance; p_p is the number of pole pairs. For each vector x (where x stands for a voltage, current or flux), x = x_α + j x_β holds, where x_α, x_β are the projections of the representation vector on the α and β axes. After elimination of Ψ_s and i_r from (1)-(4), it is obtained:

u_s = R_e (T_e p + 1) i_s - (R_r L_m / L_r²) Ψ_r + j p_p ω_r (L_m / L_r) Ψ_r      (5)

i_s = (T_r p + 1) Ψ_r / L_m - j (T_r / L_m) p_p ω_r Ψ_r

where: R_e = R_s + R_r (L_m / L_r)²; L_e = (L_s L_r - L_m²) / L_r; T_e = L_e / R_e; T_r = L_r / R_r; p = d/dt.

For the purpose of vector control system synthesis, the electromechanical processes in the IM and the mechanism (for a one-mass mechanical part) are described, in the orthogonal coordinate system d-q oriented along the rotor flux vector, by the following equations:

u_sd = R_e (T_e p + 1) i_sd - L_e ω_g i_sq - (R_r L_m / L_r²) Ψ_r
u_sq = R_e (T_e p + 1) i_sq + L_e ω_g i_sd + p_p ω_r (L_m / L_r) Ψ_r
Ψ_r = L_m i_sd / (T_r p + 1)                                                      (6)
ω_g = p_p ω_r + (R_r L_m / L_r) i_sq / Ψ_r
M_e = k_m Ψ_r i_sq

M_e - M_c = J p ω_r                                                               (7)

Emil Y. Marinov, Kosta D. Lutskanov and Zhivko S. Zhekov are with the Technical University of Varna, Faculty of Computing and Automation, Studentska Str., Varna, Bulgaria.

where: i_sq, i_sd, u_sq, u_sd are the active and excitation components of the stator current and stator voltage; ω_g is the speed of the coordinate system; M_e, M_c are the electromagnetic and resistant torques; J is the moment of inertia; k_m = 3 p_p L_m / L_r.

III. SPEED AND FLUX ESTIMATION

When designing sensorless direct vector control of an IM, there are a few considerable problems: determination of the rotor speed and of the instantaneous position and magnitude of the support vector, which is most often the rotor flux vector. Two variants for dealing with this problem are proposed here. They are based on a partial IM model and an optimization procedure.

A. Estimator 1

Estimator 1 is composed of two partial models of the motor (Model 1 and Model 2) and an optimization procedure (OP) - fig. 1 (vector variables are drawn with thick lines, scalar variables with thin lines). Model 1 estimates the rotor flux Ψ̂_r and forms e^jθ. Model 2 gives the estimate î_s of the stator current.

Fig. 1. Structural scheme of Estimator 1

Model 1 is described by the following equations:

Ψ̂_r = (L_r / L_m) [ ∫(u_s - R_s i_s) dt - L_e i_s ]        (8)

|Ψ̂_r| = sqrt(Ψ̂_rα² + Ψ̂_rβ²)                               (9)

Equation (8) results from (1) after expressing Ψ_s through Ψ_r and i_s. The estimate î_s of the stator current is obtained from Model 2 on the basis of equation (5), using the measured stator voltage and the rotor flux obtained in Model 1:

î_s = [ u_s + (R_r L_m / L_r²) Ψ̂_r - j p_p ω̃_r (L_m / L_r) Ψ̂_r ] / [ R_e (T_e p + 1) ]      (10)

The optimization criterion is:

e = î_s - i_s        (11)
J = |e|²             (12)

Through the OP, ω̃_r is varied for each sampling period so that the criterion J is minimized, and the estimate ω̂_r of the rotor speed is obtained:

ω̂_r = ω̃_r in case of J = min        (13)

In the simplest case the OP may be realized by the method of consecutive search in a definite interval. For the k-th step, ω̃_r(k) is varied in the interval ω̂_r(k-1) ± Δω_max. The determination of Δω_max is based on the following reasoning. When working with constant flux (Ψ_r = const) and in the presence of a current limit, the maximum value of the electromagnetic torque is M_emax = k_m Ψ_r I_sqmax, where I_sqmax is the maximum permissible value of the active component of the stator current under the reference current limit. The resulting maximum dynamic torque is M_dmax = M_emax + M_cmax = M_emax + M_n, where M_n is the nominal motor torque. From the equation of motion (7), the maximum possible change of the speed over one sampling period is obtained:

Δω_max = M_dmax T / J_min        (14)

where T is the sampling period and J_min = J_m, the inertia moment of the motor.

B. Estimator 2

Estimator 2 is composed of one partial model of the IM (Model 3) and an optimization procedure - fig. 2. The model simultaneously gives the estimates of the rotor flux Ψ̂_r and of the stator current î_s, and forms e^jθ:

θ = arctg(Ψ̂_rβ / Ψ̂_rα)        (15)

where θ is the angle between the vector Ψ̂_r and the α axis.

Fig. 2. Structural scheme of Estimator 2
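The consecutive-search optimization can be sketched as follows. This is a toy, steady-state stand-in for the estimator: the (T_e p + 1) dynamics of Eq. (10) are dropped and all machine constants are invented for illustration; it only demonstrates how scanning ω̃_r over ω̂_r ± Δω_max and minimizing J = |e|² recovers the speed.

```python
def stator_current_model(omega_r, psi_r, u_s,
                         R_e=2.0, Lm_over_Lr=0.9, RrLm_over_Lr2=1.5, p_p=2):
    """Toy steady-state stand-in for Eq. (10): the (T_e p + 1) dynamics
    are dropped; all constants are illustrative, not motor data."""
    return (u_s + RrLm_over_Lr2 * psi_r
            - 1j * p_p * omega_r * Lm_over_Lr * psi_r) / R_e

def estimate_speed(i_s_meas, psi_r, u_s, omega_prev, d_omega_max, n=401):
    """Consecutive search of omega in [omega_prev - d_omega_max,
    omega_prev + d_omega_max], minimizing J = |e|^2 with e = i_hat - i_s."""
    candidates = [omega_prev - d_omega_max + 2.0 * d_omega_max * k / (n - 1)
                  for k in range(n)]
    return min(candidates,
               key=lambda w: abs(stator_current_model(w, psi_r, u_s) - i_s_meas) ** 2)

# "measured" current generated by the same model at the true speed 10 rad/s
i_meas = stator_current_model(10.0, psi_r=1.0, u_s=100.0)
omega_hat = estimate_speed(i_meas, psi_r=1.0, u_s=100.0,
                           omega_prev=9.0, d_omega_max=2.0)
```

In a real drive the search interval width Δω_max would come from Eq. (14), and the model would be driven by the measured stator voltage and the flux from Model 1.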

Model 3 is described by equations (5) and (6). The optimization procedure is the same as the one described for the first variant. The optimization criterion is defined by (12), and the estimate ω̂_r of the rotor speed is obtained according to (13).

On the basis of the proposed speed and rotor flux estimators, the direct vector control system of the IM is formed - fig. 3. The following symbols are used in the figure: FC - flux controller; SC - speed controller; CCB and CCA - controllers of the active and excitation components of the stator current; BC - compensation block; abc/αβ, αβ/dq, dq/αβ, αβ/abc - coordinate transformers; PC - power converter; IM - induction motor; M - mechanism; ω_ref, Ψ_ref - reference values of the speed and rotor flux; u*_dq, u*_αβ, u*_abc - control signals.

Fig. 3. Functional diagram of the vector control system of the IM

Fig. 4. Low speeds in absence of noise and R_s = R_skat and R_r = R_rkat (with Estimator 1)

IV. SIMULATION RESEARCH AND RESULTS

The proposed control algorithm has been tested using a 5 kW induction motor with rated torque 36.4 Nm and nominal current I_sn. In the simulation model, the coordinate transformations and the nonlinear and discrete properties of the PC are taken into account. The studies are carried out with varying stator and rotor resistances, without and with noise in the input signals. A part of the simulation study of the direct vector control system is shown in figs. 4-7. Figs. 4 and 5 present the system performance using Estimator 1, and figs. 6 and 7 using Estimator 2. The motor speed and rotor flux are compared with ω_ref and Ψ_ref (figs. 4a-7a) and with the estimates ω̂_r and Ψ̂_r derived from the estimators (figs. 4b,c,d,e-7b,c,d,e).

Figs. 4 and 6 are obtained using the basis values of the motor electrical parameters in the absence of noise; figs. 5 and 7 are obtained with R_s and R_r increased with respect to their basis values R_skat and R_rkat, and with noise added to the stator currents. The noise is simulated as additive white noise with a fixed noise/signal ratio. In all cases the estimators work with the basis values of the motor parameters. The resistant torque M_c is taken to be reactive, and it changes stepwise between a fraction of the nominal torque M_n and the full M_n during the simulation interval. Where differences between the motor resistances and the basis values exist, a static error appears.

Fig. 5. Low speeds in presence of noise and increased R_s and R_r (with Estimator 1)
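The noise injection used in these tests can be sketched as below. The exact noise/signal ratio digit is not legible in the source, so the 0.05 here is only a placeholder assumption, and the sampled phase current is synthetic.

```python
import math
import random

def add_white_noise(signal, noise_to_signal=0.05, seed=0):
    """Additive white (Gaussian) noise scaled to a chosen noise/signal
    ratio of the signal's peak value. Ratio and seed are illustrative."""
    rng = random.Random(seed)
    peak = max(abs(s) for s in signal)
    return [s + rng.gauss(0.0, noise_to_signal * peak) for s in signal]

# a generic sampled 50 Hz phase current at 5 kHz sampling rate
i_a = [math.sin(2 * math.pi * 50 * t / 5000.0) for t in range(1000)]
i_a_noisy = add_white_noise(i_a)
```

The estimators then receive i_a_noisy in place of the clean current, which is how the signal-disturbance robustness of the two variants is compared.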

When there is a difference between the motor resistances and their basis values, a static error occurs. This error is proportional to the load and to the deviation of the real resistances from the basis values. The studies carried out confirm the capability of the system to work with both estimators. The second estimator variant is more robust with regard to parametric and signal disturbances: it filters the estimated values of the rotor flux, and for this reason the estimate is actually a smooth curve. That is why the estimator of fig. 2 is preferable.

V. CONCLUSION

Two structures using a partial induction motor model and optimization procedures for speed and rotor flux estimation are proposed, and on this basis two structures of direct vector control of an IM drive are formed. Simulation studies confirm the efficiency of the proposed estimators and of the vector control schemes. The research is carried out under the following conditions: a wide range of speed control, variations of the motor parameters (rotor and stator resistances), and noise added to the phase stator current signals. The studies demonstrate sufficient performance of the sensorless IM drive control.

Fig. 6. Low speeds in absence of noise and R_s = R_skat and R_r = R_rkat (with Estimator 2)

REFERENCES
[1] B. Akin, U. Orguner, A. Ersak and M. Ehsani, "Simple Derivative-Free Nonlinear State Observer for Sensorless AC Drives", IEEE/ASME Transactions on Mechatronics, Oct. 2006.
[2] V. Bostan, M. Cuibus, C. Has and R. Magureanu, "High performance sensorless solutions for induction motor control", IEEE 34th Annual Power Electronics Specialist Conference (PESC 2003), June 2003.
[3] M. Cuibus, V. Bostan, S. Ambrosii, C. Ilas and R. Magureanu, "Luenberger, Kalman and Neural Network Observers for Sensorless Induction Motor Control", Proceedings of the Third International Power Electronics and Motion Control Conference (IPEMC), Vol. 3.
[4] H. Zidan, S. Fujii, T. Hanamoto and T. Tsuji, "A Simple Sensorless Vector Control System for Variable Speed Induction Motor Drives", T. IEE Japan, Series D.
[5] G. Edelbaher, K. Jezernik and E. Urlep, "Low-speed Sensorless Control of Induction Machine", IEEE Transactions on Industrial Electronics, vol. 53, Feb. 2006.
[6] T. Chun, M. Choi and B. Bose, "A Novel Startup Scheme of Stator-Flux-Oriented Vector-Controlled Induction Motor Drive without Torque Jerk", IEEE Transactions on Industry Applications, Vol. 39, No. 3, May/June 2003.
[7] H. Madadi Kojabadi, "Simulation and Experimental Studies of Model Reference Adaptive System for Sensorless Induction Motor Drive", Simulation Modelling Practice and Theory, Vol. 13, Issue 6, Sep. 2005.

Fig. 7. Low speeds in presence of noise and increased R_s and R_r (with Estimator 2)

Genetic Algorithms applied in Parameter Optimization of Cascade Connected Systems

Bratislav Danković, Dragan Antić, Zoran Jovanović and Marko Milojković

Abstract - In this paper, a rubber cooling system in the tyre industry, as a representative of complex, nonlinear, stochastic, cascade-connected systems, is considered. A simple genetic algorithm is applied for adaptive and optimal control of the system.

Keywords - Genetic algorithms, Nonlinear system, Parameter optimization

I. INTRODUCTION

In every tyre factory in the world there are one or more tyre tread cooling systems; the tyre tread forms the external (strip) part of a tyre. Many such systems operate all over the world, mostly in China, India, the USA and Brazil. These systems consist of a large number of cascade-connected transporters along which the tyre tread moves, passing from one transporter to another. The rubber is thereby cooled by water flowing in the opposite direction. The velocities of the individual transporters are adjusted by local controllers, each of which determines the velocity of the next transporter according to the length of rubber between two consecutive transporters. In this manner a dynamic system with many cascades is obtained (see Fig. 1).

During the movement of the tyre tread along a transporter, the rubber cools down and contracts. Because of this, the velocity at a transporter's end is smaller than the velocity at its beginning, with contraction coefficient μ. The coefficient μ is stochastic, because it depends on the rubber quality and the environment temperature, which are stochastic parameters. The influence of the stochastic parameters μ_i on the stability of cascade systems is analyzed in [1]. Due to the cascade structure and the nonlinearities, the system is prone to oscillations [2], [3]. Under certain conditions, deterministic chaos may appear in the system [4], [5]. Because of the stated properties, the considered system is very complex and difficult to control [6].
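The cascade-with-local-controllers behaviour described above can be illustrated with a toy simulation. All dynamics and gains here are illustrative simplifications (first-order length integration with proportional local control); the paper's full model, including the transporter drive dynamics, is developed in Section II.

```python
def simulate_cascade(n=4, mu=0.98, k=2.0, v0=1.0, dt=0.01, t_end=50.0):
    """Toy cascade: the length deviation l_i integrates the velocity
    mismatch between consecutive transporters (v_{i-1} - mu * v_i),
    and a local proportional controller speeds transporter i up when
    rubber accumulates. n, mu, k, v0 are illustrative values."""
    l = [0.0] * n                 # length deviations at the transitions
    v = [v0] * (n + 1)            # v[0] is the extruder feed velocity
    for _ in range(int(t_end / dt)):
        for i in range(n):
            l[i] += (v[i] - mu * v[i + 1]) * dt
            v[i + 1] = v0 + k * l[i]      # local velocity controller
    return l, v

lengths, velocities = simulate_cascade()
```

At steady state each transporter settles at v_i / mu times the previous velocity, i.e. the controllers absorb the contraction; a change in mu shifts the equilibrium lengths, which is exactly the static error the compensation parameters K_ri are later introduced to remove.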
The only way to control the system successfully is local control of the transporter velocities at every transition (points 5 in Fig. 1), together with a compensation for the entire system using adjustable parameters. Until now, these parameters have been adjusted manually. This paper presents a new method for adjusting the parameters using genetic algorithms, with optimal control in the sense of the mean square error.

II. CASCADE CONNECTED SYSTEM FOR THE RUBBER STRIP COOLING

Figure 1 shows the cascade-connected transporters for the rubber strip cooling. The system in a real factory is shown in figure 2.

Fig. 1. Cascade system for the rubber strip transportation (1 - extruder, 2 - rubber strip, 3 - balance, 4 - transporters, 5 - transitions)

The following properties of these systems impact the dynamics, stability and quality of the system: the tyre tread accumulates at the transition places (points 5 in Fig. 1) because of the integration of the velocity differences, and nonlinear dependencies are formed at the cascade transitions between transporters.

Bratislav Danković, Dragan Antić, Zoran Jovanović and Marko Milojković are with the Faculty of Electronic Engineering, Aleksandra Medvedeva 14, 18000 Niš, Serbia.

Fig. 2. Rubber cooling system in the tyre industry, Tigar-Michelin, Serbia

The rubber strip comes from the extruder (point 1 in Fig. 1), passes through the balance (point 3 in Fig. 1) and goes to the cooling system. It is necessary to cool the rubber strip down to room temperature. When the rubber runs through the cooling system, it is cooled and contracts with contraction coefficient μ < 1. Because of this contraction, the rubber velocities at the transporters' ends are not equal to the transporters' velocities,

producing the effect of the rubber slipping relative to the transporter. The change of the length of the rubber strip between two transporters is described by the following equations:

dl_i/dt = V_g,i^(1) - V_g,i^(2),  i = 1, 2, ..., n        (1)

V_g,i^(1) = V_{i-1},  V_g,i^(2) = μ_i V_i                 (2)

dl_i/dt = V_{i-1} - μ_i V_i                               (3)

Δl_i = (V_{i-1} - μ_i V_i) / s                            (4)

where: l_i is the length of the rubber strip between the i-th and (i+1)-th transporter; V_g,i^(1) is the rubber velocity at the end of the (i-1)-th transporter; V_g,i^(2) is the rubber velocity at the beginning of the i-th transporter; n is the number of transporters; Δl_i is the length change of the rubber strip between two consecutive transporters; V_{i-1} is the velocity of the (i-1)-th transporter; V_i is the velocity of the i-th transporter; μ_i is the rubber compression coefficient for the i-th transporter.

Figure 3 shows a transition between two transporters. To regulate the transporter velocities it is necessary to measure the lengths of rubber between the transporters (Δl_i). These measurements are done by special sensors (potentiometers P in Fig. 3). The angle β_i of the measuring potentiometer satisfies the following relation:

β_i = Φ(Δl_i)                                             (5)

where Φ represents a nonlinear dependency. The value of β_i is between 0 and 90 degrees. The potentiometer voltage is:

u_i = K_P β_i                                             (6)

where K_P is the potentiometer coefficient [V/rad]. The potentiometer voltage is amplified and, through thyristor regulators, the velocities of the drive motors are controlled. The dynamics of the i-th transporter with its controller and drive motor can be described by the following well-known equation:

T_1 T_2 d²V_i/dt² + (T_1 + T_2) dV_i/dt + V_i = u_i       (7)

where T_1 and T_2 are the mechanical and electrical time constants of the electromechanical drive. According to (7), the transfer function of the i-th transporter has the following form:

W_i(s) = V_i(s) / u_i(s) = 1 / (T_1 T_2 s² + (T_1 + T_2) s + 1)      (8)

Fig. 3. Measuring the length of the rubber between transporters

Using the stated equations (1)-(8), the block diagram of the entire system, given in Figure 4, is obtained. The integration of the velocity difference between transporters can cause a static error when the parameter μ changes (a change of the rubber quality or of the ambient temperature). In fig. 3, the middle position of the sensor (position (2)) corresponds to normal operation. If μ increases, the sensor comes to position (1) and a static error -Δl_i occurs (the rubber stretches). If μ decreases, the sensor comes to position (3) and a static error +Δl_i occurs (the rubber accumulates). Compensation potentiometers (K_ri in fig. 4) are introduced in order to compensate the static errors, so their adjustment brings the system back to normal operation (position (2) in fig. 3). Today these parameters are adjusted manually (manual system adaptation). This paper presents a new method, based on genetic algorithms, for automatic adaptation and optimization of the discussed systems.

III. GENETIC ALGORITHMS

The principles of genetic algorithms were first published by Holland in 1962 [7]. Genetic algorithms are optimization techniques based on simulating the phenomena that take place in the evolution of species and adapting them to an optimization problem. Genetic algorithms have been used in many areas such as function optimization, image processing and system identification. They have demonstrated very good performance as global optimizers in many types of applications [8], [9]. A brief description of genetic algorithms is given below.

Encoding: The first step in building a genetic algorithm is to choose the parameters of interest in the search space and to encode and concatenate them in order to form a string, or chromosome. Thus each string represents a possible solution to the problem. The genetic algorithm works with a set of strings, called the population. This population then evolves

122 Bratislav Danković, Dragan Antić, Zoran Jovanović and Marko Milojković from generation to generation through the application of genetic operators. Initialization and population size: The initial population for a genetic algorithm is a set of solutions to the Fig. 4. Block diagram of the cascade connected system in the tyre industry optimization problem. A common method of population generation is random generation. Population size plays an important role in the success of the problem-solving process. A small initial population size can lead to premature convergence. On the other hand, a large population results in a long computational time. Fitness function: In genetic algorithms, the fitness is the quantity that determines the quality of a chromosome, in the gene pool and it must reward the desired behavior. The fitness is evaluated by a fitness function that must be established for each specific problem. The fitness function is chosen so that its maximum value is the desired value of the quantity to be optimized. Selection methods: Reproduction is based on the principle of survival of the fittest. The purpose of selection is to emphasize the fitter individuals in the population in hopes that their offspring will in turn have even higher fitness. The most common method is the roulette wheel selection, where, the number of times the gene can be reproduced is proportional to its fitness function. This technique involves selecting the top performers and allowing multiple reproductions of the best performers. Genetic operators: In each generation, the genetic operators are applied to selected individuals from the current population in order to create a new population. Generally, the three main genetic operators are reproduction, crossover and mutation. By using different probabilities for applying these operators, the speed of convergence and accuracy can be controlled. 
Reproduction: A part of the new population can be created by simply copying selected individuals from the present population without change. This gives already developed fit solutions the possibility of survival.

Crossover: New individuals are generally created as offspring of two parents. One or more so-called crossover points are selected (usually at random) within the chromosome of each parent, at the same place in each. The parts delimited by the crossover points are then interchanged between the parents. The individuals resulting in this way are the offspring.

Mutation: A new individual is created by making modifications to one selected individual. The modifications consist of changing one or more values in the chromosome. In genetic algorithms, mutation is a source of variability.

Figure 5 shows the main steps of the genetic algorithm procedure described above.

Fig. 5. Genetic algorithm procedure

IV. EXPERIMENTAL RESULTS

For experimental purposes, a MATLAB model of the cascade-connected system with four transporters, based on the block diagram in fig. 4, was made. A real system was also made for a student laboratory exercise (see fig. 6). The genetic algorithm is applied as presented in fig. 7. Its purpose is to optimize the parameters K_ri on the basis of the measured Δl_i.

Fig. 6. Laboratory system with four transporters for experimenting
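The evolutionary loop described above (5-bit parameter encoding, roulette wheel selection with elitism, one-point crossover and mutation) can be sketched as follows. The fitness here is a toy stand-in: it scores the four decoded parameters against hypothetical optimal gains of 1.0, whereas the real criterion uses the measured length deviations; the decoding range, probabilities and sizes are all illustrative.

```python
import random

def decode(bits, lo=0.0, hi=2.0):
    """A 5-bit binary-encoded parameter (32 levels), mirroring the
    paper's K_ri encoding; the [0, 2] range is an assumption."""
    return lo + int("".join(map(str, bits)), 2) * (hi - lo) / 31.0

def fitness(chrom):
    """Toy error criterion: squared distance of the four decoded
    parameters from hypothetical optimal gains of 1.0."""
    params = [decode(chrom[i * 5:(i + 1) * 5]) for i in range(4)]
    return sum((p - 1.0) ** 2 for p in params)

def evolve(pop_size=20, generations=30, pc=0.8, pm=0.02, seed=1):
    random.seed(seed)
    pop = [[random.randint(0, 1) for _ in range(20)] for _ in range(pop_size)]
    best = min(pop, key=fitness)
    for _ in range(generations):
        fits = [fitness(c) for c in pop]
        # roulette wheel on inverted fitness: smaller error = fitter
        weights = [1.0 / (1e-9 + f) for f in fits]
        mating = random.choices(pop, weights=weights, k=pop_size)
        nxt = [best[:]]                    # elitism: keep the fittest
        while len(nxt) < pop_size:
            a, b = random.sample(mating, 2)
            child = a[:]
            if random.random() < pc:       # one-point crossover
                cut = random.randint(1, 19)
                child = a[:cut] + b[cut:]
            child = [g ^ 1 if random.random() < pm else g for g in child]
            nxt.append(child)
        pop = nxt
        best = min([best] + pop, key=fitness)
    return best, fitness(best)

best_chrom, best_f = evolve()
```

Minimizing the error is equivalent to maximizing fitness; with elitism the best-individual curve is monotone, which matches the behaviour reported for the laboratory system.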

There are four parameters of interest to be adjusted by the genetic algorithm: K_r1, K_r2, K_r3 and K_r4. They are encoded in binary with five bits each (32 values), so a 20-bit chromosome is obtained. The initial population is generated randomly.

Fig. 7. Block diagram of the laboratory system

The fitness function is the sum of the squared length errors at all transporters:

f = Σ_{i=1}^{4} (Δl_i)²        (9)

A smaller fitness function means a lower error and, therefore, a better chromosome. The selection method is roulette wheel, with the fittest individual carried forward to the next generation in every evolution cycle. The genetic algorithm was run for a fixed number of generations. To run the genetic algorithm, MATLAB was used in conjunction with SIMULINK and the GAOT toolbox, which is open-source code. The results for consecutive generations are shown in figure 8: the full line is the fitness function of the best individual, and the dotted line is the average fitness function for the entire generation. The algorithm converges very quickly to the set of K_ri parameters which are optimal for the system and give the lowest fitness function.

Fig. 8. The best and the average fitness function

V. CONCLUSION

This paper presents a new method, based on a genetic algorithm, for parameter optimization in systems for rubber strip cooling. The parameters are adjusted in discrete-time intervals, which gives good results for inert systems like the one considered in this paper. The results obtained are better than those obtained by classical methods, in the sense of speed and adaptation accuracy.

REFERENCES
[1] B. Danković, Z. Jovanović, "On the Reliability of Discrete-Time Control Systems with Random Parameters", Quality Technology and Quantitative Management, March 2005.
[2] B. Danković, "On the Oscillations in the Automated Cascade Systems for Rubber Treads Transport", Jurema, Zagreb, 1989.
[3] D. Trajković, B. Danković, "Analyzing, Modeling and Simulation of the Cascade Connected Transporters in Tyre Industry Using Signal and Bond Graphs", Machine Dynamics Problems, Warsaw University, 2005.
[4] B. Danković, M. Stanković, B. Vidojković, "On the Appearance of Chaos in the Automatic Control Cascade Systems", Proc. 7th Symposium of Mathematics and its Applications, University of Timisoara.
[5] B. Danković, B. Vidojković, "On the Chaos in Cascade Systems for Rubber Strip Transportation", Proc. HM, 4th International Conference on Heavy Machinery.
[6] B. Danković, B. Vidojković, Z. Jovanović, "Dynamical Analysis of the Protector Cooling System in Tyre Industry", Proc. ICMFMDI, XVII International Conference on Material Flow, Machines and Devices in Industry.
[7] J. H. Holland, "Outline for a logical theory of adaptive systems", J. ACM, July 1962; also in A. W. Burks, Ed., Essays on Cellular Automata, Univ. Illinois Press, 1970.
[8] F. X. Blasco, M. Martínez, J. Senent and J. Sanchis, "An application to control of non-linear process with model uncertainty", in Methodology and Tools in Knowledge-Based Systems, Springer, 1998.
[9] T. K. Teng, J. S. Shieh and C. S. Chen, "Genetic algorithms applied in online auto-tuning of PID parameters of a liquid-level control system", Transactions of the Institute of Measurement and Control, 2003.

Building 3D Environment Models for Mobile Robots Using a Time-Of-Flight (TOF) Laser Scanner

Sašo Koceski and Nataša Koceska

Abstract - In this paper we present a system and an intuitive method to generate visually convincing 3D models of indoor environments from data collected with a TOF laser scanner. The method performs line and surface detection and a mesh representation of the input data, and yields an accurate 3D environment representation useful for different applications in mobile robotics.

Keywords - TOF laser scanner, Environment modelling, Mobile robots, 3D mesh, 3D triangulation.

I. INTRODUCTION

Since standard computers allow efficient processing and visualization of three-dimensional data, interest in using 3D graphics has increased in numerous fields. Car design, architecture and the modern movie industry are merely a few examples of fields which are unimaginable nowadays without the use of 3D graphics. The increasing need for rapid characterization and quantification of complex environments has created challenges for data analysis. Mobile systems with 3D laser scanners that automatically perform multiple steps such as scanning, gauging and autonomous driving have the potential to greatly advance the field of environment representation. 3D information available in real time enables autonomous robots to navigate in unknown environments, e.g. in the field of inspection and rescue robotics. Autonomous navigation of a mobile robot requires basic capabilities for sensing the environment in order to avoid obstacles and move in a safe way. The reconstruction problem can be expressed as a procedure of learning the topology from the data set and reducing the cardinality of the data set. We address the problem via the analysis and realistic representation of an unknown indoor environment. Our objective is to build environment models from real measured data of building interiors using intuitive methods and low-cost data acquisition systems.
Some groups have attempted to build 3D volumetric representations of environments with 2D laser range finders. Thrun et al. [1], [2] and Früh and Zakhor [3] use two 2D laser range finders for acquiring 3D data. One laser scanner is mounted horizontally and one vertically. The latter grabs a vertical scan line which is transformed into 3D points using the current robot pose. But the accuracy in these cases was not satisfactory and the costs were high. Our approach is based on a 2D Time-Of-Flight (TOF) laser scanner extended by a low-cost rotation module based on a servo. Combining such an extension with a set of fast algorithms has resulted in a good environment representation system.

Sašo Koceski and Nataša Koceska are with the Applied Mechanics Laboratory, DIMEG, University of L'Aquila, Roio Poggio (AQ), Italy.

II. DATA ACQUISITION SYSTEM

A. Main components and connections

The main component of the system is the sensor for data acquisition, a Laser Radar (LADAR) LD-OEM. It is placed in an aluminum carrying construction and attached to a mount with one degree of freedom (DOF), so that it can be rotated. On top of the mount, a berth for a camera is provided (Fig. 1). The rotational axis is horizontal.

Annotation: an alternative approach is to rotate the LADAR around the vertical axis. Throughout this paper we discuss only the approach based on horizontal rotation, but all presented algorithms can be used in the same way. The differences between the two approaches are the orientation of the apex angle and the effects of dynamic objects moving through the scene, e.g. persons. Using vertical scanning, a perturbation either appears with low probability within a few scans, making them useless for further data processing, or does not appear at all.

Fig. 1. Photo of the created system

For the rotation of the scanner's movable element, a servo (Hitec HSC 5955TG) was chosen. The servo is driven by PPM (Pulse Position Modulation). It is connected to the main processing unit (a standard

PC) via a PCI NI data acquisition board (DAQ). For the LADAR, the CAN data interface was chosen (instead of RS-232, which is the standard interface), because it enables a high grabbing speed, up to 1 MBit/s, and it can be easily connected to a USB port via a low-cost CAN-to-USB adaptor.

B. Working principle

The LADAR measures its environment in two-dimensional polar coordinates. When a measuring beam strikes an object, the position is determined with regard to distance and direction. The measured data can be transferred in real time to the connected computer for evaluation. The scan is carried out over a 360° sector. The LADAR reaches its largest ranges on bright, natural surfaces (a white house wall, for example) (Fig. 2).

Fig. 2. Scanning the environment

The distance to the object is calculated from the propagation time required by the light from the point at which the beam is emitted until the reflection is received by the sensor. The scanner head rotates with a programmable frequency. A laser pulse is emitted in accordance with a variable angle step, thereby triggering a distance measurement. The maximum angular resolution is set by the angle encoder, and the angular resolution can be selected as an integral multiple of this basic step.

C. Data acquisition and interpretation

The working principle and the given setup determine an intrinsic order of the acquired data. The data coming from the 2D LADAR is ordered counterclockwise. In addition, the 2D scans (scanned planes) are ordered due to the rotation. While obtaining horizontal profiles, the LADAR sends CAN data packets with different useful information, among which are pairs of values for the distance and the corresponding angle. The distance value is represented by a 16-bit binary value with a resolution (step width) of 3.9 mm (1/256 m). The angle is also represented by a 16-bit binary value with a resolution of 1/16°. Taking into account the vertical rotation of the system (the value of the azimuth angle ϑ) as well as the dimensions of the structural components of the system (Fig. 3), these values are subsequently transformed into 3D Cartesian coordinates according to Eqs. (1) and (2):

x' = r sin α,  y' = r cos α                      (1)

X = r sin α
Y = r cos α cos ϑ - h sin ϑ + b cos ϑ            (2)
Z = r cos α sin ϑ + h cos ϑ + b sin ϑ

Fig. 3. Coordinates transformation

D. Pre-processing algorithms

Different pre-processing algorithms for line and surface detection were applied to the data. The first algorithm is a simple, straightforward matching algorithm running in O(n) (n is the number of points) with small constants. The algorithm implements a simple length comparison. The data of the LADAR (points a_1, a_2, ..., a_n) is ordered counterclockwise, so one comparison per point is sufficient. We assume that the points a_i, ..., a_j already lie on a line. For a_{j+1} we have to check whether the condition in Eq. (3) is satisfied:

Σ_{t=i}^{j} |a_t, a_{t+1}| - |a_i, a_{j+1}| < ε(j)        (3)

To obtain better results, this algorithm runs on preprocessed data: data points located close together are joined, so that the distance from one point to the next is almost the same. This process minimizes the fluctuations within the data and reduces the number of points to be processed. Hence the algorithm runs very fast, but the quality of the lines is limited. The quality could be increased by additional analysis (using the Hough transformation, for example) of the photos obtained with the camera during the scanning process.

After line detection is done, the data is converted into 3D. Based on the detected lines, the following algorithm tries to detect surfaces in the 3-dimensional scene. When a plane surface is scanned, the line detection algorithm will return a sequence of lines in successive 2D scans which approximate

126 Sašo Koceski and Nataša Koceska the shape of this surface. The task is to recognize such structures within the 3D data input and to concatenate these independent lines to one single surface. The surface detection algorithm proceeds the following steps:. The first set of lines - coming from the very first D scan - is stored.. Every other line is being checked with the set of stored lines. If a matching line is found, these two lines are transformed into a surface. 3. If no such matching line exists, the line may be an extension of an already found surface. In this case, the new line is matching with the top line of a surface. This top line is being replaced by the new line, resulting in an enlarged surface. 4. Otherwise the line is stored as a stand-alone line in the set mentioned above. To achieve real time capabilities, the algorithm makes use of the characteristics of the data as it comes from the LADAR, i.e. it is the order by the scanned planes. Therefore the lines are sorted throughout the whole scene (with regard to their location within the virtual scene) due to their inherited order. Thus an efficient local search can be realized. Two criteria have to be fulfilled in order to match lines: On one hand the endpoints of the matching line must be within a ε-area around the corresponding points of the given line. On the other hand the angle between the two lines has to be smaller than a given value. The second constraint is necessary for correct classification of short lines, since they fulfill the distance criterion very easily. These algorithms enables that the robot or a user gets much information about objects in the scenery right during the scan, which is essential for path planning and collision avoiding during the movement of mobile robots inside indoor environments. E. 
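The length-comparison test of Eq. (3) can be sketched compactly. In the sketch below a fixed tolerance eps stands in for ε(j), and the point data and function names are illustrative, not from the paper:

```python
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def detect_lines(points, eps=0.05):
    """Group an angularly ordered 2D point sequence into line segments.
    A point a_{j+1} extends the current line a_i..a_j if the straight-line
    distance |a_i a_{j+1}| stays within eps of the summed segment lengths,
    i.e. the length-comparison condition of Eq. (3)."""
    lines = []
    i, n = 0, len(points)
    while i < n - 1:
        j = i + 1
        path_len = dist(points[i], points[j])
        # grow the line while the chord length matches the path length
        while j + 1 < n and abs(dist(points[i], points[j + 1])
                                - (path_len + dist(points[j], points[j + 1]))) < eps:
            path_len += dist(points[j], points[j + 1])
            j += 1
        lines.append((points[i], points[j]))
        i = j          # one comparison per point: overall O(n)
    return lines
```

On a scan of a room corner, e.g. points along two perpendicular walls, the sweep returns one segment per wall, since the chord falls short of the path length exactly at the corner.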
Post-processing algorithms

Post-processing algorithms were used to create a 3D mesh representation of the room, which gives more qualitative information for the safe navigation of mobile robots. Unlike the prior algorithms, this step requires information about the whole scene and has to be done after the scan process is finished. The procedure is based on the Delaunay triangulation algorithm, which typically projects the data set onto a plane to find the most probable connections between close vertices and, at the end, re-locates the connected vertices in space, creating a triangular mesh. At any stage of the triangulation process one has an existing triangular mesh and a sample point to add to that mesh. The process is initiated by generating a supertriangle, an artificial triangle which encompasses all points. At the end of the triangulation process, any triangles which share edges with the supertriangle are deleted from the triangle list. Each point is added as follows:

1. All the triangles whose circumcircle encloses the point to be added are identified; the outside edges of those triangles form an enclosing polygon. (The circumcircle of a triangle is the circle on whose circumference the three vertices of the triangle lie.)
2. The triangles in the enclosing polygon are deleted, and new triangles are formed between the point to be added and each outside edge of the enclosing polygon.
3. After each point is added there is a net gain of two triangles, so the total number of triangles is twice the number of sample points. (This includes the supertriangle; when the triangles sharing edges with the supertriangle are deleted at the end, the exact number of triangles will be less than twice the number of vertices, the exact number depending on the sample point distribution.)
The triangulation algorithm may be described in pseudo-code as follows:

subroutine triangulate
  input: vertex list
  output: triangle list
  initialize the triangle list
  determine the supertriangle
  add supertriangle vertices to the end of the vertex list
  add the supertriangle to the triangle list
  for each sample point in the vertex list
    initialize the edge buffer
    for each triangle currently in the triangle list
      calculate the triangle circumcircle center and radius
      if the point lies in the triangle circumcircle then
        add the three triangle edges to the edge buffer
        remove the triangle from the triangle list
      endif
    endfor
    delete all doubly specified edges from the edge buffer
      (this leaves the edges of the enclosing polygon only)
    add to the triangle list all triangles formed between the point
      and the edges of the enclosing polygon
  endfor
  remove any triangles from the triangle list that use the supertriangle vertices
  remove the supertriangle vertices from the vertex list
end

The above can be refined in a number of ways to make it more efficient. The most significant improvement is to presort the sample points by one coordinate, which should be the one with the greatest range of samples. If the X axis is used for pre-sorting, then as soon as the x component of the distance from the current point to the circumcircle centre is greater than the circumcircle radius, that triangle need never be considered for later points. With this improvement, the running time of the algorithm should increase with the number of points as approximately O(N^1.5). The algorithm does not require a large amount of internal storage: it requires only one internal array, a logical array of flags identifying those triangles that no longer need to be considered.
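The pseudo-code above can be followed literally in a compact, unoptimized sketch (without the presorting refinement). The function and variable names below are ours, not from the paper:

```python
import math

def circumcircle(a, b, c):
    """Return (center, radius) of the circle through points a, b, c."""
    ax, ay = a; bx, by = b; cx, cy = c
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return (ux, uy), math.hypot(ax - ux, ay - uy)

def triangulate(pts):
    """Bowyer-Watson style Delaunay insertion, following the pseudo-code:
    supertriangle, circumcircle test, edge buffer, re-triangulation."""
    # supertriangle large enough to encompass all sample points
    big = 10 * max(max(abs(x), abs(y)) for x, y in pts) + 10
    st = [(-big, -big), (big, -big), (0.0, big)]
    verts = list(pts) + st
    tris = [(len(pts), len(pts) + 1, len(pts) + 2)]
    for i in range(len(pts)):
        edges, keep = [], []
        for t in tris:
            center, r = circumcircle(verts[t[0]], verts[t[1]], verts[t[2]])
            if math.hypot(pts[i][0] - center[0], pts[i][1] - center[1]) < r:
                # circumcircle encloses the point: buffer the triangle's edges
                edges += [(t[0], t[1]), (t[1], t[2]), (t[2], t[0])]
            else:
                keep.append(t)
        # doubly specified edges are interior; the rest form the enclosing polygon
        boundary = [e for e in edges
                    if edges.count(e) + edges.count((e[1], e[0])) == 1]
        tris = keep + [(e[0], e[1], i) for e in boundary]
    # finally remove all triangles that use the supertriangle vertices
    return [t for t in tris if all(v < len(pts) for v in t)]
```

For four corners of a unit square this yields the expected two triangles; a production implementation would add the coordinate presorting discussed above.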

III. EXPERIMENTAL RESULTS

The scan shown in Fig. 4, which represents the interior of our laboratory in the form of a cloud of points, was obtained by our system with the following settings: resolution 0.25° x 0.25°; frequency Hz. The scan covers an area of 8(h) x 6(v) degrees and includes in total points. The wired model of the entire scene after triangulation contains 3795 triangles. Details of one part of the same scene in the form of a wired mesh are presented in Fig. 5. A standard PC with an AMD Athlon 64 processor at 2.4 GHz and GB of RAM was used. The software based on the above algorithms was developed in C++ using the Computational Geometry Algorithms Library (CGAL) and VTK (The Visualization Toolkit), which are freely available data processing and visualization libraries.

Fig. 4. Point cloud representation of indoor environment

Fig. 5. Details of the environment in the form of wired mesh

IV. CONCLUSION

In this paper a low-cost, precise and reliable 3D sensor and methods for environment modeling are presented. With the proposed approach, mobile robot navigation and recognition could be significantly improved. The 3D sensor is built on the base of a 2D TOF laser scanner, which senses the environment contactlessly, without the necessity of landmarks. The implemented software, based on the presented algorithms, gives an accurate mesh representation of the input data, in which particular objects can be easily identified.

V. FUTURE WORK

On one side, future work regards hardware modifications and supplements to the system. First of all, replacement of the current servo command with a step motor that could be connected to the PC via a CAN bus or USB connection is foreseen. This way we will avoid the usage of the DAQ card and reduce the cost of the solution. The usage of the camera for photo capturing is also planned; this would enable efficient texture mapping of the scanned environments and objects. On the other side, improvement of the current algorithms, as well as algorithms for terrain map creation and feature extraction from the scans, is planned.

REFERENCES

[1] S. Thrun, D. Fox, W. Burgard, A real-time algorithm for mobile robot mapping with application to multi-robot and 3D mapping, in: Proceedings of the IEEE International Conference on Robotics and Automation (ICRA 2000), San Francisco, CA, April 2000.
[2] D. Hähnel, W. Burgard, S. Thrun, Learning compact 3D models of indoor and outdoor environments with a mobile robot, in: Proceedings of the Fourth European Workshop on Advanced Mobile Robots (EUROBOT 2001), Lund, Sweden, September 2001.
[3] C. Früh, A. Zakhor, 3D model generation for cities using aerial photographs and ground level laser scans, in: Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR 2001), Kauai, Hawaii, December 2001.
[4] G. Guidi, J.-A. Beraldin, C. Atzeni, High accuracy 3D modeling of Cultural Heritage: the digitizing of Donatello's Maddalena, IEEE Transactions on Image Processing, Vol. 13, No. 3, 2004.
[5] D. Terzopoulos, The Computation of Visible Surface Representation, IEEE Transactions on PAMI, Vol. 10, No. 4, 1988.
[6] G. Guidi, J.-A. Beraldin, S. Ciofi, C. Atzeni, Fusion of range camera and photogrammetry: a systematic procedure for improving 3D models metric accuracy, IEEE Transactions on Systems, Man and Cybernetics, Part B: Cybernetics, Vol. 33, No. 4, 2003.
[7] VTK, The Visualization Toolkit (http://www.vtk.org)
[8] CGAL, Computational Geometry Algorithms Library (http://www.cgal.org)

128 High-Performance Velocity Servo-System Design Using Active Disturbance Estimator
Boban Veselić and Čedomir Milosavljević

Abstract: This paper considers the design of a digitally controlled high-performance velocity servo-system featuring a fast response without overshoot. The proposed control structure contains a main PI controller and an active disturbance estimator that further improves the disturbance rejection dynamics. The resulting response meets the high-performance requirements. The designed control system is experimentally verified in induction motor velocity control.

Keywords: Servo-system, velocity control, digital PI controller, disturbance estimator.

I. INTRODUCTION

Advanced production technologies have imposed more rigorous demands on servo-system performance. Velocity servo-systems in high-performance applications must comply with the requirements of fast response without overshoot, high steady-state accuracy, good rejection of external disturbances and robustness to parameter perturbations. In general, a servo-system must have at least one pure integrator within the closed loop. Since the control plant in a velocity servo-system does not have an integrating property, PI controllers are conventionally used. Such a servo-system has no steady-state error for step references and completely rejects constant loads. Besides the standard simple regulation contour, two-degrees-of-freedom controllers have found application in servo-systems [1], in order to improve robustness. Reference and disturbance responses can be designed separately using this control structure. Similar results were obtained using the internal model principle and internal model control, combined into the IMPACT structure [2]. Another approach to disturbance compensation is the introduction of disturbance estimators [3], which may be interpreted as a special case of the above two structures. This paper deals with the design of a digitally controlled high-performance velocity servo-system.
Both pole placement and zero-pole cancellation design methods for the PI controller are considered. To further improve the disturbance rejection dynamics, an active disturbance estimator [4] is introduced. A fast response without overshoot and excellent disturbance rejection properties are ensured. The proposed servo-system is experimentally verified in induction motor velocity control.

II. VELOCITY SERVO-SYSTEM STRUCTURE

Most velocity servo-systems, regardless of the employed drive, may be described by the simplified generalized block scheme depicted in Fig. 1. The cascade structure consists of two distinct control loops. The inner current control loop, responsible for adequate torque generation, is enclosed by a main speed control loop. The bandwidth of the inner loop is usually much higher than the bandwidth of the speed loop, so the current control subsystem dynamics may be ignored in the main controller design. Nevertheless, the dynamic delay of the inner control subsystem has a certain impact on the overall system dynamics, acting as unmodeled dynamics within the speed control loop. The current loop is commonly realized with a bandwidth around 1 kHz, implementing a PI controller. Since exogenous disturbances, such as the load torque M_o(t), enter directly into the speed control loop, a PI speed controller is usually employed in order to eliminate the steady-state error in the case of step-like disturbances.

Fig. 1. Generalized velocity servo-system structure

Encouraged by the huge technological advance of microcontrollers, digital implementation of control algorithms has overtaken its analog counterpart, and correct system analysis should be carried out in the discrete-time domain. A block diagram of a digitally controlled velocity servo-system is given in Fig. 2. G_r(z) is the discrete-time transfer function of the digital controller, G_h(s) is the sample-and-hold transfer function, k_T is the torque constant, and J and B are the moment of inertia and the viscous friction coefficient, respectively.
Hence, the motor dynamics is described by the first-order transfer function G_m(s) = k_m/(1 + sT_m), where k_m = k_T/B is the motor gain and T_m = J/B is the time constant.

Boban Veselić is with the Faculty of Electronic Engineering, Aleksandra Medvedeva 14, 18000 Niš, Serbia. Čedomir Milosavljević is with the Electrical Engineering Faculty, Vuka Karadžića 3, Lukavica, Istočno Sarajevo, Bosnia and Herzegovina.

Fig. 2. Digitally controlled velocity servo-system
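The discrete model used in the next section, G(z) = k_m(1 − a)/(z − a) with a = e^(−Ts/Tm), is the zero-order-hold discretization of G_m(s). A short sketch (the parameter values are illustrative, not the paper's) confirms that the discrete recursion reproduces the continuous first-order step response exactly at the sampling instants:

```python
import math

# illustrative motor data (placeholders, not the paper's identified values)
k_T, J, B = 0.648, 2e-4, 1e-3
k_m, T_m = k_T / B, J / B           # motor gain and time constant
T_s = 1e-3                          # 1 kHz sampling
a = math.exp(-T_s / T_m)

# ZOH discretization of G_m(s) = k_m / (1 + s*T_m):
#   y[k+1] = a*y[k] + k_m*(1 - a)*u[k]   <=>   G(z) = k_m*(1 - a) / (z - a)
def step_response(n):
    y, out = 0.0, []
    for _ in range(n):
        out.append(y)
        y = a * y + k_m * (1.0 - a) * 1.0   # unit step input
    return out

y = step_response(500)
# at t = k*T_s the continuous response is y(t) = k_m*(1 - exp(-t/T_m)),
# which the recursion matches sample for sample
```

This exact sample-wise match is what justifies carrying out the whole controller design directly in the discrete-time domain.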

If the load torque is a step function or varies slowly between two consecutive sampling instants, which is true for small sampling periods, the discrete-time model of the closed-loop system is given as

Y(z) = [G_r(z)G(z)/(1 + G_r(z)G(z))] R(z) − [G(z)/(1 + G_r(z)G(z))] M_oe(z),
G(z) = Z{G_h(s)G_m(s)} = k_m(1 − a)/(z − a),   a = e^(−Ts/Tm),             (1)

where T_s denotes the sampling period and M_oe = M_o/k_T. As mentioned earlier, a PI digital controller is employed within the speed control loop. The controller transfer function is defined as

G_r(z) = k_p + k_i z/(z − 1) = (k_p + k_i)(z − b)/(z − 1),   b = k_p/(k_p + k_i),   (2)

where k_p, k_i are the gains of the proportional and integral actions.

III. CONTROLLER DESIGN

According to the high-performance requirements recounted in the introduction, the parameters of controller (2) should be tuned to provide a fast, critically aperiodic response. Due to (1) and (2), the characteristic equation 1 + G_r(z)G(z) = 0 is of second order; its roots z_1, z_2 (the poles of the closed-loop system (1)) determine the system response. As is well known, second-order system dynamics, as well as the nature of its response, is defined by the pair ζ, ω_n, which represents the relative damping factor and the undamped natural frequency, respectively. The closed-loop poles are given by s_{1,2} = −ζω_n ± ω_n·sqrt(ζ² − 1). In the case of a critically aperiodic response, ζ = 1, yielding the double real pole s_1 = s_2 = −ω_n. In the discrete-time domain these poles are mapped into the locations z_1 = z_2 = e^(−ω_n·Ts), 0 < z_1 < 1. To obtain the desired closed-loop dynamics defined by z_1 using the pole placement design technique [5], k_p and k_i of (2) should be

k_p = (a − z_1²)/(k_m(1 − a)),   k_i = (1 − z_1)²/(k_m(1 − a)),             (3)

which results in closed-loop dynamics of the form

Y(z) = [((1 + a − 2z_1)z − (a − z_1²))/(z − z_1)²] R(z) − [k_m(1 − a)(z − 1)/(z − z_1)²] M_oe(z),   (4)

showing that the desired poles are ensured. Consequently, a zero z_0 = (a − z_1²)/(1 + a − 2z_1) arises in the response with respect to the reference. Its location depends on the location of the desired pole z_1, as plotted in Fig. 3. For the zero z_0 not to be dominant, the condition z_0 < z_1 must hold. According to Fig. 3, the valid selection of the desired pole is a < z_1 < 1, which gives 1/ω_n > T_m. This indicates that the time constant of the closed-loop system should be larger than the motor time constant. This is unacceptable, since the notion of feedback control is to improve, not to degrade, the system dynamics, so the desired pole must be located inside the region 0 < z_1 < a. This, however, introduces a dominant zero, which produces an unwanted overshoot. Notice that the system response with respect to the load torque is free of the undesirable zero.

Fig. 3. Location of zero with respect to location of chosen pole

One approach to overshoot elimination is the introduction of reference signal filtering, which would slow down the system. This is contradictory to the high-performance requirements. Another design approach is the zero-pole cancellation method [5]. Namely, the controller gains should be set in such a manner that the controller zero b cancels the plant pole a (b = a), and the gain k_p determines the location of the desired closed-loop pole z_3. Hence, the controller parameters are obtained in the form

k_p = a(1 − z_3)/(k_m(1 − a)),   k_i = (1 − z_3)/k_m.                       (5)

The closed-loop system behavior is then described by

Y(z) = [(1 − z_3)/(z − z_3)] R(z) − [k_m(1 − a)(z − 1)/((z − a)(z − z_3))] M_oe(z).   (6)

The system has first-order dynamics defined by the desired pole z_3 with respect to the reference, providing a fast response without overshoot. However, complete cancellation does not occur in the disturbance-related term, which is described by second-order dynamics. Furthermore, the behavior with respect to the disturbance is determined by the slow pole of the plant z = a, which is now dominant. The reference response is quite satisfactory, whereas the disturbance rejection dynamics is unacceptable. In order to obtain a high-performance servo-system it is necessary to improve the disturbance rejection performance. IV.
ACTIVE DISTURBANCE ESTIMATOR

A way to improve system robustness to parameter perturbations and exogenous disturbances is the introduction of a disturbance estimator. The concept of the disturbance estimator is that the external disturbances and model uncertainties, usually regarded as an equivalent disturbance, can be efficiently

compensated by feedback of the estimated value. Consider the control structure in Fig. 4, consisting of a real plant G(z) and a disturbance estimator in the local loop. The equivalent disturbance q is evaluated inside the disturbance estimator employing the discrete transfer function of the plant's nominal model G_n(z). A local feedback for the disturbance compensation is closed via the digital filter G_k(z).

Fig. 4. Structure of digital disturbance estimator

Due to the uncertainties of the plant parameters, a mismatch between the real plant and the nominal model inevitably exists. The real plant may be described as G(z) = G_n(z)(1 + δG(z)), where the perturbation is limited by the multiplicative uncertainty bound |δG(e^(jωT))| ≤ γ(ω), ω ∈ [0, π/T]. The plant output is

Y(z) = [G_n(z)(1 + δG(z))/(1 + G_k(z)G_n(z)δG(z))] U(z) − [G_n(z)(1 + δG(z))(1 − G_k(z)G_n(z))/(1 + G_k(z)G_n(z)δG(z))] M_oe(z).   (7)

Suppose that G_k(z) = G_n⁻¹(z), i.e., the digital filter represents the nominal plant inverse dynamics. Using (7), the plant output becomes Y(z) = G_n(z)U(z), which indicates that the disturbances are completely rejected and the nominal plant behavior is obtained. Unfortunately, such a G_k(z) is not a causal filter and cannot be realized. It is also evident from (7) that the model perturbation δG(z) affects the stability of the system. The robustness of the proposed structure against model uncertainties is therefore limited to the level of model perturbation δG(z), quantified in a suitable way, for which the input-output transfer function (7) remains stable.

In [4] an active disturbance estimator is proposed, where the passive digital filter G_k(z) is replaced with an active control subsystem, Fig. 5. The signal q̂ is an estimate of the compensated part of the equivalent disturbance. If the controller G_r'(z) ensures q̂ = q, then U_e(z) = −G_n⁻¹(z)Q(z) holds, which is equivalent to the passive structure with the ideal digital filter G_k(z) = G_n⁻¹(z). Hence, the nominal plant behavior is obtained, Y(z) = G_n(z)U(z). From the control design aspect, the problem of equivalent disturbance compensation is here transformed into a tracking control problem with the reference signal q(k). Depending on the controller applied inside the estimator, a certain error between q and q̂ exists in the general case, implying that complete rejection of the equivalent disturbance cannot occur and the obtained plant behavior is almost nominal.

Fig. 5. Servo-system with active disturbance estimator

V. SERVO-SYSTEM SYNTHESIS

The proposed servo-system is depicted in Fig. 5. Both controllers, G_r(z) in the main loop and G_r'(z) within the estimator, govern the nominal plant and the model, respectively, since the disturbance estimator forces the real plant to exhibit nominal behavior. Hence, identical controllers designed for the nominal plant may be used in the main loop as well as in the estimator. For the sake of simplicity of the expressions, suppose that the plant parameter identification has been done correctly and the parameter uncertainties are not significant (δG ≈ 0). The proposed servo-system dynamics is then given by

Y(z) = [G_n(z)G_r(z)/(1 + G_n(z)G_r(z))] R(z) − [G_n(z)/((1 + G_n(z)G_r'(z))(1 + G_n(z)G_r(z)))] M_oe(z).   (8)

Clearly from (8), the system performance with respect to the reference is directed only by the main controller and is identical to that of the system without the disturbance estimator (Fig. 2, Eq. (1)). Since the cancellation design method results in a satisfactory response to the reference, the main controller may be realized as PI type (Eq. (2)) with the parameters tuned according to (5). However, both controllers participate in disturbance rejection. Since the main controller already has integral action, in the case of step-like exogenous disturbances it is sufficient for the estimator controller to be of P type, G_r'(z) = k_p'.
The gain k_p' is tuned using the pole placement technique, where z_4 is the desired pole introduced by the estimator. Consequently,

k_p' = (a − z_4)/(k_m(1 − a)).                                              (9)
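The effect of the estimator loop can be checked in a small simulation of the structure of Fig. 5: a cancellation-tuned main PI (5) plus a P-type estimator gain from (9), with a step load applied halfway through the run. The plant numbers below are illustrative (not the paper's identified motor), and the loop arrangement is our reading of Figs. 4 and 5:

```python
import math

k_m, T_m, T_s = 648.0, 0.2, 1e-3           # illustrative plant values
a = math.exp(-T_s / T_m)
z3 = 0.939                                  # desired closed-loop pole
kp = a * (1 - z3) / (k_m * (1 - a))         # main PI, zero-pole cancellation (5)
ki = (1 - z3) / k_m
z4 = z3
kpe = (a - z4) / (k_m * (1 - a))            # estimator P gain, Eq. (9)

def run(with_estimator, n=1000, load_at=500, m_oe=0.01):
    y = ym = v = 0.0
    out = []
    for k in range(n):
        e = 1.0 - y                         # unit step reference
        v += e                              # integrator in k_i*z/(z-1) form
        u_r = kp * e + ki * v
        qhat = kpe * (y - ym) if with_estimator else 0.0
        u = u_r - qhat                      # active disturbance compensation
        d = m_oe if k >= load_at else 0.0   # equivalent load disturbance
        y = a * y + k_m * (1 - a) * (u - d)
        ym = a * ym + k_m * (1 - a) * u_r   # nominal model driven by u_r
        out.append(y)
    return out

y_est = run(True)
y_pi = run(False)
```

With the estimator active, the load transient decays with the fast poles z3, z4 of Eq. (10); without it, the recovery is governed by the slow uncancelled plant pole a, exactly the contrast reported in the experiments below.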

With the controllers tuned according to (5) and (9), the servo-system dynamics becomes

Y(z) = [(1 − z_3)/(z − z_3)] R(z) − [k_m(1 − a)(z − 1)/((z − z_3)(z − z_4))] M_oe(z).   (10)

It is obvious that the proposed structure with the active disturbance estimator offers the possibility of obtaining both responses, with respect to the reference and to exogenous disturbances, acceptable for high-performance servo-systems. Overshoot does not arise, and the system dynamics is completely defined by the freely adopted poles z_3, z_4. To completely reject ramp-like exogenous disturbances, the estimator controller must be of PI type with the parameters defined by Eq. (3). The system dynamics is then described by

Y(z) = [(1 − z_3)/(z − z_3)] R(z) − [k_m(1 − a)(z − 1)²/((z − z_3)(z − z_4)²)] M_oe(z).   (11)

VI. EXPERIMENTAL INVESTIGATION

The effectiveness of the proposed control structure has been investigated by experiments conducted on a servo-system with a three-phase, 50 Hz, 0.37 kW Seiber LS7 induction motor with 1.3 Nm nominal torque. The control part of the servo-system is realized on a dSPACE DS1104 R&D controller board. Indirect rotor-flux-oriented vector control of the induction motor is employed. The control scheme contains measurement of two line currents and the rotor shaft angle position, coordinate transformations, decoupling circuits, an electrical angle estimator, two local current control loops with Hz bandwidth and kHz sampling frequency, and a main speed control loop with a 1 kHz sampling frequency. By neglecting the current control loops, along with other unmodeled dynamics and nonlinearities present in such a complex system, to a certain approximation the speed control system may be considered as the one in Fig. 2. In this representation the motor parameters are: k_T = 0.648 Nm/A, J = ×10⁻⁴ kgm², B = 3×10⁻³ Nms/rad. The reference signal is given by r(t) = h(t − 0.5) rad/s, and the system is subjected to a load torque M_o(t) = 0.65 h(t − 0.5) Nm, which is 50% of the nominal torque.

In the first experiment (Fig. 6, trace (1)), the active disturbance estimator is deactivated and the main PI controller is designed using pole placement. The desired dynamics is defined by ζ = 1, ω_n = 20 rad/s (z_1 = 0.98); k_p = .7 and k_i = .8 are obtained using (3). An unwanted overshoot arises due to the dominant zero z_0 = 0.99. The main controller is then tuned by the cancellation method in the second experiment (Fig. 6, trace (2)). The desired bandwidth of 10 Hz results in the pole z_3 = 0.939, which according to (5) gives k_p = .33, k_i = .89. The system has a good response to the reference, but very slow disturbance rejection dynamics, caused by the uncancelled plant pole a. Finally, the active disturbance estimator is activated in the third experiment (Fig. 6, trace (3)). The main PI controller remains unchanged from the previous experiment, while the P controller in the estimator is set by applying pole placement under the condition z_4 = z_3, resulting in k_p' = .34 by virtue of (9). The proposed servo-system is superior to the other two, with a fast velocity response without overshoot and equally fast disturbance rejection dynamics.

Fig. 6. Velocity servo-system step responses

VII. CONCLUSION

The paper considers the design of a high-performance velocity servo-system with an active disturbance estimator, whose introduction drastically improves the system behavior with respect to exogenous disturbances. This ensures a fast response without overshoot, and the system dynamics is completely defined by freely adopted poles. The analytically predicted performance has been experimentally verified in the case of an induction motor servo-system, in which significant modeling errors and parameter uncertainties exist. The proposed servo-system has exhibited an excellent exogenous disturbance rejection property as well as robustness to parameter perturbations.

REFERENCES

[1] T. Umeno, Y. Hori, Robust speed control of DC servomotors using modern two degrees-of-freedom controller design, IEEE Trans. on Ind. Electronics, Vol. 38, No. 5, 1991.
[2] Ya.Z. Tsypkin, U. Holmberg, Robust stochastic control using the internal model principle and internal model control, Int. J. Control, Vol. 61, No. 4, 1995.
[3] Y. Choi, K. Yang, W.K. Chung, H.R. Kim, H. Suh, On the robustness and performance of disturbance observers for second-order systems, IEEE Trans. Automatic Control, Vol. 48, No. 2, pp. 315-320, 2003.
[4] B. Veselić, Č. Milosavljević, D. Mitić, Robust servo-system design based on discrete-time sliding mode control with active disturbance estimator, Transactions on Automatic Control and Computer Science, 2005.
[5] K. Ogata, Discrete-Time Control Systems, New Jersey, Prentice-Hall International, Inc.

132 Optimal Control Using Neural Networks
D. Toshkova and P. Petrov

Abstract: In the present paper a literature review concerning the problem of optimal control synthesis using neural networks is presented.

Key words: neural networks, optimal control

I. INTRODUCTION

During the last ten years, a large number of papers have been published treating the application of neural networks to optimal control synthesis for plants whose dynamics is described by linear and nonlinear ordinary and partial differential equations. This is driven by the main property of neural networks, their ability to approximate any linear or nonlinear function. The most widely used neural networks are feedforward neural networks. Another structure which is also applied is the recurrent neural network. A tendency to use a broader range of structures is emerging, related to the development of new structures.

II. OPTIMAL CONTROL SYNTHESIS USING NEURAL NETWORKS

In [6] a survey of the possibilities of using neural networks for the modeling, identification and control of systems is presented. Here only optimal decision control and model predictive control will be mentioned. In the case of model predictive control, the plant is modeled by a neural network. By using the neural network model, the future plant response is predicted over a specified time horizon. On the basis of the predicted future response, a specified performance index is minimized to obtain the optimal control. In the optimal decision control case, the state space is partitioned into separate regions, in which the control action is assumed constant. The control surface is realized through a training procedure of the neural networks. In [4] a model predictive optimal controller for nonlinear discrete systems is considered.
A block for system input-state realization is used, which transforms the system into a quadratic system with an equal number of inputs and states, in order to develop an optimal receding-horizon controller; this decreases the amount of computation in comparison with traditional optimal controllers. A nonlinear feedback law is derived, where a neural network in the feedback loop is used to generate the optimal input action. The generated input approximates the solution which is minimal with respect to a quadratic cost function whose parameters control the final states and the value and variation of the input action. An analysis of the local stability and robustness of the controller is presented. In [] the problem of determining optimal controls for nonlinear dynamical systems by using neural networks is considered; through a few examples, the possibilities of neural networks for the on-line solution of optimal control problems are demonstrated. In the major part of the publications [, 3, , , 3, 4, 5, 6, 7, 8, , ] the optimal control problem is solved by using the dynamic programming method, and the obtained solutions are approximated by using neural networks. Although the presented new approaches bear different names (adaptive critic methodology [, , , 3, 4, 5, 6], neuro-dynamic programming [3], neural dynamic optimization [, ]), they do not differ from each other substantially. Their main advantage is that the so-called curse of dimensionality problem is alleviated. The adaptive critic methodology is introduced in []. The control law for a linear or nonlinear system is determined through consecutive adaptation of two neural networks: an action network and a critic network.

Daniela G. Toshkova and Petar D. Petrov are with the Department of Production Automatization, Technical University of Varna, 1 Studentska Str., 9010 Varna, Bulgaria.
The action network captures the relationship between the state and the control, and the critic network captures the relationship between the state and the costate. Through this methodology the control law is determined for a large set of initial conditions, and it is not necessary for the control law to be determined analytically. The neural network that is used (a multilayered perceptron) does not need external training; it is only necessary for the functional form of the control law to be known beforehand. In [] the necessary conditions for the solutions obtained through the adaptive critic methodology to converge are presented, and it is shown that the obtained solution is optimal. In [, 5] the above-mentioned methodology is developed for distributed parameter systems. In [3] the method is applied to optimal control synthesis for distributed parameter systems whose dynamics is described by coupled nonlinear partial differential equations. In [4] the proper orthogonal decomposition concept is used for reducing a distributed parameter system to a lumped parameter system of low order. The optimal control problem is solved in time by applying the adaptive critic algorithm; the control solution is then given in the spatial domain by using the same orthogonal functions. In [6] the adaptive critic algorithm is elaborated, and it is shown that the need for action neural networks drops away. In [7] three adaptive critic methods used for the design of neural controllers are described: heuristic dynamic programming, dual heuristic programming and globalized dual heuristic programming. Two modifications of globalized dual heuristic programming, as well as a generalized training procedure, are suggested. The developed approaches do not differ substantially from the methodology

suggested in [2]. In both approaches two neural networks are used for approximating the solution for the optimal control: action and critic neural networks. The only difference is that a recurrent neural network is used instead of a multilayered perceptron. In [18] the approach is developed for discrete distributed parameter systems; moreover, the algorithm is elaborated so that, as in [16], the action neural networks become unnecessary. In [20, 21] neural dynamic optimization is presented as a method for synthesis of optimal feedback control for nonlinear MIMO systems. The main characteristic of neural dynamic optimization is that the solution for the optimal feedback, whose existence is proved through the dynamic programming method, is approximated by neural networks. In [20] the background and motivation for the development of neural dynamic optimization are described, and in [21] the neural dynamic optimization theory is presented. One major drawback of this approach is its large memory requirement, although this requirement is not as severe as in the classical dynamic programming method. Another methodology that has dynamic programming as its theoretical basis and uses neural networks for approximation is the so-called neuro-dynamic programming. According to the definition given in [3], neuro-dynamic programming enables systems to learn how to make good decisions by observing their own behavior, and to use built-in mechanisms for improving their actions through reinforcement. This methodology is used not only for solving optimal control problems but for a broader class of problems. In [9] a recurrent neural network is introduced for the N-stage optimal control problem. The first step of the presented approach is reformulating the N-stage optimal control problem; the gradient method is then used for deriving the dynamics equation of the recurrent neural network.
Although the approach enables obtaining real-time solutions, it has two drawbacks. First, a rigorous mathematical analysis of the stability of the neural network is lacking. Second, a neural network which combines the structure of the N-stage optimal control problem with a faster optimization method needs to be explored. In [22] an approach for synthesis of optimal control for nonlinear systems is suggested, which incorporates the N-stage optimal control problem as well as the least squares support vector machines (LS-SVM) approach for mapping the state space into the action space. SVMs with radial basis function kernels are used. The solution is characterized by a set of nonlinear equations. An alternative formulation as a constrained nonlinear optimization problem in fewer unknowns is given, together with a method for imposing local stability in the LS-SVM control scheme. Advantages of LS-SVM control are that no number of hidden units has to be determined for the controller and that no centers have to be specified for the Gaussian kernels when applying Mercer's condition. The curse of dimensionality is avoided in comparison with defining a regular grid for the centers in classical radial basis function networks. This comes at the expense of taking the trajectory of the state variables as additional unknowns in the optimization problem, while classical neural network approaches typically lead to parametric optimization problems. In the SVM methodology the number of unknowns equals the number of training data, while in the primal space the number of unknowns can be infinite-dimensional. A drawback of this approach is the large number of unknowns. In [5] the problem of multistage optimal control is considered. The problem is solved by using wavelet neural networks (WNN), since their capability for learning and generalization of functions is greater. The control law is approximated by a WNN, and the Lagrangian is constructed in order to transform the optimal control problem into an optimization problem.
A weight is introduced to regulate the balance between the control cost and good system performance; a WNN is used for mapping the function from the state space into the action space, after which the optimal control is achieved. The value of the weight affects the simulation result. In [19] an interactive fuzzy satisficing method is suggested for the solution of a multiobjective optimal control problem for a linear distributed parameter system governed by the heat conduction equation. In order to reduce the control problem to an approximate multiobjective linear programming problem, a numerical integration formula is used and suitable auxiliary variables are introduced. By considering the vague nature of human judgment, the decision maker is assumed to have fuzzy goals for the objective functions. Having elicited the corresponding linear membership functions through interaction with the decision maker, if the decision maker specifies the reference membership values, the corresponding Pareto optimal solution can be obtained by solving the minimax problems. A linear-programming-based interactive fuzzy satisficing method is then presented for efficiently deriving a satisfying solution for the decision maker from a Pareto optimal solution set. In [23] an approach for optimal control synthesis is suggested in which a fuzzy neural network is used as a controller, trained through simulation of the process of the controlled system. In [1] the design of a neural-network-based regulator for nonlinear plants is considered. Both state and output feedback regulators with deterministic and stochastic disturbances are investigated. A multilayered feedforward neural network is employed as the nonlinear controller, and its training utilizes the concept of so-called block partial derivatives. The suggested approach may also be used for optimal control synthesis for plants with state and control constraints.
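Most of the approaches surveyed above approximate the feedback law whose existence is guaranteed by dynamic programming. As a point of reference, the following is a minimal numpy sketch of that dynamic-programming backbone for the linear-quadratic special case: value iteration on the discrete-time Riccati recursion. The plant below is an illustrative double integrator, not an example from any of the cited papers.

```python
import numpy as np

# Value iteration on the discrete-time Riccati recursion (the DP fixed point
# that neural approaches learn to approximate). Plant matrices are assumed.
A = np.array([[1.0, 0.1], [0.0, 1.0]])   # illustrative discrete plant (dt = 0.1)
B = np.array([[0.005], [0.1]])
Q = np.eye(2)                             # state weighting
R = np.array([[1.0]])                     # control weighting

P = np.zeros((2, 2))                      # terminal cost of the DP recursion
for _ in range(500):                      # backward recursion to a fixed point
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ (A - B @ K)

# u = -K x is the stationary optimal feedback; the closed loop is stable
print(np.all(np.abs(np.linalg.eigvals(A - B @ K)) < 1.0))   # → True
```

The recursion converges here because the assumed pair (A, B) is controllable and Q is positive definite; neural methods trade this closed-form recursion for a trainable approximator when the dynamics are nonlinear.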
In [7] a neural-network-based algorithm for discrete constrained optimal control synthesis for nonlinear systems is presented. In [25] a recurrent learning algorithm for optimal control synthesis for continuous dynamic systems is suggested. The designed controllers are in the form of unfolded recurrent neural networks. The proposed learning algorithm is characterized by its double forward recurrent loop structure for solving both the temporal recurrent and the structure recurrent problems. The first problem results from the nature of general optimal control problems, where the objective functions are often evaluated at some specific time steps or system states instead of all of them, which causes missing learning signals at some time steps or system states. The second problem is due to the high-order discretization of the continuous systems by the Runge-Kutta

method, which is performed to increase the control accuracy. The discretization transforms the system into several identical subnetworks interconnected together, like a recurrent neural network expanded along the time axis. Two recurrent learning algorithms with different convergence properties are derived: first- and second-order learning algorithms. The stability and the robustness of the designed controllers still have to be studied in detail. In [24] a multilayered recurrent neural network is suggested for synthesizing linear quadratic optimal control systems by solving the algebraic matrix Riccati equation in real time. The suggested recurrent neural network consists of four bidirectionally connected layers and is shown to be capable of solving algebraic matrix Riccati equations, which enables synthesizing linear quadratic control systems in real time. In [8] a new alternative for finding the optimal control for discrete systems is developed, based on the continuous neural network of Hopfield (CNNH). The quadratic cost function is transformed into the energy function of the CNNH, and the control is the output vector of the CNNH. As the CNNH works in parallel and in real time, the method may meet all the requirements for real-time control.

III.
CONCLUSION

From the survey of the publications considering the optimal control synthesis problem for distributed parameter systems of parabolic type, as well as of those discussing the use of neural networks in optimal control synthesis, the following may be noted. Neural networks enable optimal control synthesis in real time. An important characteristic of neural networks is that they are universal function approximators; on the other hand, good approximation is hindered by the possibility of getting trapped in a local minimum. The application of neural networks to the synthesis of optimal control for distributed parameter systems has not been investigated entirely (as far as is known to the authors, only one approach, the adaptive critic [12]-[16], has been suggested). It is not definitely established in the publications how stable the performance of the suggested controllers is.

REFERENCES
[1] Ahmed, S. and M.A. Al-Dajani, "Neural Regulator Design", Neural Networks, 1998, Vol. 11, No. 9, pp.
[2] Balakrishnan, S.N. and V. Biega, "Adaptive Critic Based Neural Networks", Proc. Am. Contr. Conf., Seattle, WA, 1995, pp.
[3] Bertsekas, D.P. and J.N. Tsitsiklis, Neuro-Dynamic Programming, Athena Scientific, Belmont, Massachusetts, 1996.
[4] Foley, D.C. and N. Sadegh, "Short horizon optimal control of nonlinear systems", Proceedings of the IEEE Conference on Decision and Control, 2003, Vol., pp.
[5] Hu, X., H. Lue and J. He, "Research upon Multistage Optimal Control by Wavelet Neural Network", The Sixth World Congress on Intelligent Control and Automation, 2006, Vol., pp.
[6] Hunt, K.J., D. Sbarbaro, R. Zbikowski and P.J. Gawthrop, "Neural Networks for Control Systems: A Survey", Automatica, 1992, Vol. 28, No. 6, pp. 1083-1112.
[7] Irigoyen, E., J.B. Galvan and M.J. Perez-Ilzarbe, "Neural networks for constrained optimal control of non-linear systems", Proceedings of the International Joint Conference on Neural Networks, IEEE, Piscataway, NJ, USA, Vol.
4, pp.
[8] Li, Ming-Ai and Ruan, Xiao-Gang, "Optimal control with continuous Hopfield neural network", Proceedings of the 2003 IEEE International Conference on Robotics, Intelligent Systems and Signal Processing, 2003, Vol., pp.
[9] Liao, L.-Z., "A Recurrent Neural Network for N-stage Optimal Control Problems", Neural Processing Letters, 1999, Vol., pp.
[10] Liu, Xin and S.N. Balakrishnan, "Convergence analysis of adaptive critic based optimal control", Proceedings of the American Control Conference, IEEE, Piscataway, NJ, USA, Vol. 3, pp.
[11] Narendra, K.S. and S.J. Brown, "Neural Networks for Optimal Control", Proceedings of the 36th Conference on Decision and Control, San Diego, California, USA, 1997, pp.
[12] Padhi, R. and S.N. Balakrishnan, "Infinite time optimal neuro control for distributed parameter systems", Proceedings of the American Control Conference, IEEE, Piscataway, NJ, USA, Vol. 6, pp.
[13] Padhi, R. and S.N. Balakrishnan, "A systematic synthesis of optimal process control with neural networks", Proceedings of the American Control Conference, Vol. 3, pp.
[14] Padhi, Radhakant and S.N. Balakrishnan, "Proper orthogonal decomposition based feedback optimal control synthesis of distributed parameter systems using neural networks", Proceedings of the American Control Conference, Vol. 6, pp.
[15] Padhi, R., S.N. Balakrishnan and T. Randolph, "Adaptive Critic Based Optimal Neuro Control Synthesis for Distributed Parameter Systems", Automatica, 2001, Vol. 37, pp.
[16] Padhi, R., Optimal Control of Distributed Parameter Systems Using Adaptive Critic Neural Networks, Dissertation, University of Missouri-Rolla.
[17] Prokhorov, D.V. and D. Wunsch, "Adaptive Critic Designs", IEEE Transactions on Neural Networks, September 1997, pp.
[18] Prokhorov, D.V., "Optimal Neurocontrollers for Discretized Distributed Parameter Systems", Proceedings of the American Control Conference, 2003, Vol., pp.
[19] Sakawa, M., M. Inuiguchi, K. Kato and T.
Ikeda, "An Interactive Fuzzy Satisficing Method for Multiobjective Optimal Control Problems in Linear Distributed Parameter Systems", Fuzzy Sets and Systems, 1999, Vol., pp.
[20] Seong, C.-Y. and B. Widrow, "Neural Dynamic Optimization for Control Systems, Part I: Background", IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 2001, Vol. 31, No. 4, pp.
[21] Seong, C.-Y. and B. Widrow, "Neural Dynamic Optimization for Control Systems, Part II: Theory", IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 2001, Vol. 31, No. 4, pp.
[22] Suykens, J.A.K., J. Vandewalle and B. De Moor, "Optimal Control by Least Squares Support Vector Machines", Neural Networks, 2001, Vol. 14, pp.
[23] Tam, Peter K.S., Z.J. Zhou and Z.Y. Mao, "On designing an optimal fuzzy neural network controller using genetic algorithms", Proceedings of the World Congress on Intelligent Control and Automation (WCICA), Vol., pp.
[24] Wang, J. and G. Wu, "A Multilayer Recurrent Neural Network for Solving Continuous-time Algebraic Riccati Equations", Neural Networks, 1998, Vol. 11, pp.
[25] Wang, Y.-J. and C.-T. Lin, "Recurrent Learning Algorithms for Designing Optimal Controllers of Continuous Systems", IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans, Vol. 31, No. 5, pp.


Some Discretizing Problems in Control Theory

Milica B. Naumović

Abstract - The methods of obtaining the discrete equivalents of models of continuous-time objects without and with time delay, as well as some model conversion algorithms, are well known in the literature. The discretizing and conversion method presented in this paper illustrates the use of Van Loan's formula for the derivation of a block triangular matrix exponential [1].

Keywords - Control engineering, discretizing problems, matrix exponential, system with time delay, model conversion.

I. INTRODUCTION

In numerous control applications it is useful to be able to compute the matrix exponential in an effective manner. Moreover, computing integrals involving the matrix exponential is necessary, for example, in order to find the cost equivalents in optimal control theory. Notice that Van Loan's method [1] for computing four characteristic integrals, based on the derivation of a block triangular matrix exponential, can be used in practice in control theory. The matrix exponential is in any case one of the most frequently computed matrix functions [2], and many algorithms developed for that purpose up to now have poor numerical performance [3]. A method for computing the exponential of a certain block triangular matrix, due to Van Loan [1], is given as follows:

Theorem 1. Let n_i, i = 1, 2, 3, 4 be positive integers and set m to be their sum. If the m x m block triangular matrix C is defined by

C = \begin{bmatrix} A_1 & B_1 & C_1 & D_1 \\ 0 & A_2 & B_2 & C_2 \\ 0 & 0 & A_3 & B_3 \\ 0 & 0 & 0 & A_4 \end{bmatrix},   (1)

with diagonal blocks of dimensions n_1, n_2, n_3, n_4, then for t >= 0

e^{Ct} = \begin{bmatrix} F_1(t) & G_1(t) & H_1(t) & K_1(t) \\ 0 & F_2(t) & G_2(t) & H_2(t) \\ 0 & 0 & F_3(t) & G_3(t) \\ 0 & 0 & 0 & F_4(t) \end{bmatrix},   (2)

where

F_j(t) = e^{A_j t},  j = 1, 2, 3, 4,   (3)

Milica B.
Naumović is with the Faculty of Electronic Engineering, University of Niš, Aleksandra Medvedeva 14, 18000 Niš, Serbia

G_j(t) = \int_0^t e^{A_j (t-s)} B_j e^{A_{j+1} s} \, ds,  j = 1, 2, 3,   (4)

H_j(t) = \int_0^t e^{A_j (t-s)} C_j e^{A_{j+2} s} \, ds + \int_0^t \int_0^s e^{A_j (t-s)} B_j e^{A_{j+1}(s-r)} B_{j+1} e^{A_{j+2} r} \, dr \, ds,  j = 1, 2,   (5)

K_1(t) = \int_0^t e^{A_1 (t-s)} D_1 e^{A_4 s} \, ds + \int_0^t \int_0^s e^{A_1 (t-s)} \left[ C_1 e^{A_3 (s-r)} B_3 + B_1 e^{A_2 (s-r)} C_2 \right] e^{A_4 r} \, dr \, ds + \int_0^t \int_0^s \int_0^r e^{A_1 (t-s)} B_1 e^{A_2 (s-r)} B_2 e^{A_3 (r-w)} B_3 e^{A_4 w} \, dw \, dr \, ds.   (6)

Corollary to Theorem 1. Let A, B and Q_c be real matrices of dimension n x n, n x p, and n x n, respectively. Assume that the matrix Q_c is symmetric (Q_c^T = Q_c) and positive semidefinite (x^T Q_c x >= 0). Following the previous theorem and combining various submatrices, it can be shown that the integral

Q(T) = \int_0^T e^{A^T s} Q_c e^{A s} \, ds   (7)

can be calculated as

Q(T) = F_3(T)^T G_2(T),   (8)

where

\exp\left( \begin{bmatrix} -A^T & Q_c \\ 0 & A \end{bmatrix} T \right) = \begin{bmatrix} F_2(T) & G_2(T) \\ 0 & F_3(T) \end{bmatrix}.   (9)

The need for computing integral (7) arises, for example, in the optimal sampled-data regulation problem. It is possible to compute the exponential of matrices of low dimension analytically for an arbitrary sampling interval T. The advantage of an analytical computation is that the result is expressed in terms of different parameters, and it is possible to examine the effect of changing these parameters. Recall that an arbitrary matrix function f(C) can be computed via the well-known Cayley-Hamilton theorem. Moreover, if the matrix C has distinct eigenvalues, the method of eigenvalue decomposition can be used for computing the matrix function [2], [4]. In this paper the problems of discretizing continuous-time systems without and with time delay, as well as the digital model conversion, are solved by computing the exponential of some special-form matrices.
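The corollary (7)-(9) lends itself to a direct numerical check. The sketch below uses an illustrative double-integrator matrix A, chosen because its nilpotency makes every matrix exponential an exact finite Taylor sum in plain numpy; it is not an example from the paper, and Q(T) then has a simple closed form to compare against.

```python
import numpy as np
from math import factorial

# Q(T) = integral of exp(A^T s) Qc exp(A s) over [0, T], obtained from one
# exponential of the block matrix [[-A^T, Qc], [0, A]] as in the corollary.

def expm_nilpotent(M):
    """Exact exponential of a nilpotent matrix via its finite Taylor series."""
    n = M.shape[0]
    return sum(np.linalg.matrix_power(M, k) / factorial(k) for k in range(n))

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # illustrative (nilpotent) plant matrix
Qc = np.eye(2)
T = 0.5

blk = np.block([[-A.T, Qc], [np.zeros((2, 2)), A]])
E = expm_nilpotent(blk * T)              # all eigenvalues of blk are zero
G2 = E[:2, 2:]
F3 = E[2:, 2:]                           # = exp(A T)
Q = F3.T @ G2                            # the corollary's formula

# closed-form value of the integral for this A: [[T, T^2/2], [T^2/2, T + T^3/3]]
Q_ref = np.array([[T, T**2 / 2], [T**2 / 2, T + T**3 / 3]])
print(np.allclose(Q, Q_ref))             # → True
```

For a general (non-nilpotent) A the same block construction works; one would simply call a general-purpose matrix exponential routine instead of the finite series.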

II. DISCRETIZING THE MODELS OF ANALOG PLANTS

For simplicity, without loss of generality, consider an n-th-order single-input single-output control object. It is convenient to introduce the realization sets as follows [5]:

S = \{ (A_c, b_c, d) \}: \; G_c(s) = \frac{N_c(s)}{D_c(s)} = d (sI - A_c)^{-1} b_c,   (10)

S_q = \{ (A_q, b_q, d) \}: \; G_q(z) = \frac{N_q(z)}{D_q(z)} = d (zI - A_q)^{-1} b_q,   (11)

where

A_q = \Phi(T) = e^{A_c T},   (12)

b_q = \int_0^T e^{A_c \tau} b_c \, d\tau,   (13)

G_q(z) = \mathcal{Z}\left\{ \mathcal{L}^{-1}\left[ \frac{1 - e^{-Ts}}{s} G_c(s) \right] \right\}.   (14)

Thus, (11) represents the ZOH equivalent model of (10) at sampling interval T, the so-called q-model of the continuous-time system. Note that A_q is a matrix exponential, while the vector b_q must be computed by integration as shown in (13). However, both A_q and b_q can be computed simultaneously using a single matrix exponential. Define the (n+1) x (n+1) block matrix M as

M = \begin{bmatrix} A_c & b_c \\ 0 & 0 \end{bmatrix},   (15)

in which the zero in the lower left-hand corner represents an n-dimensional zero row vector. Then, by using Van Loan's formulas (1)-(6), the matrix exponential of MT is found to be

e^{MT} = \begin{bmatrix} A_q & b_q \\ 0 & 1 \end{bmatrix}.   (16)

Thus, the digital model matrices A_q and b_q can be computed as

[A_q \;\; b_q] = [I \;\; 0] \, e^{MT},   (17)

whereas

[A_c \;\; b_c] = [I \;\; 0] \, M.   (18)

Example 1. We will use the procedure described in this section to compute the continuous-time plant model (A_c, b_c) on the basis of its ZOH equivalent model, given by

A_q = \begin{bmatrix} 1 & T \\ 0 & 1 \end{bmatrix}, \quad b_q = \begin{bmatrix} T^2/2 \\ T \end{bmatrix},

where T is the sampling period. To use (15)-(18), we form the matrix

\bar{M} = e^{MT} = \begin{bmatrix} A_q & b_q \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} 1 & T & T^2/2 \\ 0 & 1 & T \\ 0 & 0 & 1 \end{bmatrix},

which has the eigenvalue \lambda = 1 with multiplicity m = 3. Since all eigenvalues of \bar{M} are positive, the matrix logarithm is well defined, and the 3 x 3 matrix M can be written as

M = f(\bar{M}) = (\ln \bar{M}) / T.

To calculate this matrix function, formulas based on the well-known Cayley-Hamilton theorem can be used.
Namely, the matrix M can be written as a matrix polynomial of degree 2:

M = \alpha(\bar{M}) = \alpha_1 \bar{M}^2 + \alpha_2 \bar{M} + \alpha_3 I.

To compute the coefficients \alpha_i, i = 1, 2, 3, the scalar function f(\lambda) = (\ln \lambda)/T, the polynomial \alpha(\lambda) = \alpha_1 \lambda^2 + \alpha_2 \lambda + \alpha_3, as well as their first and second derivatives with respect to \lambda are required. These are evaluated at the eigenvalue \lambda = 1:

f(1) = 0 = \alpha(1) = \alpha_1 + \alpha_2 + \alpha_3,
f'(1) = 1/T = \alpha'(1) = 2\alpha_1 + \alpha_2,
f''(1) = -1/T = \alpha''(1) = 2\alpha_1.

When the resulting values \alpha_1 = -1/(2T), \alpha_2 = 2/T and \alpha_3 = -3/(2T) are substituted, the result is

M = -\frac{1}{2T} \begin{bmatrix} 1 & 2T & 2T^2 \\ 0 & 1 & 2T \\ 0 & 0 & 1 \end{bmatrix} + \frac{2}{T} \begin{bmatrix} 1 & T & T^2/2 \\ 0 & 1 & T \\ 0 & 0 & 1 \end{bmatrix} - \frac{3}{2T} I = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{bmatrix}.

Finally, A_c and b_c are extracted from the just-derived matrix M according to the partitions shown in (15). So we get the state-space model (A_c, b_c), with

A_c = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}, \quad b_c = \begin{bmatrix} 0 \\ 1 \end{bmatrix},

for the considered double-integrator plant.

III. MODEL CONVERSIONS

Let an integer N denote the ratio between the slow sampling period T_s and the fast sampling period T, i.e. N = T_s / T. Let (A_{qs}, b_{qs}, d) represent the slow discrete-time model of the corresponding continuous-time model (10). The commonly used matrix continued-fraction method to convert (A_q, b_q, d) to (A_c, b_c, d), for example, is [6]:

A_c = \frac{1}{T} \ln A_q \approx \frac{2}{T} \left[ F + \frac{F^3}{3} + \frac{F^5}{5} + \cdots \right],   (19)

where

F \stackrel{def}{=} (A_q - I_n)(A_q + I_n)^{-1}.   (20)

The vector b_c can be found by

b_c = A_c (A_q - I_n)^{-1} b_q.   (21)
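The computation in Example 1 can be checked numerically. In the sketch below (plain numpy, nothing beyond the source's own construction) the double-integrator M is nilpotent, so both e^{MT} and its logarithm are exact finite series and no general-purpose expm/logm routine is needed.

```python
import numpy as np
from math import factorial

# Numeric check of Example 1: one exponential of M = [[Ac, bc], [0, 0]]
# gives both Aq and bq, and the matrix logarithm recovers M.
T = 0.3

Ac = np.array([[0.0, 1.0], [0.0, 0.0]])
bc = np.array([[0.0], [1.0]])
M = np.block([[Ac, bc], [np.zeros((1, 3))]])

# e^{MT} = I + MT + (MT)^2/2 exactly, since (MT)^3 = 0
E = sum(np.linalg.matrix_power(M * T, k) / factorial(k) for k in range(3))
Aq, bq = E[:2, :2], E[:2, 2:]
print(np.allclose(Aq, [[1.0, T], [0.0, 1.0]]))   # → True
print(np.allclose(bq, [[T**2 / 2], [T]]))        # → True

# matrix logarithm: ln(I + V) = V - V^2/2 for this (strictly triangular) V
V = E - np.eye(3)
Mrec = (V - V @ V / 2) / T
print(np.allclose(Mrec, M))                       # → True
```

For non-nilpotent plants the same partitioning applies with a general matrix exponential/logarithm routine in place of the finite sums.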

The conversion of the fast-rate digital model (A_q, b_q, d) to a slow-rate digital model (A_{qs}, b_{qs}, d) with the slow sampling period T_s can be carried out as follows. According to (21), we have b_q = (A_q - I_n) A_c^{-1} b_c, which gives A_c^{-1} b_c = (A_q - I_n)^{-1} b_q, and we obtain (A_{qs}, b_{qs}) from (A_q, b_q) as b_{qs} = (A_{qs} - I_n) A_c^{-1} b_c. Thus

A_{qs} = A_q^N   (22)

and

b_{qs} = (A_{qs} - I_n)(A_q - I_n)^{-1} b_q.   (23)

Note that the conversion of the fast-rate digital model to a slow-rate one can be obtained in another way by using relation (16), i.e.

e^{M T_s} = \begin{bmatrix} A_{qs} & b_{qs} \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} A_q & b_q \\ 0 & 1 \end{bmatrix}^N.   (24)

By induction it can be shown that

\begin{bmatrix} A_q & b_q \\ 0 & 1 \end{bmatrix}^N = \begin{bmatrix} A_q^N & \sum_{i=0}^{N-1} A_q^i b_q \\ 0 & 1 \end{bmatrix}.   (25)

So the matrices of the T_s-model and the T-model are related by

A_{qs} = A_q^N, \quad b_{qs} = \sum_{i=0}^{N-1} A_q^i b_q.   (26)

IV. DISCRETIZING A CONTINUOUS-TIME SYSTEM WITH TIME DELAY

Consider a single-input single-output continuous-time system with time delay, described in state space by

\dot{x}(t) = A_c x(t) + b_c u(t - \tau),   (27)
c(t) = d \, x(t).   (28)

It is assumed that the time delay may be longer than the sampling period T. Let

\tau = dT + \tau', \quad 0 < \tau' \le T,   (29)

where d is an integer. Discrete-time transfer functions of systems with a delay that is not an integer multiple of the sampling period are easily obtained by using the modified z-transform [7]. The discrete-time state-space model of the system (27)-(29) is given in the literature [8]-[10], [4] by

\begin{bmatrix} x((k+1)T) \\ u(kT) \end{bmatrix} = \begin{bmatrix} \Phi & \Gamma_1 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} x(kT) \\ u((k-1)T) \end{bmatrix} + \begin{bmatrix} \Gamma_0 \\ 1 \end{bmatrix} u(kT) \quad \text{for } d = 0,   (30)

or, when d >= 1,

\begin{bmatrix} x((k+1)T) \\ u(kT - dT + T) \\ \vdots \\ u(kT) \end{bmatrix} = \begin{bmatrix} \Phi & \Gamma_1 & \Gamma_0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & & & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 0 \end{bmatrix} \begin{bmatrix} x(kT) \\ u(kT - dT) \\ \vdots \\ u(kT - T) \end{bmatrix} + \begin{bmatrix} 0 \\ \vdots \\ 0 \\ 1 \end{bmatrix} u(kT),   (31)

where

\Phi = e^{A_c T}, \quad \Gamma_1 = e^{A_c (T - \tau')} \int_0^{\tau'} e^{A_c \lambda} b_c \, d\lambda, \quad \Gamma_0 = \int_0^{T - \tau'} e^{A_c \lambda} b_c \, d\lambda.   (32)

The output equation is obtained from (28) as

c(kT) = [\, d \;\; 0 \; \cdots \; 0 \,] \begin{bmatrix} x(kT) \\ u(kT - dT) \\ \vdots \\ u(kT - T) \end{bmatrix}.   (33)

Notice that the above equations (30) and (31) contain partitioned matrices; each zero below the matrix \Phi represents a row vector of n zeros. Recall that because the signal u(t) is piecewise constant over the sampling interval, the delayed signal u(t - \tau) is also piecewise constant. However, the delayed signal changes between the sampling instants, as Fig. 1 visualizes.

Fig. 1. The piecewise constant signals u(t) and u(t - \tau), 0 < \tau \le T

To integrate the differential equation (27) over one sampling period in order to obtain the ZOH equivalent model, it is convenient to split the integration interval into two parts, so that the control signal u(t - \tau) is constant in each part. Hence, the motion of the considered dynamical system (27)-(29) in the interval kT <= t < (k+1)T is

x(t) = \Phi(t - kT) x(kT) + \Theta(t - kT) u((k-1)T),  \quad kT \le t < kT + \tau,   (34)

and

x(t) = \Phi(t - kT - \tau) x(kT + \tau) + \Theta(t - kT - \tau) u(kT),  \quad kT + \tau \le t < (k+1)T,   (35)

where

\Phi(t) = e^{A_c t} \quad \text{and} \quad \Theta(t) = \int_0^t \Phi(t - \upsilon) b_c \, d\upsilon.   (36)

If we now substitute t = kT + \tau in (34) and t = (k+1)T in (35), we obtain

x(kT + \tau) = \Phi(\tau) x(kT) + \Theta(\tau) u((k-1)T)   (37)

and

x((k+1)T) = \Phi(T - \tau) x(kT + \tau) + \Theta(T - \tau) u(kT).   (38)

It is clear that the relations (37) and (38) can be expressed as functions of the matrices M and e^{MT}, given by (15) and (16), as shown below:

\begin{bmatrix} x(kT + \tau) \\ u((k-1)T) \end{bmatrix} = e^{M\tau} \begin{bmatrix} x(kT) \\ u((k-1)T) \end{bmatrix} = \begin{bmatrix} \bar{A}_q & \bar{b}_q \\ 0 & 1 \end{bmatrix} \begin{bmatrix} x(kT) \\ u((k-1)T) \end{bmatrix}   (39)

and

\begin{bmatrix} x((k+1)T) \\ u(kT) \end{bmatrix} = e^{M(T - \tau)} \begin{bmatrix} x(kT + \tau) \\ u(kT) \end{bmatrix} = \begin{bmatrix} \tilde{A}_q & \tilde{b}_q \\ 0 & 1 \end{bmatrix} \begin{bmatrix} x(kT + \tau) \\ u(kT) \end{bmatrix}.   (40)

If we substitute (39) in (40) we obtain

\begin{bmatrix} x((k+1)T) \\ u(kT) \end{bmatrix} = \begin{bmatrix} \tilde{A}_q \bar{A}_q x(kT) + \tilde{A}_q \bar{b}_q u((k-1)T) + \tilde{b}_q u(kT) \\ u(kT) \end{bmatrix}.   (41)

Note that we can compute the product of the matrix exponentials as follows:

e^{M(T - \tau)} e^{M\tau} = \begin{bmatrix} \tilde{A}_q & \tilde{b}_q \\ 0 & 1 \end{bmatrix} \begin{bmatrix} \bar{A}_q & \bar{b}_q \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} \tilde{A}_q \bar{A}_q & \tilde{A}_q \bar{b}_q + \tilde{b}_q \\ 0 & 1 \end{bmatrix}.   (42)

Finally, equations (40)-(42) can be compared with (30)-(32), resulting in

\Phi = \tilde{A}_q \bar{A}_q, \quad \Gamma_0 = \tilde{b}_q, \quad \Gamma_1 = \tilde{A}_q \bar{b}_q.   (43)

Example 2. Calculate the ZOH equivalent model for the following continuous-time system with time delay:

\dot{x}(t) = \begin{bmatrix} -1 & 0 \\ 1 & -1 \end{bmatrix} x(t) + \begin{bmatrix} 1 \\ 0 \end{bmatrix} u(t - \tau),

where \tau = 0.2 and T = 0.3. The matrix M defined in (15) is

M = \begin{bmatrix} A_c & b_c \\ 0 & 0 \end{bmatrix} = \begin{bmatrix} -1 & 0 & 1 \\ 1 & -1 & 0 \\ 0 & 0 & 0 \end{bmatrix}.

The matrix exponentials are

e^{M\tau} = \begin{bmatrix} \bar{A}_q & \bar{b}_q \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} e^{-\tau} & 0 & 1 - e^{-\tau} \\ \tau e^{-\tau} & e^{-\tau} & 1 - (1+\tau)e^{-\tau} \\ 0 & 0 & 1 \end{bmatrix}

and

e^{M(T - \tau)} = \begin{bmatrix} \tilde{A}_q & \tilde{b}_q \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} e^{-(T-\tau)} & 0 & 1 - e^{-(T-\tau)} \\ (T-\tau)e^{-(T-\tau)} & e^{-(T-\tau)} & 1 - (1+T-\tau)e^{-(T-\tau)} \\ 0 & 0 & 1 \end{bmatrix}.

Using (43) we get

\Phi = \tilde{A}_q \bar{A}_q = \begin{bmatrix} e^{-T} & 0 \\ T e^{-T} & e^{-T} \end{bmatrix} \approx \begin{bmatrix} 0.7408 & 0 \\ 0.2222 & 0.7408 \end{bmatrix},

\Gamma_0 = \tilde{b}_q = \begin{bmatrix} 1 - e^{-(T-\tau)} \\ 1 - (1+T-\tau)e^{-(T-\tau)} \end{bmatrix} \approx \begin{bmatrix} 0.0952 \\ 0.0047 \end{bmatrix},

and

\Gamma_1 = \tilde{A}_q \bar{b}_q \approx \begin{bmatrix} 0.1640 \\ 0.0323 \end{bmatrix}.

V. CONCLUSION

This paper deals with a procedure for simultaneously computing both matrices of the zero-order-hold equivalent q-model (A_q and b_q) using a single matrix exponential. Several applications of this effective approach in control tasks are pointed out.

REFERENCES
[1] C. Van Loan, "Computing Integrals Involving the Matrix Exponential", IEEE Trans. Automat. Contr., Vol. AC-23, No. 3, pp. 395-404, 1978.
[2] G.H. Golub, C.F.
Van Loan, Matrix Computations, Johns Hopkins University Press, Baltimore, MD, 1983.
[3] C.B. Moler, C.F. Van Loan, "Nineteen Dubious Ways to Compute the Exponential of a Matrix", SIAM Review, Vol. 20, No. 4, pp. 801-836, 1978.
[4] R.J. Vaccaro, Digital Control: A State-Space Approach, McGraw-Hill, Inc., 1995.
[5] M.B. Naumović, M.R. Stojić, "Comparative study of finite word length effects in digital filter design via the shift and delta transforms", Electrical Engineering, Archiv für Elektrotechnik, Vol. 8, No. 3-4, pp. 3-6.
[6] L.S. Shieh, X.M. Zhao, J.W. Sunkel, "Hybrid state-space self-tuning control using dual-rate sampling", IEE Proc. Control Theory and Applications, Part D, Vol. 138, No., pp. 5-58, 1991.
[7] E.I. Jury, Theory and Application of the z-Transform Method, New York: Wiley, 1964.
[8] G.F. Franklin, J.D. Powell, Digital Control of Dynamic Systems, Reading, MA: Addison-Wesley, 1980.
[9] B. Wittenmark, "Sampling of a System with a Time Delay", IEEE Trans. Automat. Contr., Vol. AC-30, No. 5, pp. 507-510, 1985.
[10] K.J. Åström, B. Wittenmark, Computer-Controlled Systems: Theory and Design, Englewood Cliffs, NJ: Prentice-Hall.

SESSION PTDS I

Power Transmission and Distribution Systems I


An Algorithm for Coupled Electric and Thermal Fields in Insulation of the Large Power Cables

Ion T. Cârstea and Daniela P. Cârstea

Abstract - Dielectric heating is caused by losses due to friction in the molecular polarisation process in dielectric materials. A polluted dielectric has a finite resistance, so the leakage current in the dielectric heats it. The heating problem is a coupled thermal-electric problem. The paper presents an algorithm based on a 2D model for coupled fields in the insulation of a large power cable. The heat transfer in the insulation is described by the heat conduction equation, where the heat sources are both internal sources generated by the leakage current in a resistive dielectric and boundary heat sources of the convective and Dirichlet/Neumann type.

Keywords - Coupled fields; Dielectric heating; Finite element method.

I. INTRODUCTION

This work deals with the heat generated by ohmic losses caused by the electric field in high-voltage cables. The problem is described by a coupled thermal-electric set of equations; the coupling between the two fields is the thermal effect of the electric current and a material property, the electrical conductivity. A computation algorithm for coupled problems in two dimensions is presented, based on the finite element method (FEM). In our example we consider only the steady-state regime for the electric field, although many transient regimes appear in the behaviour of electromagnetic devices. The assumption is acceptable because the time constants of the electric phenomenon are much smaller than the time constants of the thermal field. The problem of dielectric heating involves two approaches:
1. The capacitive case. In this case, all the involved insulation media can be assumed to be perfect dielectrics without free charges.
The mathematical model is given by Laplace's equation, written in terms of the potential V. The conductivity of the medium is zero; in other words, the dielectric is assumed to be perfectly insulating, so that neither its permittivity nor the voltage frequency matters.
2. The resistive case. In this case the resistive contribution is not negligible.

Ion T. Cârstea is with the Faculty of Automatics, Computers and Electronics, Craiova, str. Doljului 4, bl. C8c, sc., apt. 7, judet Dolj, Romania
Daniela P. Cârstea is with the Industrial Group of Romanian Railways, Craiova, str. Brâncuşi nr. 5, Craiova, Romania

We consider a coaxial cable with two insulation layers that can be imperfect dielectrics. Our target example is selected in order to compare the analytical solutions with the numerical solutions. At the application of a voltage U, the field changes from a purely capacitive distribution to a purely resistive one, with a time-varying field in between. Generally speaking, there is no perfect dielectric insulation, so a leakage current exists, and the ohmic losses cause dielectric heating. A parallel-plane model can be used to compute the electric and thermal fields.

II. MATHEMATICAL MODEL

The electric field distribution can be obtained by approximation of the Maxwell equations. These approximations take different forms in accordance with the material properties of the equipment. In modelling these physical systems we must consider both perfect dielectrics and imperfect (or polluted) dielectrics.

Fig. 1. The analysis domain

In our target example the analysis domain is plotted in Fig. 1, together with the mesh of triangular elements. The static field distribution can be modelled by the following equations:

\nabla \times E = 0; \quad E = \rho J,

with \rho the material resistivity, E the electric field strength and J the current density.
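For the two-layer coaxial geometry, the resistive (DC) steady state has a closed-form solution that can serve as the validation target the authors mention. The sketch below is a hedged numpy illustration: the outer radius, the two resistivities and the voltage are placeholder values (the corresponding figures are not legible in this copy), so only the structure of the computation should be taken from it.

```python
import numpy as np

# Closed-form resistive field in a two-layer coaxial insulation: the radial
# leakage current density per unit length is J(r) = I/(2*pi*r), so
# E(r) = rho*J(r), and the applied voltage is the integral of E over both layers.
r1, r2, r3 = 5e-3, 6e-3, 10e-3     # layer radii [m]; r3 is an assumed value
rho1, rho2 = 1e9, 1e10             # layer resistivities [Ohm*m]; assumed values
U = 150e3                          # applied voltage [V]; assumed value

# U = (I/2pi)*(rho1*ln(r2/r1) + rho2*ln(r3/r2))  =>  leakage current per metre
I = 2 * np.pi * U / (rho1 * np.log(r2 / r1) + rho2 * np.log(r3 / r2))

def E_field(r, rho):
    """Field strength E = rho*J inside the layer with resistivity rho."""
    return rho * I / (2 * np.pi * r)

def trap(y, x):
    """Plain trapezoidal rule (kept local to avoid numpy version differences)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2)

# sanity check: integrating E across both layers recovers the applied voltage
r_a = np.linspace(r1, r2, 1001)
r_b = np.linspace(r2, r3, 1001)
U_num = trap(E_field(r_a, rho1), r_a) + trap(E_field(r_b, rho2), r_b)
print(abs(U_num - U) / U < 1e-6)   # → True
```

A 2D FEM solution of the resistive problem on the same geometry should reproduce this radial field, which is what makes the example useful for verifying the numerical model.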

A 2D field model was developed for a resistive distribution of the electric field. An electric vector potential P is introduced by the relation [6]:

J = \nabla \times P.

Laplace's equation describes the field distribution (for anisotropic materials):

\frac{\partial}{\partial x}\left( \rho_x \frac{\partial P}{\partial x} \right) + \frac{\partial}{\partial y}\left( \rho_y \frac{\partial P}{\partial y} \right) = 0.   (1)

The mathematical model for the thermal field is the conduction equation:

\frac{\partial}{\partial x}\left( k_x \frac{\partial T}{\partial x} \right) + \frac{\partial}{\partial y}\left( k_y \frac{\partial T}{\partial y} \right) + q = \gamma c \frac{\partial T}{\partial t},   (2)

with T(x, y, t) the temperature at the point with coordinates (x, y) at time t; k_x, k_y the thermal conductivities; \gamma the specific mass; c the specific heat; and q the heat source. Obviously, there is a natural coupling between the electrical and thermal fields: the resistivity in equation (1) is a function of T, and the heat source q in (2) depends on J. Numerical models for the two field problems can be obtained by the finite element method, and an iterative procedure is used for the temperature distribution. In dielectric applications we consider a temperature and field dependence of the conductivity of the form

\sigma = \sigma_0 \exp(\alpha T) \exp(\gamma E),

where \sigma_0 stands for the conductivity at a reference temperature and field strength, \alpha denotes the temperature dependency coefficient and \gamma the field dependency coefficient.

III. NUMERICAL MODELLING

The differential model cannot be solved analytically. A numerical model can be obtained by Galerkin's procedure. In general, time-dependent problems lead, after spatial discretization, to a lumped-parameter model. For example, the heat equation, after spatial discretization, leads to a system of ordinary differential equations of the form

[S] \frac{\partial \{T\}}{\partial t} + [R] \{T\} + \{b\} = 0,   (3)

where [R] and [S] are matrices and \{b\} is the vector of free terms. The algorithm in pseudo-code has the following structure:
1. Choose the initial value of the temperature.
2. Repeat
   {Computations for the electric field}
   Compute the resistivity \rho
   Solve the numerical model for the electric potential P
   {Computations for the thermal field}
   Compute the heat source q
   Solve the numerical model for the temperature
3. Until the convergence_test is TRUE

The outer convergence test is reaching the final time of the physical process, but inside the repeat-until cycle we have an iterative process, because the electric conductivity depends strongly on the temperature T and the electric field E. We present the numerical model for the heat equation. A spatial discretization leads to the ordinary differential equation (3). The time discretization of the temperature can be obtained by a finite-difference formula:

\frac{\partial T}{\partial t} \approx \frac{T^{(k)} - T^{(k-1)}}{\delta t}.

With this approximation, the heat equation (2) becomes

\frac{\gamma c}{\delta t} T^{(k)} = \nabla \cdot \left( k \nabla T^{(k)} \right) + q + \frac{\gamma c}{\delta t} T^{(k-1)}.

A refinement of the numerical algorithm in pseudo-code can have the following form:
1. Set the iteration counter k to 0 and the initial time t_0.
2. k = k + 1.
3. Compute the resistivity value.
4. Solve the numerical model for the electric potential P.
5. Compute the heat source q in Eq. (2).
6. Update the numerical model for the temperature.
7. Solve the numerical model for the thermal field; the result is the temperature at the moment t_k.
8. Increase the time by the step \delta t in order to obtain the next step t_k.
9. If the time t_k is less than the imposed time limit, then jump to step 2, else stop.

IV. NUMERICAL RESULTS

Our example is a high-voltage direct-current (HVDC) cable with two insulation layers. The leakage current in the dielectric is caused by the finite resistivity of the dielectric insulation. The geometrical properties of the cable are: the internal radius of the first layer is 5 [mm]; the internal radius of the second layer is 6 [mm]; the external radius of the second layer is [mm]. The electrical properties are: the voltage of the cable is U = 5 [kV]; the resistivity of the first layer is 9 [Ω.
m];
- the resistivity of the second layer is . [Ω. m].
The physical thermal properties are:
- thermal conductivity of the first layer: .7 [W/K.m]; specific heat c = 8 [J/kg.K]; specific mass γ = 3 [kg/m³];
- thermal conductivity of the second layer: .7 [W/K.m]; specific heat c = 6 [J/kg.K]; specific mass γ = [kg/m³].

At the application of a high voltage, the field initially has a capacitive distribution [4]. This distribution lasts only a short time, so it is not of interest for the temperature distribution. Finally the field has a resistive distribution. Between these limits there is an intermediate field that can be computed by an iterative procedure.
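The refined stepping algorithm above can be sketched in code. The following is a minimal 1-D illustration of the coupled electro-thermal cycle: the conductivity law σ = σ₀ exp(αT) exp(γE) feeds the Joule source q = σE², and each thermal step is a backward-Euler solve of the discretized heat equation. All material constants and the frozen field E are illustrative assumptions, not the paper's data.

```python
import numpy as np

# Illustrative constants (assumed, not the paper's values):
# conductivity law sigma = sigma0 * exp(alpha*T) * exp(gamma*E)
SIGMA0, ALPHA, GAMMA = 1e-3, 0.05, 0.2
K = 0.7        # thermal conductivity [W/(K.m)]
RHO_C = 2.0e6  # volumetric heat capacity gamma*c [J/(m^3.K)]

def conductivity(T, E):
    """Temperature- and field-dependent conductivity (algorithm step 3)."""
    return SIGMA0 * np.exp(ALPHA * T) * np.exp(GAMMA * E)

def heat_step(T, E, dx, dt):
    """One backward-Euler step of gamma*c*dT/dt = d/dx(k dT/dx) + q with
    q = sigma*E^2 (algorithm steps 5-7); Dirichlet ends held fixed."""
    n = T.size
    q = conductivity(T, E) * E ** 2      # Joule heating of the leakage current
    r = K * dt / (RHO_C * dx ** 2)
    A = np.zeros((n, n))
    for i in range(1, n - 1):
        A[i, i - 1] = A[i, i + 1] = -r
        A[i, i] = 1.0 + 2.0 * r
    A[0, 0] = A[-1, -1] = 1.0            # boundary rows: keep current values
    b = T + dt * q / RHO_C
    b[0], b[-1] = T[0], T[-1]
    return np.linalg.solve(A, b)

# Outer time loop: the electric field is kept frozen between thermal steps,
# standing in for the electric-potential solve of steps 3-4.
T = np.full(21, 35.0)    # initial temperature [deg C]
E = np.full(21, 10.0)    # field strength, held constant in this sketch
for _ in range(50):
    T = heat_step(T, E, dx=1e-3, dt=1.0)
```

In the paper's full algorithm the field E would be recomputed from the electric potential P at every step; here it is held constant to keep the sketch short.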

144 Ion T. Cârstea and Daniela P. Cârstea

The analysis domain is the insulation space. The symmetry of the problem can reduce the analysis domain to a quarter (Fig. 1). The heat source is the thermal effect of the current in the dielectric insulation and of the load current of the cable. It is obvious that the ohmic losses in the cable conductor are the most important heat source.

A. Constant heat flux

In our first case we consider that there is a constant heat flux on the interface conductor-insulation. The source of this flux is the Joule-Lenz effect of the load in the cable. The mathematical model for the heat transfer is the conduction equation (2). The boundary conditions are a Neumann condition at the interface conductor-insulation, and a convective condition at the boundary insulation-environment.

The Neumann condition can be computed from the conductor losses in the case where the cable was loaded before the switching of the step voltage, that is, the current in the cable has been flowing long before and the temperature distribution in the cable is stable. In this case the value of the heat flux is computed with the relation []:

p = P_cond / (2πr)

with P_cond the ohmic losses per cable metre in the inner conductor as Joule-Lenz effect. Thus, the Neumann condition is:

∂T/∂n = p on C₁

with C₁ the interface of the cable conductor and insulation. At the interface insulation-environment we consider a convective condition of the form:

∂T/∂n = −h(T − T₀) on C₂

with h the convective coefficient, T₀ the ambient temperature and C₂ the boundary between the cable and the external medium.

In Fig. 2 the temperatures vs. time at the external surface of the conductor (green curve) and at the environment surface (red curve) are plotted. The time interval was 6 [s]. The convection coefficient h was chosen for an environment temperature of 308 [K] (35 °C). In Fig. 3 the final distribution of the temperature in the radial direction is plotted.

Fig. 2. Temperature versus time
Fig. 3. Final temperature in radial direction
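For the constant-flux case the steady-state radial profile can be written in closed form: the outer-surface energy balance fixes the temperature at the insulation-environment boundary, and conduction gives a logarithmic profile inwards. A sketch with assumed values (P_cond, the radii, k, h and the ambient temperature are illustrative, not the paper's data):

```python
import math

# Assumed illustrative data (not the paper's values)
P_cond = 30.0          # conductor losses per metre of cable [W/m]
r1, r2 = 0.025, 0.040  # inner/outer radii of the insulation [m]
k = 0.7                # thermal conductivity [W/(K.m)]
h = 8.0                # convection coefficient [W/(m^2.K)]
T_inf = 35.0           # ambient temperature [deg C]

# Energy balance at the outer surface: all of P_cond leaves by convection,
# h * (T_out - T_inf) * 2*pi*r2 = P_cond
T_out = T_inf + P_cond / (h * 2.0 * math.pi * r2)

def T_profile(r):
    """Steady radial conduction: T(r) = T_out + P_cond/(2*pi*k) * ln(r2/r)."""
    return T_out + P_cond / (2.0 * math.pi * k) * math.log(r2 / r)

T_in = T_profile(r1)   # temperature at the conductor-insulation interface
```

This reproduces the qualitative shape of the plotted radial distribution: hottest at the conductor interface, dropping logarithmically towards the convective surface.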
The width of the insulation is 5 [mm].

B. Constant temperature

Another practical assumption in electrical engineering is a Dirichlet boundary condition at the interface conductor-insulation. For our target example we considered a constant temperature of the conductor surface and a convective condition at the boundary insulation-environment. In the numerical simulation the conductor temperature was considered as 100 °C (373.16 K). In Fig. 4 the temperature versus time is plotted in two points of interest: the first curve (green) is the temperature at the external surface of the first layer; the second curve (red) represents the temperature at the external surface of the cable.

From the engineering viewpoint the assumption of a Neumann condition seems more realistic: the heat flux at the conductor surface can be estimated more accurately than the conductor temperature. An accurate model can be obtained by including the conductor in the analysis domain, but this approach increases the computational effort. This case was presented in reference [].

We presented an insulation with two layers. But in advanced technology the first layer at the conductor surface is a

semiconductor. In this way the variation of the electrical field at the interface conductor-insulation is smooth.

V. CONCLUSION

In this paper we presented an algorithm for coupled electric and thermal fields in the insulation of large power cables. A parallel-plane model was considered both for the electrical and the thermal field. The numerical models were obtained by the finite element method in a 2-D space.

As target example we considered a cable with a two-layer insulation. The resistivity of the insulation was considered as a finite value. In this case the ohmic losses of the leakage current in the insulation generate supplementary losses. The principal heat source remains the losses in the cable conductor.

As a first example we considered a constant heat flux on the interface conductor-insulation. In practice, the heat flux is dependent on the temperature of the cable conductor. By numerical simulation we can consider all practical cases in the operating regimes of the cable. In the second example we considered a constant temperature of the conductor.

Fig. 4. Temperature versus time for Dirichlet condition

In these two examples we considered a steady-state regime of the electric field. This is a practical situation, but there are cases where the voltage has step variations, so that a transient regime appears as a natural situation [5].

REFERENCES

[1] Cârstea, D. CAD tools for magneto-thermal and electric-thermal coupled fields. Research Report in a CNR-NATO Grant. University of Trento, Italy, 4.
[2] Cârstea, D., Cârstea, I. CAD in electrical engineering. The finite element method. Editor SITECH, Craiova, Romania.
[3] *** QuickField program, version 5.3. Year 4. Company: Tera Analysis.
[4] Cârstea, D., Cârstea, I. A finite element technique for HVDC insulation parameters computation. In: Annals of the University Arad, Romania. October.
[5] Cârstea, D., Cârstea, I.
Computation models for electrical field in HVDC cables. In: Papers of the Energy National Conference CNE M, 9- October, Chişinău, Moldova.
[6] Bastos, J.P.A., Sadowski, N., Carlson, R. A modelling approach of a coupled problem between electrical current and its thermal effects. In: IEEE Transactions on Magnetics, vol. 6, No., March.

146 Control of the Electrical Field in the Connectors for High-Voltage Cables
Ion T. Cârstea and Daniela P. Cârstea

Abstract - The paper presents the analysis and control of the electrical field in the mechanical connectors of high-voltage (HV) cables. In the cable terminals a field enhancement occurs because the core has a sharp edge and the shield is interrupted. A mechanical connector links the cables and controls the electric field using special materials to ensure a homogeneous potential distribution. We present the analysis of the electric field in connectors and the control of the electrical field using Raychem technology. A semiconductor shield and a control tube can optimise the field distribution in cable connectors and terminals.

Keywords - Numerical analysis; High-voltage cables; Finite element method.

The role of the control tube is to produce a uniform distribution of the field lines and of the electrical field in the terminal. The material of the tube has a rigorously controlled volume resistivity and permittivity. The tube has a non-linear resistivity with the behaviour of a varistor. It has direct contact with the semiconductor shield of each terminal of the two cables that are connected.

At the joints of two cables, mechanical connectors can be used []. In Fig. 2 an axial section of the connection is presented with the following components: 1 - conductor; 2 - phase insulation; 3 - control tube; 4 - muff insulation; 5 - semiconductor layer; 6 - the connector; 7 - a special material for filling (mastic); 8 - semiconductor layer (mantle).

I. INTRODUCTION

The problem of the analysis and control of the electrical fields in cable terminals is an open problem that involves multidisciplinary research. The problem of an insulated electrical conductor fitting into a grounded screen is a common configuration in many electromagnetic devices, so the results from our case can be extended to other similar areas. We consider a high-voltage cable (Fig.
1) where the components are []: 1 - conductor; 2 - phase insulation; 3 - a layer for field control; 4 - a semiconductor shield.

Fig. 1. Cable terminal
Fig. 2. An axial section in muff

Generally speaking there is no perfect dielectric insulation, so a leakage current exists. Ohmic losses cause dielectric heating. A parallel-plane model can be used to compute the electric and thermal fields.

The tube for the field control covers the semiconductor shields of each cable terminal of the muff. The mastic has a high permittivity and realises a uniform distribution of the electrical field. In this way the electrical stresses are reduced at the cable terminals. The muff insulation is in direct contact with the external semiconductor, and its thickness is selected to prevent partial discharges in the separation zone [].

Ion T. Cârstea is with the Faculty of Automatics, Computers and Electronics, Craiova, str. Doljului 4, bl. C8c, sc., apt. 7, judet Dolj, Romania.
Daniela P. Cârstea is with the Industrial Group of Romanian Railways, Craiova, str. Brâncuşi nr. 5, Craiova, Romania.

II. MATHEMATICAL MODEL

The electric field distribution can be obtained by approximation of the Maxwell equations. These

approximations take different forms in accordance with the material properties of the equipment. In modelling these physical systems we must consider both perfect dielectrics and imperfect (or polluted) dielectrics. The control layer of the field has a finite resistivity and controls the electrical stress in the terminals. The static field distribution can be modelled by the following equations []:

∇ × E = 0;   E = ρJ

with: ρ the material resistivity, E the electric field strength and J the current density. A 2-D field model was developed for a resistive distribution of the electric field. An electric vector potential P is introduced by the relation:

J = ∇ × P

Laplace's equation describes the field distribution (for anisotropic materials):

∂/∂x (ρ_x ∂P/∂x) + ∂/∂y (ρ_y ∂P/∂y) = 0   (1)

The mathematical model for the thermal field is the conduction equation:

∂/∂x (k_x ∂T/∂x) + ∂/∂y (k_y ∂T/∂y) + q = γc ∂T/∂t   (2)

with: T(x, y, t) the temperature in the point with coordinates (x, y) at the time t; k_x, k_y the thermal conductivities; γ the specific mass; c the specific heat; q the heating source.

It is obvious that there is a natural coupling between the electrical and thermal fields: the resistivity in equation (1) is a function of T, and the heating source q in (2) depends on J. Numerical models for the two field problems can be obtained by the finite element method. An iterative procedure was used for the temperature distribution. The imperfect insulation leads to local heating of the connectors, so a coupled model is a good approach to the electrical field computation. In our work we consider that the electrical properties are constant with the temperature.

III. CONTROL OF THE FIELD DISTRIBUTION

In engineering practice, the designer of an electromagnetic device starts from an imposed performance of the device and tries to reach that performance by a command that can be a distributed or a boundary command.
In the area of electrical engineering we can have a parametric optimisation. Practically, there are three possible parameters [3]:
- a physical property, such as an electrical property (for example the permittivity);
- the excitation of the system (voltage or electrical current);
- a geometrical parameter (configuration, dimensions in any direction, etc.).

In our target example there are many regions involving different materials: conductor, semiconductor and dielectrics (insulation). In a synthesis problem we seek the material property (the permittivity) that needs to be used in a certain part of the device. In other words, the optimisation parameter that we seek is the permittivity of those parts, so that the objective function goes to its extreme value. Gradient techniques can be used to reach the optimum parameter. Optimisation with respect to the geometry of the device is much more complex than with respect to material or excitation.

IV. NUMERICAL RESULTS

We considered the example from Fig. 2. Because of the symmetry, the analysis domain is limited to a half of the field domain. In Fig. 3 the meshed domain is plotted, with the axis Oz as symmetry axis (the horizontal line). The finite element method was used for the numerical simulation. The program QuickField [4] uses triangular elements.

Fig. 3. Meshed domain for a muff

Fig. 4.
Equilines of potentials

The geometrical properties of the device are:
- the radius of the conductor is 5 [mm];
- the external radius of the phase insulation is [mm];
- the external radius of the control tube is 6 [mm];
- the external radius of the muff insulation is 8 [mm];
- the external radius of the second semiconductor layer is 3 [mm];
- the width of the internal semiconductor layer is [mm];
- the external radius of the connector is [mm];
- the length of the connector is 5 [mm].

The physical electrical properties are:
- the voltage of the cable is U = [kV];
- the relative permittivity of the first insulation layer is 3.5;
- the relative permittivity of the muff insulation layer is 4;
- the relative permittivity of the mastic is 6;
- the relative permittivity of the control tube is .

In Fig. 4 the distribution of the field lines is plotted for the data mentioned above.
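The search for a permittivity that optimises the field distribution can be sketched with a toy model standing in for the finite element solver: for two dielectrics in series the stress in layer 1 is E1 = U/(d1 + d2·ε1/ε2), which is monotone in ε2, so a simple bisection finds the permittivity giving a desired stress. All names and numbers here are hypothetical, not the paper's data.

```python
# Toy stand-in for the field solver: two dielectric layers in series.
# U, d1, d2, eps1 and the target stress are assumed for illustration.
U = 110.0          # applied voltage [kV]
d1, d2 = 5.0, 3.0  # layer thicknesses [mm]
eps1 = 3.5         # permittivity of the fixed layer

def stress_layer1(eps2):
    """Field strength in layer 1 of two dielectrics in series [kV/mm]."""
    return U / (d1 + d2 * eps1 / eps2)

def best_eps2(target, lo=1.0, hi=20.0, iters=60):
    """Bisection on the monotone stress curve: find eps2 with stress == target."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if stress_layer1(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

eps_opt = best_eps2(target=15.0)   # permittivity meeting the desired stress
```

In the actual inverse problem the objective is the deviation of the computed field from a desired value and each evaluation is a finite element run; the one-dimensional search logic is the same.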

In our simulation tests we considered many values of the permittivity of the mastic. In Fig. 5 the variation of the electrical field at the external radius of the phase insulation is plotted.

Fig. 5. Strength E versus space (ε_r = 6)

If the value of the permittivity of the control tube is increased to 8, the distribution of the electrical strength is modified. In the zone of the joint, the electrical field strength is reduced (see Fig. 6).

Fig. 6. Strength E versus space (ε_r = 8)

It is obvious that we can find an optimum value of the material property (in our case the permittivity) so that an objective function reaches a minimum value. In our particular application the objective function is a measure of the deviation of the electrical field from a desired value. The solution of the inverse problem is obtained iteratively.

V. CONCLUSION

In this paper we presented some aspects of the analysis and control of the electric field in cable connectors. We limited the discussion to the material properties as optimisation parameter. The influence of the material properties on the field distribution in connectors is analysed. The numerical models were obtained by the finite element method in a 2-D space [4].

An optimisation with respect to geometry is an open problem that involves increased computational effort: at each iteration step the application software must rebuild the mesh of the finite element program. To simplify the optimisation process, the initial problem is divided into subproblems, so that the gradient technique that involves differentiation with respect to geometry is divided into differentiation subproblems.

REFERENCES

[1] *** Catalog /. Tyco (Electronics). Accesorii pentru cabluri de energie.
[2] Cârstea, D., Cârstea, I. CAD in electrical engineering. The finite element method. Editor SITECH, Craiova, Romania.
[3] Hoole, S.R.H. Finite elements, electromagnetics and design. Elsevier, Amsterdam, 1995.
[4] *** QuickField program, version 5. Year 4. Company: Tera Analysis.


150 Untransposed HV Transmission Line Influence on the Degree of Unbalance in Power Systems
Ljupčo D. Trpezanovski and Metodija B. Atanasovski

Abstract - In this paper, the influence of untransposed HV transmission lines on the degree of unbalance in power systems is presented. The model of an untransposed HV line with ground wires is given in the phase and sequence domains. The proposed model is used for the asymmetrical load-flow solution by the Newton-Raphson procedure incorporated in the Neplan 5. software. All 4 and kV unbalanced transmission lines in the power system of the Republic of Macedonia are taken with real parameters and the asymmetrical state is analyzed. The unbalance factors for negative- and zero-sequence voltages at the 4 and kV buses are calculated. Positive-sequence voltages from the asymmetrical state are compared with phase voltages from the symmetrical state of the same power system, when the transmission lines are treated as balanced.

Keywords - Untransposed HV transmission lines, unbalance factors, asymmetrical load-flow.

I. INTRODUCTION

The three-phase power system consists of several networks with different rated voltages, connected by two- or three-winding interconnecting transformers. The elements in the power system can be balanced (with equal phase parameters) or unbalanced (with different phase parameters). Practically, all generators, transformers, transposed lines and symmetrical loads can be treated as balanced elements. Untransposed lines and asymmetrical loads are treated as unbalanced elements. If there is even only one unbalanced element, an asymmetrical state occurs in the power system, and sequence voltages and currents are present in the power system buses and elements.

The presence of sequence components has a negative influence on the correct functioning of the elements. For example: negative-sequence currents at generator terminals raise the heating in their rotors; protective relays malfunction; zero-sequence currents greatly increase the effect of inductive coupling between parallel transmission lines; power system losses are higher; zero-sequence currents flow in the ground wires and through the ground, etc.

The degree of deviation from the symmetrical state can be evaluated with the unbalance factors for negative- and zero-sequence voltages or currents. When a system has adverse unbalance factors, transposition of the phase conductors at substations or along the lines should be applied.

It should be noted that in this research all loads are treated as balanced elements. The unbalance factors can be calculated from the sequence components (of voltages or currents). If these components are not available, they are obtained by transformation of the corresponding phase values. The values of the phase node voltages or element phase currents for three-phase power system states, which deviate more or less from symmetrical states, are obtained with asymmetrical load-flow (ALF) calculations. The solution of the ALF problem was successfully performed using methods in the phase domain (Newton-Raphson and fast decoupled procedures) [1] and faster methods in the sequence domain [2], [3].

Ljupčo D. Trpezanovski is with the Faculty of Technical Sciences, University St. Kliment Ohridski, I. L. Ribar bb, 7 Bitola, Macedonia.
Metodija B. Atanasovski is with the Faculty of Technical Sciences, University St. Kliment Ohridski, I. L. Ribar bb, 7 Bitola, Macedonia.

II. UNTRANSPOSED HV TRANSMISSION LINE MODEL IN PHASE AND SEQUENCE DOMAIN

If the HV transmission line has a considerable length and the phase conductors are not transposed, it can cause significant negative- and zero-sequence components. Usually, because of the great cost of the transposition towers and insulators, line transposition is avoided. Practically, the transposition is recommended if inequality (1) is satisfied:

U_n (kV) · L_V (km) ≥ 5 (kV km),   (1)

where U_n is the rated voltage in kV and L_V the total line length in km [4].
It is shown that inequality (1) is satisfied for and 4 kV lines, but should be checked for kV lines. For an exact unbalance factor calculation, a proper mathematical model of the three-phase HV transmission line should be defined. In steady-state problems, a three-phase transmission line is represented by a lumped-π circuit: the series resistance and inductance are lumped between the line ends, and the shunt capacitance of the transmission line is divided into two halves lumped at the line ends [1], [2] and [5]. Let us consider a three-phase unbalanced transmission line with one ground wire.

A. Series Impedance of a Transmission Line

The series impedances of the phase conductors and the ground wire with earth influence, which are mutually inductively coupled, are illustrated in Fig. 1. The following equation for the voltage difference between the line ends can be written for phase a:

V_a − V_a' = I_a (R_a + jωL_a) + I_b jωL_ab + I_c jωL_ac + I_g jωL_ag + I_n jωL_an + V_n.   (2)

The voltage and current of the fictive earth conductor, denoted n, are given by the following equations:

V_n = −I_n (R_n + jωL_n) − I_a jωL_na − I_b jωL_nb − I_c jωL_nc − I_g jωL_ng,   (3)

I_n = −(I_a + I_b + I_c + I_g).   (4)

Fig. 1. Series mutually inductive coupled line impedances.

Substituting Eqs. (3) and (4) in Eq. (2) gives:

ΔV_a = Z_aa-n I_a + Z_ab-n I_b + Z_ac-n I_c + Z_ag-n I_g   (5)

Writing similar equations for the other phases and the ground wire, the following matrix equation results:

[ΔV_a; ΔV_b; ΔV_c; ΔV_g] = [Z_aa-n, Z_ab-n, Z_ac-n, Z_ag-n; Z_ba-n, Z_bb-n, Z_bc-n, Z_bg-n; Z_ca-n, Z_cb-n, Z_cc-n, Z_cg-n; Z_ga-n, Z_gb-n, Z_gc-n, Z_gg-n] · [I_a; I_b; I_c; I_g].   (6)

The series impedance line model with only the three phase conductors is more convenient, and it can be established in a few steps. First, matrix Eq. (6) is presented in partitioned form:

[ΔV_abc; ΔV_g] = [Z_A, Z_B; Z_C, Z_D] · [I_abc; I_g].   (7)

Multiplying the partitioned matrices results in the equations:

ΔV_abc = Z_A I_abc + Z_B I_g,   (8)

ΔV_g = Z_C I_abc + Z_D I_g.   (9)

Assuming that the ground wire is at zero potential (ΔV_g = 0), from Eqs. (8) and (9) the final three-phase-conductor model of the transmission line is obtained in matrix form:

ΔV_abc = Z_abc I_abc.   (10)

The Z_abc impedance matrix includes the phase self-impedances and the mutual inductive couplings with the influence of earth and ground wire(s). All elements of this matrix can be calculated from the matrix equation:

Z_abc = Z_A − Z_B Z_D⁻¹ Z_C = [Z_aa, Z_ab, Z_ac; Z_ba, Z_bb, Z_bc; Z_ca, Z_cb, Z_cc].   (11)

Usually, instead of the impedance matrix, the series admittance matrix Y^z_abc = Z_abc⁻¹ is applied for the line model.

B. Shunt Capacitance of a Transmission Line

Shunt mutual capacitive couplings for the three phase conductors, the ground wire and the earth are illustrated in Fig. 2.

Fig. 2. Shunt mutually coupled line capacitances.

The potentials of the line phase conductors and the ground wire are related to the conductor charges by the matrix equation:

[V_a; V_b; V_c; V_g] = [p_aa, p_ab, p_ac, p_ag; p_ba, p_bb, p_bc, p_bg; p_ca, p_cb, p_cc, p_cg; p_ga, p_gb, p_gc, p_gg] · [Q_a; Q_b; Q_c; Q_g],   (12)

where p_aa, ..., p_gg are potential coefficients. In the same way as for the series impedances, a line model with only the three phase conductors can be established for the shunt capacitances. Taking into account the zero potential of the ground wire(s) and Eq. (12), the potentials of the line phase conductors, with the influences of earth and ground wire(s) included, are in matrix form:

V_abc = P_abc Q_abc.   (13)

The capacitance matrix can be easily calculated as:

C_abc = P_abc⁻¹ = [C_aa, C_ab, C_ac; C_ba, C_bb, C_bc; C_ca, C_cb, C_cc].   (14)

Usually, the shunt admittance matrices of Eq. (15), corresponding to the line ends, are applied instead of the capacitance matrix.

Y^s_abc = jωC_abc.   (15)

Finally, the series and shunt admittance lumped-π model of an untransposed transmission line (connected between buses k and j), represented with three-phase compound admittances, is shown in Fig. 3.

Fig. 3. Lumped-π model of an untransposed transmission line in phase domain.

Following the rules developed for the formation of the admittance matrix using the compound concept [1], the currents injected at bus k and bus j can be related to the nodal voltages by the equation (vectors of order 6×1, matrix 6×6):

[I^k_abc; I^j_abc] = [Y^z_abc + Y^s_abc, −Y^z_abc; −Y^z_abc, Y^z_abc + Y^s_abc] · [V^k_abc; V^j_abc]   (16)

The procedure explained above can also be used for the formation of the lumped-π model of an untransposed transmission line with more than one ground wire. The series and shunt admittances can be converted into the sequence domain using the transformation matrix T_s and the equations:

Y^z_dio = T_s⁻¹ Y^z_abc T_s,   (17)

Y^s_dio = T_s⁻¹ Y^s_abc T_s.   (18)

Now the lumped-π model of an untransposed transmission line in the sequence domain can be presented as in Fig. 4. Finally, the mathematical model in the sequence domain can be presented in matrix form with Eq. (19), similar to the phase-domain form:

[I^k_dio; I^j_dio] = [Y^z_dio + Y^s_dio, −Y^z_dio; −Y^z_dio, Y^z_dio + Y^s_dio] · [V^k_dio; V^j_dio]   (19)

Fig. 4. Lumped-π model of an untransposed transmission line in sequence domain.

Inductive and capacitive mutual couplings among the positive-, negative- and zero-sequence circuits are expressed by the non-zero off-diagonal elements of the matrices Y^z_dio and Y^s_dio. Instead of mutual admittances, the couplings can be expressed by compensation current sources. Thus, the unbalanced line model can be presented with three decoupled sequence circuits, with the mutual couplings replaced by corresponding controlled current sources.
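The phase-to-sequence conversion can be checked numerically with the Fortescue transformation matrix (assumed here in its standard form; the phase admittance matrix is made up to mimic an untransposed line):

```python
import numpy as np

a = np.exp(2j * np.pi / 3)
# Fortescue matrix: columns correspond to positive-, negative-, zero-sequence.
T_s = np.array([[1, 1, 1],
                [a ** 2, a, 1],
                [a, a ** 2, 1]])

# Made-up slightly unbalanced phase admittance matrix [S].
Y_abc = np.array([
    [4.0 - 12.0j, -1.0 + 3.0j, -0.8 + 2.5j],
    [-1.0 + 3.0j, 4.2 - 12.5j, -1.0 + 3.1j],
    [-0.8 + 2.5j, -1.0 + 3.1j, 4.1 - 12.2j],
])

Y_dio = np.linalg.inv(T_s) @ Y_abc @ T_s
# Non-zero off-diagonal elements express the couplings among the
# positive-, negative- and zero-sequence circuits.
coupling = np.abs(Y_dio - np.diag(np.diag(Y_dio))).max()
```

For a perfectly balanced (transposed) line matrix the transformation diagonalises Y_abc and the sequence circuits decouple; the residual off-diagonal terms are exactly the unbalance that the model represents with compensation current sources.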
A more detailed explanation of untransposed transmission line modeling in the phase and sequence domains is given in [1], [2], [5].

III. ASYMMETRICAL LOAD-FLOW SOLUTION

The phase voltages for all buses of the entire power system can be obtained by performing the ALF solution. Because the sequence voltages are of interest for the definition of the unbalance factors, it is appropriate to use ALF methods established in the sequence domain. The results presented in this paper are obtained by the Newton-Raphson method in the sequence domain [3], incorporated in the Neplan 5. software [6]. This method is based on a system of three matrix equations, each one related to the decoupled positive-, negative- and zero-sequence equivalent circuit of the power system (Eqs. (20), (21) and (22) respectively):

[H_d, N_d; M_d, L_d] · [Δθ_d; ΔV_d/V_d] = [ΔP_d; ΔQ_d],   (20)

Y_i V_i = I_i,   (21)

Y_o V_o = I_o.   (22)

Actually, the matrix Eq. (20) has the same form as the equations that represent the symmetrical Newton-Raphson load-flow model. The other two supplementary systems, given by Eqs. (21) and (22), are systems of linear equations.

IV. STUDY CASES - CALCULATION OF UNBALANCE FACTORS

The influence of untransposed HV transmission lines on the degree of unbalance was studied on the entire power system of the Republic of Macedonia. The performed analysis includes 5 buses of 4, and kV voltage level, 53 lines, 5 interconnective transformers and 9 equivalent generators with step-up transformers. All 4 and kV lines are untransposed, and the real phase arrangements shown in Fig. 5 are taken into account.

Fig. 5. Phase conductors and ground wire arrangement for untransposed a) 4 kV line and b) kV line (* produced by EMO-Ohrid).

The mentioned power system is shown in Fig. 6, with only the buses in which the unbalance factors are calculated; the rest of the Macedonian power system and the connections with the neighbouring power systems are presented as blocks.

Two cases with all balanced loads were studied. In the first case, all transmission lines are treated as balanced. The solution for the voltages in each node of the system buses shows that only positive-sequence voltages exist and they are equal to the phase voltages. For this case the voltage in phase (node) a of bus j is denoted as V_abal^j and it is equal to the positive-sequence voltage V_d^j. This notation is necessary for the definition of an unbalance factor for positive-sequence voltages, when the asymmetrical state of the system is compared with the symmetrical state of the same system. In this case the total active power loss is ΔP_bal = 3, MW.

In the second study case, all 4 and kV lines are taken with their real parameters. The presence of six 4 kV lines with a total length of 376,7 km, one kV line of 65, km and one kV line with a length of 4 km (built on 4 kV towers) causes an asymmetrical state and the appearance of sequence voltages and currents. Because the sequence components have unwanted effects on the power system elements, it is desirable to measure the degree of system unbalance. For this purpose the unbalance factors (usually in %) are introduced. The unbalance factors for positive-, negative- and zero-sequence voltages are given by Eq. (23) respectively:

F_d = (V_d / V_abal) · 100%;   F_i = (V_i / V_d) · 100%;   F_o = (V_o / V_d) · 100%.   (23)

If F_d = 100% and F_i = F_o = 0%, the power system is in a symmetrical state. Asymmetrical power system states, which deviate more or less from the symmetrical state, have greater or smaller unbalance factors F_i and F_o. Results from the study cases are shown in Table I.
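Given a set of phase voltages from an ALF solution, the sequence components and the factors F_i, F_o follow directly from the Fortescue transformation. The per-unit voltages below are made up to mimic a slightly asymmetrical state:

```python
import numpy as np

a = np.exp(2j * np.pi / 3)
# Fortescue matrix: V_abc = A @ [V_d, V_i, V_o]
A = np.array([[1, 1, 1],
              [a ** 2, a, 1],
              [a, a ** 2, 1]])

def sequence_components(v_abc):
    """Phase voltages -> (positive, negative, zero) sequence components."""
    return np.linalg.solve(A, v_abc)

# Made-up, slightly asymmetrical phase voltages [p.u.]
V_abc = np.array([1.000 + 0j, 0.990 * a ** 2, 1.005 * a])
V_d, V_i, V_o = sequence_components(V_abc)

F_i = 100.0 * abs(V_i) / abs(V_d)   # negative-sequence unbalance factor [%]
F_o = 100.0 * abs(V_o) / abs(V_d)   # zero-sequence unbalance factor [%]
```

For a perfectly balanced voltage set both factors vanish; small phase-magnitude deviations of a fraction of a percent produce correspondingly small F_i and F_o, as in Table I.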
Although the unbalance factors for negative- and zero-sequence voltages are small, the total active power loss in the second case is ΔP_unbal = 37,8 MW.

Fig. 6. Untransposed lines and their connections in the power system of the Republic of Macedonia - PSMK.

TABLE I
RESULTS FOR UNBALANCE FACTORS
(columns: bus | V_abal (kV) | V_L1 (kV) | V_L2 (kV) | V_L3 (kV) | F_d (%) | F_i (%) | F_o (%); rows: buses BT 4, DUB 4, SK4 4, SK 4, STIP, SK, VRU)

V. CONCLUSION

The presence of untransposed HV transmission lines causes asymmetrical states in the power system. These states have unwanted effects on the power system elements and there is a need for their quantification. For the evaluation of the degree of unbalance, the unbalance voltage factors are introduced. In this paper, the procedure for untransposed line modeling in the phase and sequence domains is presented. The decoupled-sequence line model is applied in the Newton-Raphson method for asymmetrical load-flow calculation incorporated in the Neplan 5. software. Two real state cases of the Macedonian power system are studied. The results of the studies show that in the case with untransposed lines, although the unbalance factors are small, the total active power loss grows by 6 MW compared with the case when the lines are treated as transposed.

REFERENCES

[1] J. Arrillaga, C.P. Arnold, B.J. Harker, Computer Modelling of Electrical Power Systems, John Wiley & Sons Ltd, 1983.
[2] X.-P. Zhang, "Fast Three Phase Load Flow Methods", IEEE Trans. on PS, Vol., No. 3, pp., August 1996.
[3] V. Strezoski, Lj. Trpezanovski, "Three phase asymmetrical load flow", International Journal of Electrical Power and Energy Systems, Vol., No. 7, pp. 5-5, October.
[4] N. Rajakovič, Analiza elektroenergetskih sistema I, Elektrotehnički fakultet, Akademska misao, Beograd.
[5] M.S. Chen, W.E. Dillon, "Power System Modeling", Proc. IEEE, Vol. 6, No. 7, pp. 9-95, July 1974.
[6] Neplan 5. software, BCP Switzerland (Educational version for the Faculty of Technical Sciences - Bitola).

Calculation of a GIS kV Insulating Bushing Applying the Hybrid BEM-FEM Method

Hamid Zildžo and Halid Matoruga

Abstract - This paper elaborates a procedure for optimising the geometry of a kV SF6 gas-insulated bushing with regard to the dielectric stresses. An iterative sequential Dirichlet-Neumann procedure is applied within the framework of the modern hybrid BEM-FEM method.

Keywords - Boundary element method, finite element method, method of successive under-relaxation, Galerkin procedure.

I. INTRODUCTION

The finite element method (FEM) is suitable for calculation domains containing a number of insulating media but with finite boundaries. The boundary element method (BEM) is suitable for domains with a single insulating medium but with finite or infinite boundaries. In this paper a hybrid, coupled BEM-FEM method is applied, which combines the best characteristics of both methods.

II. MATHEMATICAL MODEL

Let us elaborate separately the basic principles of the Galerkin weighted-residual procedure in the FEM, the direct BEM, and the iterative sequential Dirichlet-Neumann hybrid BEM-FEM method. The FEM solves Laplace's partial differential equation, while the BEM solves the integral equation of the electrostatic field. The potentials on the BEM-FEM boundary are calculated iteratively by applying the method of successive under-relaxation.

III. FINITE ELEMENT METHOD

In the finite element method, the domain of the observed physical system is divided, in a so-called discretisation of the continuum, into a finite number of parts of certain geometry called finite elements. Laplace's partial differential equation of the electrostatic field reads:

∂/∂x (ε ∂φ/∂x) + ∂/∂y (ε ∂φ/∂y) = 0   (1)

After applying the Galerkin method of weighted residuals, the solution for the distribution of the electric potential can be written as a system of linear algebraic equations:

[H]^FEM {φ}^FEM = {Q}^FEM   (2)

where:

[H]^FEM - two-dimensional matrix of coefficients, whose general term is

h_ij^FEM = Σ_{e=1}^{n_e} ∫_{S_Δ^e} ε ( (∂N_i^e/∂x)(∂N_j^e/∂x) + (∂N_i^e/∂y)(∂N_j^e/∂y) ) dS,  i, j = 1, 2, ..., n_f   (3)

{φ}^FEM - column vector of the unknown potentials in the nodes of one finite element, of dimension n_f × 1;

{Q}^FEM - column vector of free terms containing the Neumann boundary conditions, whose general term is

q_i^FEM = Σ_{e=1}^{n_e} ∫_{S_Δ^e} N_i^e Σ_{j=1}^{n_f} N_j^e (∂φ/∂n)_j dS   (4)

N_i^e - shape functions, which allow the unknown potential function to be approximated as

φ ≈ Σ_{j=1}^{n_f} N_j^e φ_j   (5)

(∂φ/∂n)_j - Neumann boundary condition.

Hamid Zildžo and Halid Matoruga are with the Faculty of Electrical Engineering, Zmaja od Bosne bb, Sarajevo, Bosnia and Herzegovina.

IV. DIRECT METHOD OF BOUNDARY ELEMENTS

The mathematical model of the direct boundary element method is based on Green's symmetric identity and on the continuity equations through which the boundary conditions on the interfaces between domains with different media are introduced. Let us observe two special cases of the 3-D electrostatic field calculation: the case when the observation point Q is located inside the calculation domain V, and the case when Q lies on the domain boundary. The general formula for the potential inside, on the boundary of, and outside the calculation domain is:

C(Q) φ(Q) + ∫_S T(P,Q) φ(P) dS_P = ∫_S G(P,Q) (∂φ(P)/∂n_P) dS_P   (6)
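The Galerkin FEM system (2) can be illustrated on a toy 1-D analogue of the Laplace equation with linear elements; this is a sketch only, not the paper's 2-D program, and the node count and boundary potentials are assumed for illustration:

```python
import numpy as np

# 1-D analogue of [H]{phi} = {Q}: -d/dx(eps dphi/dx) = 0 on [0, 1],
# phi(0) = 0, phi(1) = 100 (illustrative Dirichlet data).
n = 5                       # number of nodes (assumption)
h = 1.0 / (n - 1)           # element length
eps = 1.0                   # permittivity (constant for the sketch)

H = np.zeros((n, n))
for e in range(n - 1):      # assemble element "stiffness" contributions
    ke = eps / h * np.array([[1.0, -1.0], [-1.0, 1.0]])
    H[e:e + 2, e:e + 2] += ke

phi = np.zeros(n)
phi[-1] = 100.0             # Dirichlet boundary condition at the right end
free = np.arange(1, n - 1)  # interior (unknown) nodes
rhs = -H[np.ix_(free, [0, n - 1])] @ phi[[0, n - 1]]
phi[free] = np.linalg.solve(H[np.ix_(free, free)], rhs)
print(phi)  # linear ramp between the boundary values
```

For the homogeneous 1-D case the exact solution is the linear ramp 0, 25, 50, 75, 100, which the assembled system reproduces.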

where:
G(P,Q) - Green's function;
T(P,Q) - derivative of Green's function in the direction normal to the boundary surface;
φ and ∂φ/∂n - calculated potential and normal field component on the boundary surface;
C(Q) - a constant which depends on the position of the observation point:

C(Q) = 1 inside the domain V (Poisson formula); 1/2 on a smooth boundary in a 2-D or 3-D domain; γ_3D/(4π) on a corner boundary in a 3-D domain; γ_2D/(2π) on a corner boundary in a 2-D domain; 0 outside the domain V.   (7)

After applying the collocation procedure (point collocation in the method of weighted residuals) to Eq. (6), we obtain the solution in the form of a matrix system:

[H]^BEM {φ}^BEM = [G]^BEM {∂φ/∂n}^BEM   (8)

where:

[H]^BEM - two-dimensional coefficient matrix whose general term is

h_ij^BEM = Σ_{e=1}^{n_e} ∫_{S^e} N_j^e T_ij dS + δ_ij C_i,  i, j = 1, 2, ..., n_e   (9)

[G]^BEM - two-dimensional coefficient matrix whose general term is

g_ij^BEM = Σ_{e=1}^{n_e} ∫_{S^e} N_j^e G_ij dS,  i, j = 1, 2, ..., n_e   (10)

{φ}^BEM, {∂φ/∂n}^BEM - column vectors of the variables.

On a domain boundary with a single medium, in every node of the boundary elements either the value of φ or of ∂φ/∂n is known. In Eq. (8) the calculation therefore yields, on the domain boundary, whichever of the variables φ or ∂φ/∂n is not given as a boundary condition. On a boundary between two domains with different media, both φ and ∂φ/∂n are unknown. In that case the system of Eq. (6) is written for each domain boundary with the Dirichlet and Neumann boundary conditions taken into account, and on the boundaries between two domains with different media additional continuity equations are written for φ and ∂φ/∂n_P, valid on these boundaries.

V. HYBRID BEM-FEM METHOD

Let us observe an example of a calculation with a coupled BEM-FEM domain, Fig. 1.

Fig. 1. Example of a coupled BEM-FEM domain calculation

The FEM domain is discretised with 2-D finite elements, whose nodes are marked white, and the BEM domain with 1-D boundary elements, whose nodes are marked black. The BEM-FEM boundary is connected to 2-D finite elements from the FEM side and to 1-D boundary elements from the BEM side.

There are direct and iterative algorithms for coupling the boundary and finite element methods. In the direct approach, a linear system of algebraic equations is formed using expression (2) in the FEM domain and expression (8) in the BEM domain, and continuity equations are added on the BEM-FEM boundary. This procedure has a serious flaw: a large, fully populated system of equations must be solved. To save memory it is recommended to use one of the iterative procedures, the best known of which are:
- the Robin relaxation coupling algorithm,
- the Neumann-Neumann coupling algorithm,
- the advanced Dirichlet-Neumann algorithm,
- the advanced sequential Dirichlet-Neumann algorithm.

In the iterative BEM-FEM procedures, two separate matrix systems of linear equations are solved, one for the BEM domain and one for the FEM domain, and the distribution of the potential or of the flux on the BEM-FEM boundary is calculated iteratively by applying the successive under-relaxation method.

VI. ADVANCED SEQUENTIAL DIRICHLET-NEUMANN BEM-FEM ALGORITHM

In this paper the advanced sequential Dirichlet-Neumann BEM-FEM algorithm [4] is used, which consists of the following steps.

1. Divide the calculation domain into BEM and FEM domains.
2. Define starting values of the potentials on the BEM-FEM boundary.
3. Start the iterative cycle, which lasts until the convergence criterion is satisfied: DO n = 1, 2, ... until convergence.

Solving the field in the BEM domain: On the BEM domain boundary, other than the BEM-FEM boundary itself, Dirichlet or Neumann boundary conditions are imposed. Accordingly, in the matrix system (8) the matrices [H]^BEM and [G]^BEM contain BEM-BEM contributions from the pure BEM boundary and BEM-FEM contributions from the BEM-FEM boundary:

[H]^BEM { {φ}^BEM_{n+1} ; {φ}^{BEM-FEM}_n } = [G]^BEM { {∂φ/∂n}^BEM_{n+1} ; {∂φ/∂n}^{BEM-FEM}_{n+1} }   (11)

In system (11) the Dirichlet and Neumann boundary conditions must be taken into account, including the potential values on the BEM-FEM boundary from the previous iteration step. Solving system (11) gives the values of the normal field component on the BEM-FEM boundary, {∂φ/∂n}^{BEM-FEM}_{n+1}.

In this step the continuity equation on the BEM-FEM boundary is then solved:

ε^FEM (∂φ/∂n)^{FEM-BEM}_{n+1} = -ε^BEM (∂φ/∂n)^{BEM-FEM}_{n+1}   (12)

(the outward normals of the FEM and BEM domains on the common boundary are oppositely directed). As a result we obtain the Neumann boundary conditions valid in the direction normal to the FEM side of the BEM-FEM boundary.

Solving the field in the FEM domain: On the FEM domain boundary, except on the BEM-FEM boundary itself, Dirichlet or Neumann boundary conditions are given. Accordingly, the matrix system (2) contains FEM-FEM contributions from the pure FEM boundary and FEM-BEM contributions from the FEM-BEM boundary:

[H]^FEM { {φ}^FEM_{n+1} ; {φ}^{FEM-BEM}_{n+1} } = { {Q}^FEM_{n+1} ; {Q}^{FEM-BEM}_{n+1} }   (13)

where the elements of {Q}^{FEM-BEM}_{n+1} are calculated by applying expressions (4) and (12). As a result we obtain the potentials {φ}^{FEM-BEM}_{n+1} on the FEM-BEM boundary.

Correction of the calculated potentials on the BEM-FEM boundary: In this step the potentials calculated on the FEM-BEM boundary in the previous step are corrected by applying the method of successive under-relaxation:

{φ}^{BEM-FEM}_{n+1} = (1 - θ) {φ}^{BEM-FEM}_n + θ {φ}^{FEM-BEM}_{n+1}   (14)

The under-relaxation factor θ lies in the interval from 0 to 1.

4. Check the convergence of the iterative cycle and stop when suitable accuracy is achieved.

VII. EXAMPLE OF CALCULATION

Fig. 2 presents an example of the calculation of the electrostatic field in a kV SF6 GIS insulating bushing.

Fig. 2. Example of the calculation of a kV insulating bushing in an SF6 GIS station

The bushing is usually located on top of an SF6 gas-insulated station (GIS); through it, an overhead kV phase line is introduced into the SF6 GIS bus bar. It is very important to optimise the geometry of this bushing correctly with regard to the dielectric stresses. The copper bus bar passes through the porcelain insulator into the GIS bus bar. The interior of the porcelain insulator and of the GIS bus bar is filled with SF6 gas. The relative permittivity of the porcelain is ε_r = 5, and of the SF6 gas ε_r = 1. An araldite support insulator with ε_r = 4 supports the bus bar. The permittivity of the outside air is ε_r = 1. The phase line is at 100 % potential, and the enclosure of the GIS bus bar is grounded at 0 % potential. The various insulating media with finite boundaries (SF6 gas, araldite and porcelain) are discretised with 2-D finite elements. The surrounding air, whose boundaries extend to infinity, represents the BEM domain and is discretised with 1-D boundary elements.
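The heart of the algorithm is the under-relaxed fixed-point update of Eq. (14). A minimal sketch of that correction step, where the hypothetical function g() stands in for one complete BEM-solve plus FEM-solve round trip and the relaxation factor θ = 0,5 is an assumption:

```python
def g(phi):
    """Stand-in for one BEM + FEM round trip; a contraction with fixed point 20."""
    return 0.5 * phi + 10.0

def relaxed_iteration(theta=0.5, phi=0.0, tol=1e-10, max_it=200):
    """Successive under-relaxation: phi_{n+1} = (1-theta)*phi_n + theta*phi_hat."""
    for n in range(max_it):
        phi_hat = g(phi)                              # new boundary potential
        phi_new = (1 - theta) * phi + theta * phi_hat # Eq. (14)
        if abs(phi_new - phi) < tol:                  # convergence check
            return phi_new, n
        phi = phi_new
    return phi, max_it

phi_final, iterations = relaxed_iteration()
print(phi_final)  # converges to the fixed point 20
```

For a contractive map any θ in (0, 1] converges here; in the coupled BEM-FEM setting, under-relaxation (θ < 1) is what keeps the alternating Dirichlet-Neumann solves from diverging.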

VIII. CONCLUSION

In this paper a modern approach to optimising the geometry of a conductor bushing with regard to the dielectric stresses is shown. The infinite air region is taken into account by placing the BEM-FEM boundary on the outer surface of the bushing itself. A series of calculations was made and the final version of the bushing is shown. In the final version an auxiliary shield electrode is built in, which serves to spread the electric field lines optimally over the surface of the porcelain insulator, so that the maximum allowed values of the normal and tangential components of the electric field are satisfied in every insulating medium individually.

Fig. 3. Generated mesh of finite and boundary elements

Fig. 4 shows the calculated distribution of the dielectric stresses in the observed bushing. The results were obtained by applying a BEM-FEM computer programme.

Fig. 4. Calculated maximum tangential and normal components of the electric field of the bushing

REFERENCES
[1] D. S. Burnett, Finite Element Analysis: From Concepts to Applications, Addison-Wesley Publishing Company, Massachusetts, 1987.
[2] Z. Haznadar, Ž. Štih, Elektromagnetizam, Zagreb, 1997.
[3] O. C. Zienkiewicz, The Finite Element Method, McGraw-Hill, New York, 1977.
[4] W. M. Elleithy, H. J. Al-Gahtani, M. El-Gebeily, "Iterative Coupling of BE and FE Methods in Elastostatics", Engineering Analysis with Boundary Elements, Vol. 25, No. 8, August 2001.

Estimation of the Air Power Line Parameters Under the Influence of Lightning Overvoltages

Mariana G. Todorova and Margreta P. Vasileva

Abstract - Overvoltages in power systems cannot be avoided, and it is necessary to limit them. For this, the positive-, negative- and zero-sequence parameters of the power line (R, L and C) must be known; they are used when the power line model is constructed. The aim of this paper is the estimation of the line parameters in the case of a direct lightning stroke on the conductor of an air power line, using different numbers of shifted two-dimensional Haar wavelets.

Keywords - estimation, air power line parameters, shifted two-dimensional Haar wavelets.

I. INTRODUCTION

Overvoltages in power systems cannot be avoided, and it is necessary to limit them. There are three main groups of overvoltages: temporary, switching and lightning overvoltages. Lightning overvoltages originate in atmospheric discharges. A direct lightning stroke causes extremely high overvoltages and thus severe faults. The positive- and zero-sequence parameters of the power line (R, L and C) must be known; they are used when the power line model is constructed. Detailed models of the line elements are known, but there is no unified algorithm for the overall description of a power system model. The algorithm based on implicit integration is the most universal and the most easily applied. Program products have also been developed: visual programming packages for modelling dynamic systems, such as SIMULINK, SCILAB, etc. The investigation of the system characteristics takes place in a dialogue regime, the result being a model of the investigated system. The model parameters do not correspond directly to the catalogue data, so preliminary calculations are needed; this is a small drawback. In the present study, the parameters of a power line under the influence of lightning overvoltages are estimated via the Haar wavelet technique.

Mariana G. Todorova is with the Faculty of Computing and Automation, Technical University, Studentska str., Varna, Bulgaria. Margreta P. Vasileva is with the Faculty of Power Engineering, Technical University, Studentska str., Varna, Bulgaria.

II. MATHEMATICAL EXPRESSIONS FOR THE PARAMETERS OF THE EQUIVALENT SCHEME OF AN AIR POWER LINE IN SYMMETRICAL COORDINATES [1-4]

1. Calculation of the positive-sequence resistance R1:

R1 = (ρ/S) · k(ω) · ξ(θ) · η, Ω/km   (1)

where:
ρ - resistivity of the conductor at 20 °C, Ω·mm²/km;
S - effective cross-section of the line conductor, mm²;
k(ω) - coefficient taking into account the change of resistance due to the skin effect:
k(ω) = 1 + 0,675·r·√(ω·μ/ρ), if r·√(ω·μ/ρ) ≤ 4;
k(ω) = 0,7 + 0,4·r·√(ω·μ/ρ), if r·√(ω·μ/ρ) > 4;
μ - permeability of the conductor, H/m;
ω - radian frequency, rad/s;
r - radius of the conductor, mm;
ξ(θ) - coefficient taking into account the change of resistance with temperature:
ξ(θ) = 1 + α·(θ° - 20°);

α - temperature coefficient of the conductor, 1/°C;
θ° - working temperature, °C;
η - coefficient taking into account the difference between the real length of the air power line and the conductor length:

η = 1 + 8·z²/(3·l_m²);

z - sag of the conductor, m;
l_m - distance between two poles, m.

2. Calculation of the zero-sequence resistance R0:

R0 = R1 + 3·R_З   (2)

R_З - equivalent ground-return resistance:

R_З = ω·μ/8, Ω/km;  μ = 4π·10⁻⁴ H/km;  ω = 2π·f;

f - frequency, Hz.

3. Calculation of the positive-sequence inductance L1:

L1 = (μ/(2π)) · ( ln(D_ср/r) + μ_r/4 ), H/km   (3)

μ_r - relative permeability of the conductor;
D_ср = ∛(D_AB·D_BC·D_AC), m - geometric average distance between the line conductors.

4. Calculation of the zero-sequence inductance L0:

L0 = (μ/(2π)) · ( 3·ln( D_З/∛(r·D_ср²) ) + μ_r/4 ), H/km   (4)

D_З = 64·√(1/(f·γ_З)), m - equivalent depth of the ground return;
γ_З - specific conductivity of the soil, 1/(Ω·m).

5. Calculation of the positive-sequence capacitance C1:

C1 = 0,0241·10⁻⁶ / lg(D_ср/r), F/km   (5)

6. Calculation of the zero-sequence capacitance C0:

C0 = 0,083·10⁻⁶ / lg( S_ср³/(r·D_ср²) ), F/km   (6)

S_ср = 2·h_ср;  h_ср = ∛(h_A·h_B·h_C), m - average height of the conductors.

The zero-sequence inductance and the positive- and zero-sequence resistances depend on the frequency.

III. LIGHTNING CURRENT PARAMETERS

A high-frequency process appears when lightning strikes the air power line. The frequency depends on the shape and duration of the lightning current. Table 1 shows the distribution of the peak value of the lightning current [4].

Table 1. Distribution of the peak value of the lightning current: probability P_I (%) versus peak current I_м (kA).

The lightning current has an aperiodic shape. The front duration is a few microseconds, and the impulse duration is hundreds of microseconds. The different possible front durations give rise to processes of different frequencies in the air power line, and hence to different values of the parameters R1, R0 and L0.

The aim of this paper is the estimation of the line parameters in the case of a direct lightning stroke on the conductor of an air power line.
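The frequency-dependent terms above can be sketched numerically. The helper below implements the ground-return resistance R_З = ω·μ/8 and the positive-sequence inductance of Eq. (3); the geometry (D_ср = 4 m, r = 0,01 m) and the 50 Hz frequency are illustrative assumptions, not the paper's data:

```python
import math

MU = 4 * math.pi * 1e-4  # H/km: mu0 expressed per kilometre, as in the text

def earth_return_resistance(f):
    """R_Z = omega * mu / 8, Ohm/km (ground-return term of Eq. (2))."""
    return 2 * math.pi * f * MU / 8

def positive_seq_inductance(d_cp, r, mu_r=1.0):
    """L1 = mu/(2*pi) * (ln(D_cp/r) + mu_r/4), H/km; d_cp and r in the same units."""
    return MU / (2 * math.pi) * (math.log(d_cp / r) + mu_r / 4)

# Illustrative values: D_cp = 4 m, r = 0.01 m, f = 50 Hz
print(earth_return_resistance(50.0))       # ~0.049 Ohm/km
print(positive_seq_inductance(4.0, 0.01))  # ~1.25e-3 H/km
```

At 50 Hz the ground-return term is about 0,05 Ω/km; at lightning-front frequencies (hundreds of kHz) it grows proportionally with ω, which is exactly why R0 must be re-evaluated for lightning studies.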

Fig. 1. The model of the kV power network (three-phase measuring blocks and Subsystems 1-4)

A model of the kV power network was composed for the identification of the air power line parameters; Fig. 1 shows the model. It unifies the description of the following elements: the power system (S); the power transformer (T1); the air power lines (W1, W2) and the cable power line (W3); the power transformer /0,4 kV (T2); the surge protective devices (SP), metal oxide surge arresters; and the voltage measurement transformers (TV). Standard blocks from the Matlab Simulink library [5] are used for modelling the power lines, power transformers and surge arresters. The lightning current parameters are: amplitude 8 kA and shape /μs.

The processes in the line are described by the system of partial differential equations (PDE) (7):

-∂U(x,t)/∂x = R·I(x,t) + L·∂I(x,t)/∂t
-∂I(x,t)/∂x = C·∂U(x,t)/∂t   (7)

The research is carried out using the solver ode23t [5]. This method is an implementation of the trapezoidal rule using a "free" interpolant. The currents and voltages needed for the identification of the air power line parameters are measured at ten points, uniformly distributed along the length of the air power line.

IV. PARAMETER IDENTIFICATION

The orthogonal set of Haar functions is a group of square waves with magnitude ±2^{j/2} in some intervals and zero elsewhere [1]. Since the interval on which the Haar functions are defined is not suitable for solving parameter identification problems, a suitable transformation is required. The shifted Haar wavelets are defined [6] as

H*_i(t) = H(2^j·t - k),  i = 2^j + k,  j ≥ 0,  0 ≤ k < 2^j   (8)

where H*_0(t) is the scaling function, equal to 1 during the whole observed interval [0, T].

A function f(x,t) that is square integrable in the region t ∈ [0,T], x ∈ [0,X] can be approximately expanded in a series of two-dimensional shifted Haar wavelets [1, 6].
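The shifted Haar family of Eq. (8) can be generated directly. The sketch below uses the un-normalised (±1) variant common in the Haar operational-matrix literature; whether the paper's implementation includes the 2^{j/2} normalisation is not recoverable from the text, so that is an assumption:

```python
import numpy as np

def shifted_haar(i, t, T=1.0):
    """H*_i(t) on [0, T]: i = 2**j + k with 0 <= k < 2**j; H*_0 is the
    scaling function equal to 1 on the whole interval (sketch of Eq. (8))."""
    t = np.asarray(t, dtype=float) / T   # normalise to [0, 1)
    if i == 0:
        return np.ones_like(t)           # scaling function
    j = int(np.floor(np.log2(i)))        # dilation level
    k = i - 2**j                         # shift within the level
    x = 2**j * t - k                     # argument of the mother wavelet
    return np.where((x >= 0.0) & (x < 0.5), 1.0,
                    np.where((x >= 0.5) & (x < 1.0), -1.0, 0.0))

print(shifted_haar(1, [0.25, 0.75]))  # mother wavelet: +1 then -1
print(shifted_haar(3, [0.6, 0.9]))    # level j=1, shift k=1
```

Sampling each H*_i at m collocation points and stacking the rows yields the m × m Haar matrix used to turn the PDE (7) into algebraic equations.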

Table 2. Model parameters (R, L, C, R0, L0, C0), the obtained parameter estimates and the relative parameter errors E (%).

The Haar wavelet implementation reduces the problem of parameter identification to a computationally convenient form. The PDE system (Eqs. (7)) is transformed into a set of algebraic equations, and the algorithm for estimating the parameters can be derived in discrete form. The identification process includes the following fundamental steps: (i) expansion of the functions of the PDE into shifted two-dimensional Haar wavelets; (ii) rewriting of the PDE in matrix form using the Haar wavelet properties, after some well-known manipulations [1]; (iii) solving the obtained matrix equation for the vector of unknown parameters using the least-squares technique.

An m-file based on the proposed algorithm was created in Matlab. The estimates of the parameters for different numbers m of shifted two-dimensional Haar wavelets were calculated. The model parameters R, L, C, R0, L0, C0, the obtained estimates R̂, L̂, Ĉ, R̂0, L̂0, Ĉ0 and the relative parameter errors E are given in Table 2.

V. CONCLUSION

Estimation of the air power line parameters makes a more precise modelling of the processes in the power line possible. The line parameters in the case of a direct lightning stroke on the conductor of an air power line were estimated using different numbers of shifted two-dimensional Haar wavelets. A suitable m-file based on the proposed identification algorithm was created in Matlab and numerical results are given. The parameter estimates become very accurate when a larger number of Haar wavelets is applied. Compared with the classical methods, the Haar wavelet method is computationally simplest and fastest and has low computer memory requirements.

REFERENCES
[1] C. F. Chen, C. H. Hsiao, "Haar wavelet method for solving lumped and distributed parameter systems", IEE Proceedings - Control Theory and Applications, Vol. 144, No. 1, pp. 87-94, 1997.
[2] K. K. Gerasimov, Y. L. Kamenov, Modelirane v elektroenergiynite sistemi (Modelling in Power Systems), Avangard Prima, Sofia, 2007 (in Bulgarian).
[3] L. G. Genov, Tehnika na visokite naprezhenia v elektroenergiynite sistemi (High-Voltage Engineering in Power Systems), Tehnika, Sofia, 1979 (in Bulgarian).
[4] W. C. Hart, E. W. Malone, Lightning and Lightning Protection, Interference Control Technologies, Gainesville, pp. 3-9, 1988.
[5] MATPOWER, Power Systems Engineering Research Center, School of Electrical Engineering, Cornell University, Ithaca, /matpower/matpower.html, 1997.
[6] M. Todorova, Research of the possibilities and application of two-dimensional orthogonal functions for dynamic distributed-parameter systems identification, PhD Thesis, Varna, 2003 (in Bulgarian).
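Step (iii) of the identification, the least-squares solve, can be illustrated on the first equation of the PDE system (7), -∂U/∂x = R·I + L·∂I/∂t. The sketch below uses synthetic, noise-free samples (the true values R = 0,5 Ω/km and L = 2 mH/km are assumptions for the demonstration, not the paper's Table 2 entries):

```python
import numpy as np

rng = np.random.default_rng(0)
R_true, L_true = 0.5, 2.0e-3                  # assumed "true" line parameters

# Synthetic samples of the current and its time derivative at the measuring points
I = rng.standard_normal(100)
dIdt = rng.standard_normal(100)
minus_dUdx = R_true * I + L_true * dIdt       # exact left-hand side, no noise

# Least-squares solve of  [I, dI/dt] @ [R, L]^T = -dU/dx  for the parameter vector
A = np.column_stack([I, dIdt])
(R_hat, L_hat), *_ = np.linalg.lstsq(A, minus_dUdx, rcond=None)
print(R_hat, L_hat)  # recovers the assumed parameters
```

In the paper the regression matrix is built from Haar expansion coefficients rather than raw samples, but the final parameter-vector solve has exactly this least-squares form.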

Calculation Model and Analyses of the Grounding of the Fence of Medium Voltage Stations

Nikolce Acevski and Mile Spirovski

Abstract - The groundings of transformer stations (TS) in power networks whose neutral point is grounded through a small resistance may, in the case of a single-phase fault to ground, reach a high potential. In such a case, considerable potential differences can appear between points inside and around the transformer station, giving rise to high touch and step voltages. This paper presents the results of the analysis of a particular case, the MV/MV (medium voltage) TS 35/ kV 'Omorane' near Veles, carried out to find the optimum way of grounding the fence so that the safety criteria given in the rules and recommendations are satisfied. The analysis aims not only to answer the problem described, but also to offer some general considerations and recommendations for the general case.

Keywords - grounding, fault to ground, analysis, step and touch voltage, fence of TS, safety criteria

I. INTRODUCTION

The metal fence of a HV/MV or MV/MV TS has long been the object of different treatment in design practice with respect to the type of grounding and the avoidance of dangerous step and touch voltages. The general approach to this problem is one of the following:
- the fence is grounded by galvanic connection at several places to the general grounding grid of the TS, Fig. 1a;
- the fence is grounded through a special grounding electrode placed on the external side of the fence at a distance of 1 m, which can either be in galvanic connection with the grounding grid (so-called common grounding) or galvanically separated from it, Fig. 1b.

In the first case, the external electrodes of the grounding grid of the MV/MV TS usually follow the fence on the external side at a distance of 1 m. The occupied area is thereby increased and the resistance of the shared grounding is decreased, but the potential of the fence becomes equal to the potential of the grounding grid.

Fig. 1. a) without and b) with a special grounding electrode placed on the external side of the fence at a distance of 1 m

The safety criteria will be satisfied if the potential gradients on both sides of the fence are controlled by potential grading. That, however, is possible only in the case of a low specific soil resistivity ρ, a low current of the nearby ground fault, and so on, so in practice these conditions are frequently not fulfilled. In some rules, recommendations can be found in which this problem is examined, taking into account the way the neutral point of the network is grounded.

Nikolce Acevski is with the Faculty of Technical Sciences, Bitola, Macedonia. Mile Spirovski is with the Faculty of Technical Sciences, Bitola, Macedonia.

II. REVIEW OF THE RECOMMENDATIONS FOR THE GROUNDING OF MV TS

According to [1], in the design phase of a TS 35/ kV no separate grounding calculation is needed; the design only has to provide grounding in accordance with this recommendation. It is therefore necessary to connect the grounding grid of the TS 35/ kV with the annular grounding electrode of the fence, i.e. a common grounding is to be used. Operation of a network with an isolated neutral point may continue as long as the capacitive fault-to-ground current does not exceed … A in a 35 kV network, i.e. … A in a kV network. Grounding of the neutral points of the MV networks is obligatory when the fault-to-ground currents reach values twice as high as those stated above. The grounding systems of TS are dimensioned with respect to the touch voltage, which must not exceed the values given by [1]:

U_doz = … V for T ≤ 0,75 s;  … V for 0,75 s < T ≤ 5,3 s;  65 V for T > 5,3 s   (1)

where T is the duration of the fault to ground. According to the same recommendation, the touch-voltage safety criterion will be satisfied if the total grounding impedance has the value:

Z_u ≤ k_d·U_d / (r·I_k)   (2)

k_d - ratio of the voltage on the grounding grid of the TS to the touch voltage;
U_d - allowed voltage according to relation (1);

r - reduction factor of the MV overhead line feeding the TS;
I_k - total fault-to-ground current of the medium-voltage network.

According to the literature [1], for an isolated network with a fault to ground of a stable character these values are: k_d = …, U_d = 65 V, I_k = … A. If the TS is fed by an overhead line, the reduction factor is r = 1. Replacing these values in (2), the safety criterion will be satisfied only if the total grounding impedance is lower than 6,5 Ω.

According to the same recommendation, if the neutral point of the network is grounded through a low-impedance device limiting the fault-to-ground current to 300 A, the total resistance (impedance) of the grounding system of the TS 35/ kV should satisfy, in accordance with relation (2),

Z_u ≤ 0,7 Ω

if the TS 35/ kV is connected to a 35 kV overhead line, with k_d = 3 and r = 1. If this condition is not satisfied, the disconnection time of a fault to ground on the 35 kV busbars of the TS 35/ kV must be limited to at most 0,5 s, whereby the touch-voltage safety criterion is considered satisfied without proof by calculation or measurement; alternatively, the grounding voltage must be decreased (for example by adding vertical grounding electrodes, by adding one more annular grounding ring, etc.) so that the above condition is satisfied. According to the same recommendation, where the grounding is brought out, a contour (annular) electrode around the foundation grounding is to be installed and connected to it at several places, at a distance of 1 m from the wall of the building and at a depth of 0,8 m.

III. MODEL FOR EVALUATION OF THE WAY OF GROUNDING THE FENCE

In practice, the question of the mutual influence of the grounding grids of different nearby objects is often posed. The answer is useful in the analysis of the protective and operating grounding of a MV/LV TS, in the assessment of various metal installations or cables with a metal conductive layer located near the grounding grids in the basements of residential buildings, and in the evaluation of whether the annular grounding of the fence should be galvanically separated from, or linked to, the main grounding grid. In the case of a fence grounding galvanically separated from the grounding grid of the TS, the fence grounding acquires a certain potential during a fault in the TS as a result of lying in the potential funnel of the active grounding grid (of the TS 35/ kV).

The characteristics of both grounding grids can be calculated with the mathematical model below, based on the well-known Maxwell relations. For two grounding grids a and b with n_a and n_b rectilinear electrodes, according to [4], [7], we have:

[ [U_a] ; [U_b] ] = [ [r_aa] [r_ab] ; [r_ba] [r_bb] ] · [ [I_a] ; [I_b] ]   (3)

[U_a], [U_b] - vectors of the voltages on the electrodes of the grounding grids, with dimensions n_a and n_b;
[I_a], [I_b] - vectors of the leakage currents of the electrodes of the grounding grids, with dimensions n_a and n_b;
[r_aa], [r_bb] - square symmetric matrices with dimensions n_a × n_a and n_b × n_b; on the main diagonals are the self-resistances of the electrodes of the grounding grids, while the other members are the mutual resistances of the electrodes within the first (second) grounding grid;
[r_ab], [r_ba] - rectangular matrices with n_a × n_b and n_b × n_a members, representing the mutual resistances between the elements of grounding grid a and the elements of grounding grid b.

In the calculation of the self- and mutual resistances, the images of the electrodes with respect to the plane of discontinuity (the ground surface) are also taken into consideration - one image or an infinite number, depending on whether the soil is homogeneous or not - and the calculation is carried out by the method of medium potential [2], [3]. During the calculation, the voltage drops along electrodes of smaller length can be neglected, because they are small, and all elements of a grid can be taken to be at the same potential, i.e.:

[U_a] = [1_a]·U_a and [U_b] = [1_b]·U_b   (4)

where [1_a] and [1_b] are unit vectors with the same dimensions as the voltage vectors.

The parameters of two nearby, galvanically separated grounding grids and their mutual influence are analysed for the condition in which one of them, say grounding grid a (the foundation grounding grid), discharges the fault-to-ground current I_z, while grounding grid b (the annular grounding of the fence) carries no fault-to-ground current:

[1_a]^T·[I_a] = I_z   (5)

Substituting (4) into (3) gives:

[ [1_a]·U_a ; [1_b]·U_b ] = [ [r_aa] [r_ab] ; [r_ba] [r_bb] ] · [ [I_a] ; [I_b] ]   (6)

From relations (5) and (6) it follows:

[r_aa]·[I_a] + [r_ab]·[I_b] - [1_a]·U_a = [0],  [r_ba]·[I_a] + [r_bb]·[I_b] - [1_b]·U_b = [0]   (7)

[1_a]^T·[I_a] = I_z,  [1_b]^T·[I_b] = 0   (8)

Relations (7) and (8) can be combined into one matrix equation:

[ [r_aa] [r_ab] -[1_a] [0_b] ; [r_ba] [r_bb] [0_a] -[1_b] ; [1_a]^T [0_b]^T 0 0 ; [0_a]^T [1_b]^T 0 0 ] · [ [I_a] ; [I_b] ; U_a ; U_b ] = [ [0_a] ; [0_b] ; I_z ; 0 ]   (9)

whose solution is:

[ [I_a] ; [I_b] ; U_a ; U_b ] = [C] · [ [0_a] ; [0_b] ; I_z ; 0 ]   (10)

where [C] = {c_ij} is the inverse of the square matrix in relation (9), with dimensions (n_a + n_b + 2) × (n_a + n_b + 2), and [0_a], [0_b] are zero vectors with dimensions n_a and n_b. The system of relations (10) can be written in expanded form:

I_a(k) = c_kj·I_z,  k = 1, 2, ..., n_a;  I_b(k) = c_ij·I_z,  i = k + n_a,  k = 1, 2, ..., n_b;  U_a = c_jj·I_z;  U_b = c_{j+1,j}·I_z,  where j = n_a + n_b + 1.   (11)

For the self grounding resistance of the first grounding grid in the presence of the second, and for the mutual grounding resistance of the two grids, we obtain:

R_a = U_a/I_z = c_jj;  R_ab = U_b/I_z = c_{j+1,j}   (12)

Further, the potential of any point M of the ground surface can be calculated as the sum of the potentials produced by the leakage currents of both grounding grids:

φ_M = [r_aM]^T·[I_a] + [r_bM]^T·[I_b]   (13)

where [r_aM], [r_bM] are the vectors of the mutual resistances between all the electrodes of the two grounding grids (and their images) and the point M.

If the grounding grids are galvanically linked, the total fault-to-ground current is the sum of the leakage currents into the earth through the electrodes of both grids, and the potentials of the two groundings are equal, so equations (4), (5) and (6) are modified accordingly; two galvanically linked grounding grids can be solved as a single grounding. The potential that appears on one grounding as a result of a fault current in a nearby grounding can also be calculated according to the relations in the literature [9], with an error not bigger than some percent. In this way the matrix equations are avoided, and the potential of the passive grounding grid is calculated as the potential at its centre, or as the mean value of the potentials calculated at the middle points of the electrodes of the ring.

IV. EXAMPLE AND ANALYSES

On the basis of the model shown, which can be generalised as in [7], [8], the authors developed a computer program with whose help the problem of the grounding of the fence of the TS 35/ kV Omorane, near Veles, was analysed. In this example the networks on the 35 kV and kV sides operate with an isolated neutral point, while the 0,4 kV network is directly grounded. All kV feeders are overhead, and the supply is likewise by an overhead line. The transformer station is designed to work with an isolated neutral point, with the possibility of grounding the neutral point in the future.

Because no concrete predictions exist as to when the need to ground the neutral point may arise, the dimensioning and layout of the grounding were done for the real conditions in the 35 kV and kV networks, but the variant in which the network is grounded through a small resistance was also tested. The grounding of the TS was designed according to technical recommendation No. 7 [1]. The building housing the complete TS (switchboard, command room, etc.) has a foundation grounding grid made of FeZn strip 30×4 mm. The foundation grounding grid is connected at 3 places to the external grounding grid, made of copper rope Cu 50 mm², laid at a distance of 1 m from the external wall of the building and at a depth of 0,8 m, because a pavement of width 1 m is laid immediately next to the fence. Inside the building there is a potential-equalisation line of FeZn strip 25×3 mm, to which the metal construction of the 35 kV and kV cells and all metal parts are connected. The potential-equalisation line in the building is connected to the foundation grounding grid, to the grounding for potential grading, to the 0,4 kV neutral point of the auxiliary-supply transformer (with a PP00 x6 mm² cable), and to the lightning protection installation.

Fig. 2. View of the foundation grounding grid and of the grounding of the fence of the TS 35/ kV Omorane - Veles

The specific soil resistivity in and around the transformer station is ρ = … Ω·m.
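The bordered system (9)-(12) can be sketched numerically for two tiny grids. The resistance matrices below are small illustrative placeholders (a real grid has many electrodes and the entries come from the method of medium potential), so the resulting R_a and R_ab are not the paper's values:

```python
import numpy as np

# Illustrative self/mutual resistance matrices (Ohm): grid a has 2 electrodes,
# grid b (the fence ring) has 1.
r_aa = np.array([[10.0, 2.0], [2.0, 12.0]])
r_bb = np.array([[15.0]])
r_ab = np.array([[1.0], [0.8]])
I_z = 100.0                       # assumed fault current discharged by grid a
na, nb = 2, 1
N = na + nb + 2                   # unknowns: I_a, I_b, U_a, U_b

M = np.zeros((N, N)); rhs = np.zeros(N)
M[:na, :na] = r_aa;       M[:na, na:na + nb] = r_ab
M[na:na + nb, :na] = r_ab.T; M[na:na + nb, na:na + nb] = r_bb
M[:na, na + nb] = -1.0            # grid-a electrodes share the potential U_a
M[na:na + nb, na + nb + 1] = -1.0 # grid-b electrodes share the potential U_b
M[na + nb, :na] = 1.0             # sum of grid-a leakage currents equals I_z
M[na + nb + 1, na:na + nb] = 1.0  # passive grid b carries no net current
rhs[na + nb] = I_z

x = np.linalg.solve(M, rhs)
U_a, U_b = x[na + nb], x[na + nb + 1]
R_a, R_ab = U_a / I_z, U_b / I_z  # Eq. (12)
print(R_a, R_ab)
```

Note how the passive fence ring still acquires a nonzero potential U_b purely through the mutual resistances, which is exactly the "potential funnel" effect analysed in the paper.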

Analyzed by calculation are the grounding resistances of the groundings of the TS in two cases (galvanically linked and galvanically separated), i.e. of the grounding grid of the TS, R1, and of the grounding of the fence, R2, as well as the potentials at which the two groundings will be found at a ground fault on the kV side, U1 and U2. Also calculated are the touch voltages, which are largest at the corner of the fence, along the diagonal, on the internal and external side of the fence, Udv and Udn. The following 4 cases are examined:

1. the network is isolated, the groundings galvanically separated
2. the network is isolated, the groundings galvanically linked
3. the network is grounded through a small resistance, the groundings galvanically separated
4. the network is grounded through a small resistance, the groundings galvanically linked

The results of the calculations are shown in Table 1:

TABLE 1: CHARACTERISTICS OF THE GROUNDINGS

case          1       2       3       4
R (Ω)         5,37    ,3      5,37    ,3
U1 (V)        7,38    44,6    6,7     669,5
U2 (V)        3,      44,6    348,5   669,5
U2/U1 (%)     ,6      ,       ,6      ,
Udv (V)       5,      ,63     75,5    74,45
Udn (V)       6,36    8,4     95,4    76,
Udv/U1 (%)    ,59     6,7     ,59     6,7
Udn/U1 (%)    7,4     4,4     7,4     4,4

The table shows that in the case of an isolated network the condition () is satisfied in both cases. But if the network is grounded through a small resistance, the grounding resistance is higher than the limit of ,7 Ω. From Table 1 it can be seen that in cases 2 and 4 the grounding resistance is smaller, as is the potential of the main grounding grid. However, in those cases both groundings are brought to the same potential, which is the potential of the fence and is larger than in cases 1 and 3. As a result the step and touch voltages, on the internal as well as the external side, are larger than in the cases when the groundings are galvanically separated. This is important in case 4, when the groundings are linked and the network is grounded: the touch voltages obtained are then higher than the allowed limit. If the resistance of the human body is taken into account, in the best case a person can be exposed to a voltage 3,5 % lower than the values shown in the table. For the critical case, touching the external side of the fence, these values for cases 3 and
4 will be 9,9 V and 89,75 V, respectively. So under operation with isolated neutral point the safety conditions of the recommendation are satisfied in both cases, and it is almost indifferent whether the ring grounding of the fence and the grounding grid of the TS are galvanically linked or not. But at an eventual transition to operation with the neutral point grounded, the safety criteria are much easier to satisfy if the groundings are galvanically separated. In that case the duration of the fault is limited to ,5 s. The allowed touch voltage, inside and outside the installation, for this duration is, by the recommendation, relation (), 5 V. From this it can be concluded that in the case of operation with grounded neutral point the safety criteria are satisfied inside the fence, which is not the case outside it. To satisfy the conditions in that case it is necessary either to increase the specific resistance of the surface layer around the TS, or to add one more ring or vertical rods to the ring grounding of the fence, or to construct the feeders of the TS with cables whose outer layer acts as an excellent grounding electrode, for example IPO 3.

V. CONCLUSION

The analysis shows that designers working on these problems should not follow the recommendation blindly, but should carry out some calculations of their own. It is shown that technical recommendation no. 7 holds consistently if the network has an isolated neutral point; in that case a common grounding is of benefit. But at an eventual transition of the neutral-point operation to grounding through a small resistance (for which a comprehensive action is currently under way in our country), if the recommendation is followed literally, the safety criteria for the highest step and touch voltages may not be satisfied. In such a case the safety criteria can be satisfied much more easily if the grounding of the fence is galvanically separated from the grounding grid of the TS HV/MV or TS MV/MV.

REFERENCES

[1] J. M. Nahman, "Uzemljenje neutralne tacke distributivnih mreza", Naucna knjiga, Beograd, 98.
[2] J. M.
Nahman, "Programi EFD- i EFD- za proracun uzemljivackih sistema u dvoslojnom i homogenom tlu", XIII Savjetovanje, JUNAKO CIGRE, Bled, 1977.
[3] J. M. Nahman, "Numericki postupak za proracun medjusobnih otpornosti tankih pravoliniskih provodnika", ELEKTROTEHNIKA, ELTHB, 7, Zagreb, Maj-Juni 1984.
[4] D. Jelovac, "Matematicki modeli za analizu uslova uzemljenja TS /,4 kV", ELEKTROTEHNIKA, ELTHB, 9, Zagreb, Maj-Juni 1986.
[5] I. Zelic, I. Medic, "Analiza utjecaja uzemljivaca ograde postrojenja na raspodelu potencijala u okolnom tlu", ELEKTROTEHNIKA, ELTHB, 9, Zagreb.
[6] Galek, "Analiza dodirnog napona i nacina uzemljenja ograde elektroenergetskog postrojenja", ELEKTROTEHNIKA, ELTHB, 9, Zagreb.
[7] N. Acevski, R. Ackovski, "Determining of galvanically separated grounding grids and grounding systems", MELECON', th Mediterranean Electrotechnical Conference, May 9-3, Cyprus, IEEE Region 8, No. MEL36.
[8] N. Acevski, J. Sikoski, "Resavanje na galvanski odvoeni zazemjuvaci i zazemjuvacki sistemi", I Sovetuvanje na ESM, Bitola, 6-8 dekemvri 1999.
[9] N. Acevski, R. Ackovski, "Izvoz na potencijali vo metalnite instalacii i zazemjuvaci na stanbeni objekti", I Sovetuvanje na ESM, Bitola, 6-8 dekemvri 1999.
[10] TP-7 na EPS na Srbija (Izvodjenje uzemljenja distributivnih TS 35/ kV, 35/ kV, /,4 kV, /,4 kV i 35/,4 kV, III Izdanje, juni 1996).

SESSION EQ I: Education Quality I


Meaning Making Through e-Learning

B. Gradinarova and Yuri Gorvits

Abstract - Different approaches have been proposed to add more educational value to e-learning. One of these views proposes modern pedagogical models that better fit the unique features of the technology. A related approach is to embed modern learning and instructional-design theory into the new communication and interaction channels provided by information and communication technologies such as the Internet. This study presents a model for e-learning, illustrated with a specific case study of in-service teacher training in learning with digital media. Intact e-communities were developed through interaction and communication, using Internet services to share meaning, views, and understanding.

Keywords - e-learning, e-communities, learning models.

I. INTRODUCTION

Diverse approaches have been offered to add more educational value to distance learning programs. Many e-learning programs emphasize the "e" side, centering on the learning management system used and searching for new tools to improve distance learning [3]. Even though these views are necessary, we believe that the central focus is somehow missed. Very few studies propose pedagogical models for distance learning that fit the particular and unique features of the Internet. Most programs follow a chalk-and-talk way of teaching, without paying much attention to innovation through the design of new pedagogical models that fit the unique features of the new media [5],[6]. We can say that this vision is close to the "old wine in new bottles" view. Other studies identify learning management systems as key tools that define the learning methodology and strategies. The software framework forces a way of teaching that reduces the flexibility required by active learning methodologies [3],[].
Many of them end up with a model tailored to the technological framework used, instead of a software framework built upon the needs and features given by the assumed pedagogical model. As a result, we can end up with a set of e-learning principles that support any course implementation, such as: to promote an active role of learners in the construction of knowledge; to promote meaningful learning; to promote broad and deep learning; to develop skills, attitudes, and values; to allow real experiences through real-world activities; to promote collaborative learning; to promote a changing role of teachers/tutors as learning facilitators; to involve learners as co-evaluators; to make learners reflect on what they are doing; to use technology to enrich learning; to enhance action on knowledge objects; and to solve cognitive conflicts. These principles emerge from underlying theories and models of learning such as constructivism, understanding as thinking, understanding as a network, social interaction, social distribution, situated learning, generalized learning, and self-regulated learning [],[8].

Boyka Gradinarova is with the Computer Sciences and Technology dep. at TU-Varna, Studentska Str., 9 Varna, Bulgaria. Yury Gorvits is a Business Development Manager, Education & Research, at ORACLE, 5 Savvinskaya, Moscow, 9435, Russia.

This study introduces a model for e-learning that is built upon these principles, models, and theories. We describe the design, implementation, and evaluation of an e-learning program. Our pedagogical model is illustrated with a pilot implementation with teachers. We highlight the way teachers construct meaning by reflecting on teaching and learning.

II. DESIGN

We designed a whole e-learning training program for teachers. We wanted to preserve academic quality and to innovate in the way we deliver education, in both the technology and the model of learning.
To do this we followed these steps:

Technology evaluation: we selected a learning management system and evaluated the technical requirements.

Team organization: we created a multidisciplinary team of engineers, educators, and informatics and education specialists to implement the e-learning program.

Model of learning: once we knew the characteristics of the LMS and the content, we designed a pedagogical model for e-learning.

Pilot testing: we designed a pilot course on methodologies for using information technologies with a reduced number of teachers. We tested the functioning of the LMS and the pedagogical model. We also evaluated diverse materials, working interfaces, learning strategies, types of interaction, and the time spent in the different sections of the course.

Modeling: we designed and implemented the structure of the LMS considering the learning model and the structure of the course program. The content of eight courses was modeled; most of it was already in digital format, which facilitated the content-modeling process.

Online classes: the students were selected and registered. During the first week they inspected the platform by following an entrance module, and also started to communicate virtually and get to know each other.

Online modules: for learner interaction, in each course we designed modules with individual and team activities to

design products weekly. They used different interaction tools, such as chat and forums, to produce collective designs and constructions.

Face-to-face modules: we designed three of the eight course modules to be delivered face-to-face. They included content that requires more student-facilitator interaction. Each course was delivered in an intensive week, with a topic per day and collective work.

Evaluation: we finally evaluated the courses through questionnaires and opinion polls to get ideas, comments, and suggestions concerning the online and face-to-face classes. We also ran a focus group with the professors and tutors of the course program to analyze and discuss the attainment of goals and objectives.

A. A model for e-learning

Our model is based on constructivist principles of learning [8]. We view learning as the process of construction and modification of cognitive structures through experience and collaboration. Each module of the learning cycle is oriented towards contextualized, meaningful learning. Learners are required to reflect, apply, criticize, argue, and solve problems, thus allowing them to construct their own representations. We identify five major processes in e-learning: realizing, approaching, conceptualizing, structuring, and applying. Realizing implies identifying the educational challenge. This process consists of orienting learners in their studies by identifying the problem and forming their point of view. They also understand the objectives of the proposed course work and the starting points. They know what they will learn and why the activities are proposed. The learner has to form representations of the expected products and results, and of the rationale for doing this. Realizing involves the processes of motivating, problem identification, and pre-concept/concept contrasting.
Approaching consists of learners constructing new learning and a new point of view, guided by a group of professionals who design diverse methodological proposals to fit their cognitive styles. The idea is to produce a cognitive conflict that questions the learner's intuitive models and identifies the strengths of the proposed models. It involves the processes of reflecting, retention, adapting, exploring, and researching. Conceptualizing involves identifying the concepts and possible conceptual changes when exploring and approaching the content. It involves the processes of metacognition, representation, and adaptation. Structuring implies constructing meaning through didactic strategies such as synthesis, monitoring, and metacognition. It involves processes such as analysis, synthesis, retention, metacognition, and abstraction. Applying consists of giving students the opportunity to apply their conceptions to new and different scenarios. It involves evaluation, imaging, adaptation, abstraction, problem solving, contextualizing, and metacognition.

Fig. A model for e-learning

The virtual interaction triggers a synergic effect on the model by carrying out these five knowledge processes efficiently, thus allowing feedback, the confrontation of ideas, discovery, and collaboration. All these processes are critical in the construction of meaning.

B. Training teachers through e-learning

Our research took place in the Center for Teacher Development at the Moscow Institute for Open Education. There we implemented an e-learning experience in order to design and evaluate the proposed methodological learning model. We also wanted to evaluate the learning management system used and to identify the main components and strategies for implementing an e-learning course.

Fig. e-learning cycle

In order to do this we followed five phases: design, implementation, evaluation, feedback, and redesign.
The design of the e-learning cycle involved processes such as entering a content unit, analyzing documents, negotiating meaning, and applying what learners have learned through collaborative construction, ending with a group synthesis (see Figure ). We created a virtual interaction space for each content unit to integrate the construction of knowledge around

a topic. Because the quality of the interaction among learners is not a given, we implemented an initial strategy to break the initial barriers to communication and interaction: learners had to introduce themselves to the class by highlighting personal strengths in a playful way and listing their expectations of the course. Courses such as the one we are presenting should be concentrated in periods when the teachers' workload is not high. The time students dedicate to the course is key to keeping them in the class. The role played by facilitators is very important in fostering participation and knowledge construction. The learning management system used can determine the type of activities to be implemented, but not necessarily the learning model involved. The follow-up strategy is also relevant to the e-learning experience. Facilitators and administrators should have tools to visualize learners' actions within the virtual environment. We also think that the time dedicated to tutoring and coordinating can determine the quality of the learning experience in online courses. This means time for solving problems, for follow-up, and for creating a working climate that motivates students to participate actively in the learning process.

III. METHODOLOGICAL STRATEGIES

Each online course was divided into five working units over six weeks, with a final evaluation. Units were developed on a weekly basis and ended with an individual or collective product. The last week was dedicated to preparing for and taking the final evaluation. Each unit consisted of a unit description, objectives, general directions, activities, support materials, web links, and online discussions around each activity and working document. During each module students were involved in activities such as document synthesis, term glossaries, abstracts, graphic representations (schemes, concept maps), collective construction of documents, comparative charts, and case studies.
Each course consisted of a virtual class section, synchronous communication with the professor responsible for the course, and diverse discussion forums for implementing activities and documents. The course was in the charge of a professor, assisted by a coordinator and a teaching-assistant facilitator.

IV. MEANING MAKING THROUGH VIRTUAL INTERACTION

We based our observation of meaning construction on learners interacting within the Virtual Dialog Classroom. There, two processes can occur: presenting and comparing. Presenting involves posing an opinion, comment, information, or knowledge. Comparing includes contrasting beliefs and personally known knowledge with those of other learners by verifying agreements and disagreements. This implies three further processes: falsifying, complementing, and discovering. Falsifying implies assigning falsity or error to a comment or judgment as a result of disagreeing with a belief, comment, or knowledge claim. Complementing means that we agree with the comment and accept it as true, but believe it is incomplete. Discovering is new knowledge for the learner, or a new way of viewing known knowledge. These processes are grouped within the more general process of comparing, and can be externalized or just mentally processed without being made explicit. The idea of our study was to go further than just presenting information. We fostered discussions where personally known knowledge is put to the test through collective interaction, ending with collaborative social knowledge construction. In evaluating these processes as triggers for meaningful learning, we observed a direct relationship between previous knowledge and the quality of knowledge construction: the more knowledge a teacher has on a specific topic, the higher the probability of falsifying. This is very relevant when assigning a role to content and support materials for the Virtual Dialog Classroom.

V.
DISCUSSION

The main goal of this study was to develop a model for e-learning and test it with a group of teachers, using modern learning theories and principles that fit e-learning well. One of the premises of our design was the enormous potential of collaborative work and virtual interaction in e-learning, as mentioned in the literature. However, these learning strategies are not automatic, and even though they can facilitate learning they can also impede it. To ameliorate this there are strategies such as teachers sharing their interests in teams and maintaining informal communication during the course work. This favors confidence among students and group interaction around academic tasks. One of the key aspects facilitating collaboration was solving educational problems: teachers could discuss themes based on their everyday experience, connecting theory and practice and taking the teachers' knowledge into consideration. A balanced mixture of individual and collaborative strategies is also recommended. e-Learning programs should exploit the unique capabilities of the Internet as a communication medium by going beyond student-teacher communication and emphasizing group work among students. We believe that distance learning programs should exploit the unique features and added value of a powerful medium such as the Internet. Thus some constructivist theories and principles can be embedded into virtual environments to promote active learning and the construction of meaning. We have presented an e-learning model and described the design, implementation, and evaluation of a training program for school teachers. We analyzed the teachers' construction of knowledge by reflecting on teaching and learning. Through interacting and communicating we have developed electronic communities around pedagogical content. We believe that this

experience reflects a way of knowledge construction by teachers that is not exclusive to e-learning; rather, it can be used in a meaningful way during everyday pedagogical practice in school.

REFERENCES

[1] Collis, B. (1997). Pedagogical reengineering: A pedagogical approach to course enrichment and redesign with the WWW. Educational Technology Review, 8, -5.
[2] Dillenbourg, P. (). Virtual learning environments. Proceedings of EUN Conference, Learning in the New Millennium: Building New Education Strategies for Schools. Workshop on Virtual Learning Environments. Geneva.
[3] Garrison, D. & Anderson, T. (2003). E-Learning in the 21st Century: A Framework for Research and Practice. New York: Routledge/Falmer.
[4] Haddad, W. (3). Is instructional technology a must for learning? TechKnowLogia, 5-6, January-March.
[5] Haavind, S. (). Why don't face-to-face teaching strategies work in the virtual classroom? How to avoid the Question Mill.
[6] Harasim, L., S. R. Hiltz, L. Teles, and M. Turoff (1995). Learning networks. Cambridge, MA: MIT Press.
[7] Harasim, L. (1990) (Ed.). Online Education: Perspectives on a New Environment. New York: Praeger.
[8] Jonassen, D. (1998). The Computer as Mindtools. TechTrends, 43(), 4-3, March.
[9] Jonassen, D. (1995). Constructivism and computer-mediated communications in distance education. The American Journal of Distance Education, 9(), 7-6.
[10] Kozma, R., Zucker, A., Espinoza, C., McGhee, R., Yarnall, L., Zalles, D., and Lesis, A. (). The online course experience: Evaluation of Virtual High School's third year of implementation, 1999-. Final Report.
[11] Meyen, E. L., Aust, R. J., Gauch, J. M., Hinton, H. S., Isaacson, R. E., Smith, S. J., Tee, M. Y. (). e-Learning: A programmatic research construct for the future. Journal of Special Education Technology, 7(3).
[12] Salomon, G. (1988). Novel constructivist learning environments and novel technologies: Some issues to be concerned with.
Research Dialogue in Learning and Instruction, ().

Software Engineering e-Learning Mathematical Software

Bekim Fetaji, Shefik Osmani and Majlinda Fetaji

Abstract - The research proposes a new way of tackling the creation of interactive e-learning environments by undertaking a software engineering approach based on e-learning outcomes. The main research was focused on creating an e-learning mathematical software solution to be used in the Discrete Mathematics course to learn mathematical operations of different types, such as systems of linear equations, matrices, determinants, functions, etc. This gives student learners additional advantages in learning: they can solve equations and perform matrix operations in a shorter time and obtain a visual representation. It is also an opportunity for student learners to gain computational experience and to check their results against those of the software. In order to assess e-learning effectiveness we have proposed a methodology called ELUAT (E-learning Usability Attributes Testing), with the PET (predefined evaluation tasks) inspection technique as the measuring instrument. The software is modeled to support problem-based learning.

and adjacent matrices. Also, the software checks several matrix properties: whether the matrix is upper or lower triangular, involutory, orthogonal, symmetric, asymmetric, or diagonal. The software can solve determinants of any order with the same algorithm. Root of the function: users can also find the root of a nonlinear function by the methods taught in the Numerical Analysis course at SEEU: Bisection, Secant, Newton-Raphson, Regula Falsi, Steffensen, and Fixed Point. Functions: the most challenging part of the developed software system is evaluating and drawing the graph of a function given by the user in text format; for these hard-to-define algorithms we used a Delphi package for evaluating the functions.

I.
INTRODUCTION

The software solution evolved from the idea of providing a valuable tool for students and others who want to learn mathematical operations of different types, such as systems of linear equations, matrices, determinants, functions, etc. Students complained about the learning content of the Discrete Mathematics course, which involves solving linear systems, finding the roots of a function, drawing and evaluating graphs of functions, and other mathematical operations. Their opinion was that the learning content was not sufficient and that they spent too much time solving and calculating operations such as matrix and system operations of higher order; that is, they spent a lot of time on routine, simple calculations. We therefore initiated a research study to design and build a software solution answering these requirements. The software was envisioned to fulfill the following requirements:

System of linear equations: the software should be able to solve a system regardless of the number of unknown variables in it.

Matrices: matrices are an important part of these mathematical fields, and the solution of systems of linear equations is based on matrix solutions. The main operations the software calculates are addition, multiplication, subtraction, inverse of a matrix, transposed matrix, finding their determinants, LU factorization

Bekim Fetaji is with the Faculty of Communication Sciences and Technologies, Ilindenska bb, Tetovo, Macedonia. Shefik Osmani is with the IT Center, SEEU, Ilindenska bb, Tetovo, Macedonia. Majlinda Fetaji is with the Faculty of Communication Sciences and Technologies, Ilindenska bb, Tetovo, Macedonia.

Fig. The interactive e-learning mathematical interface.

II. RESEARCH METHODOLOGY

We considered the following learning modeling approaches: 1) content-oriented, 2) tool-oriented, and 3) task-oriented [5], and decided to use the task-oriented approach. The data were collected through usability testing, focus groups, and interviews with prospective users. The purpose of the research was: (1) to gather information and assess e-learning interactions between human actors and the developed medium of instruction, the software solution (intervention strategies and content); (2) to determine the distance between learner activities and preconceived scenarios. The observed route of a learner was used to give feedback information on effective learning. We decided to model the software solution for problem-based learning, in which students think, retrieve information for themselves, search for new ideas, and apply them using the software solution.
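As a concrete illustration of two of the requirements listed in the introduction (solving a linear system and the LU factorization), the computation can be sketched compactly. This is a hedged Python sketch, not the authors' Delphi implementation; the Doolittle scheme without pivoting is one plausible way such a factorization is computed.

```python
def lu_doolittle(A):
    """Doolittle LU factorization without pivoting: A = L*U,
    with L unit lower triangular and U upper triangular."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):          # row i of U
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        L[i][i] = 1.0
        for j in range(i + 1, n):      # column i of L
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U

def lu_solve(L, U, b):
    """Solve A*x = b given A = L*U."""
    n = len(b)
    y = [0.0] * n
    for i in range(n):                 # forward substitution: L*y = b
        y[i] = b[i] - sum(L[i][k] * y[k] for k in range(i))
    x = [0.0] * n
    for i in range(n - 1, -1, -1):     # back substitution: U*x = y
        x[i] = (y[i] - sum(U[i][k] * x[k] for k in range(i + 1, n))) / U[i][i]
    return x

A = [[4.0, 3.0], [6.0, 3.0]]
L, U = lu_doolittle(A)                 # L = [[1, 0], [1.5, 1]], U = [[4, 3], [0, -1.5]]
x = lu_solve(L, U, [10.0, 12.0])       # x = [1.0, 2.0]
```

Once L and U are available, the same factorization serves both the determinant (product of the diagonal of U) and repeated solves with different right-hand sides.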

We used the general principles and guidelines for HCI regarding software design from [9], and the general principles and guidelines for document design and online documentation from [3]. All these guidelines were closely consulted when designing the interactive e-learning mathematical tool. For the software solution to be successful, it should be developed in close consultation with, and with feedback from, its users; in the case of technology to support learning, that means consulting both teachers and learners.

The matrix form contains three tabs: Properties, Operations, and LU. The user can calculate the properties of a matrix with the following actions (note: the matrix must be square to perform these operations). Enter the number of rows and columns in the textboxes (the numbers must be positive), then: 1) on the grid, enter the matrix values; 2) click the Calc button; 3) as the result, users get the determinant of the matrix, its rank, and its properties in the form of checkboxes, where a check means that the matrix has that property.

Fig 4. Properties tab of the Matrix Operations

In the Operations tab users can calculate several matrix operations, as follows: 1) enter the rows and columns of the first matrix; 2) input the values on the grid below; 3) select the operation to perform from the dropdown list; 4) if addition, subtraction, or multiplication is selected, enter the number of rows and columns of the second matrix and then input its values; 5) click the Operate button. The user sees the result in the result grid, depending on the operation selected. The screenshot shows an example of the inverse of a matrix. In the following tab the LU factorization can be calculated as follows: 1) here there is one matrix as input and two matrices as output; 2) input the number of rows and columns of the matrix; 3) input the values on the grid; 4) click the Factorize button.
As a result, users obtain two matrices, Lower and Upper (Figure 6).

Fig 5. Operations tab of the Matrix Operations
Fig 6. LU tab of the Matrix Operations

Roots of Equation - Bisection: to find the root of an equation with the Bisection method, users follow these steps: 1) input the function in the f(x) textbox; 2) enter the Tolerance, Endpoint A, Endpoint B, and Steps; 3) click the Solve button. The result is displayed in the grid. To see the graph of the function, the user clicks the Graph button (Figure 7).

III. RESEARCH INSTRUMENT DEVELOPMENT

A major challenge for e-learning researchers is to assess e-learning effectiveness. In order to do that we have proposed a methodology called ELUAT (E-learning Usability Attributes Testing), which combines an inspection technique with user testing based on the 4 usability attributes we have set: 1) time to learn; 2) performance speed; 3) rate of errors; 4) subjective satisfaction. The methodology is necessary for assessing e-learning efficiently. Its theoretical basis comprises pedagogical conceptions defined in [6]: learning according to the constructivist perspective, usability of the e-learning environment, and research about user opinions. We based the measuring instrument on the use of predefined evaluation tasks (PET), which precisely describe the activities to be performed during inspection in the form of predefined tasks measuring the previously set usability attributes. We have named this the PET inspection technique; using it, we evaluated the usability attributes with evaluation tasks for a particular scenario. The evaluation tasks are determined by designing several user scenarios and choosing the scenarios that cover most of the options of the software.
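The Bisection solver described in the walkthrough above admits a compact sketch. This is an illustrative Python version following the same input conventions (a function, a tolerance, two endpoints, a step limit), not the tool's actual Delphi code.

```python
def bisect(f, a, b, tol=1e-6, max_steps=100):
    """Bisection: f must change sign on [a, b]; the bracket is halved
    until its half-width drops below tol or an exact root is hit."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_steps):
        m = (a + b) / 2.0
        fm = f(m)
        if fm == 0.0 or (b - a) / 2.0 < tol:
            return m
        if fa * fm < 0:
            b, fb = m, fm          # root lies in the left half
        else:
            a, fa = m, fm          # root lies in the right half
    return (a + b) / 2.0

root = bisect(lambda x: x * x - 2.0, 0.0, 2.0)   # close to 1.41421 (sqrt 2)
```

The Secant, Newton-Raphson, Regula Falsi, Steffensen, and Fixed Point methods the tool also offers differ mainly in how the next estimate is produced from the current one(s); the tolerance/step-limit stopping logic stays similar.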
This kind of approach has proven very effective, straightforward, and useful in determining the distance between learner activities and preconceived scenarios in several research projects we have conducted. Using the ELUAT methodology and the PET inspection technique we gathered information on interactions between human actors (intervention strategies and content). A scenario contains at least a collection of components and a method. The components are roles, activities, or activity structures; which role does what (which activity) and at which moment is determined by the method, which is made up of one or many plays formed by a series of acts. In an e-learning environment, information obtained from learner activity carries a certain pedagogical semantics. The observed route of

a learner was used to give feedback information on the level of learning and its effectiveness. We considered the content-oriented, the tool-oriented, and the task-oriented learning modeling approaches, and chose the task-oriented approach, for which we developed the methodology to suit our specifics.

Fig. PET inspection technique task-based form (per task: time for task completion, help search, and error recovery; the counts M, S, E, R, O, H, F; subjective satisfaction *; totals and time to learn)

When testing, psychology is far more important than the rational mechanics of good information architecture, though it is clearly desirable to understand both.

V. DATA COLLECTION AND RESULTS

According to the research of [8], 5 users are enough for usability testing; however, we used 10 users. After the usability test we collected data from the participants: 5 of them were experts, while the other 5 were novices. To handle these data we used the triangulation technique from [3], where we look at all the data at the same time to see how the different data support each other.
The acquisition phase is the time during which the required testing software is manufactured, data sets are defined and collected, and detailed test scripts are written. During the execution and evaluation phase the test scripts are executed and the results of that execution are evaluated to determine whether the product passed the test. The difficult areas that repeat themselves between multiple test participants reveal areas that should be studied and changed by the developers. User testing can often uncover very specific areas needing improvement, where focus groups and task analysis often find more general areas needing improvement. The major output of the planning phase is a set of detailed test plans. In a project that has functional requirements specified by use cases, a test plan should be written for each use case. There are a couple of advantages to this. Since many managers schedule development activity in terms of use cases, the functionality that becomes available for testing will be in use case increments. This facilitates determining which test plans should be utilized for a specific build pf the system. Second, this approach improves the traceability from the test cases back into the requirements model so that changes to the requirements can be matched by changes to the test cases. The specialist/analyst who sits in on the test will almost certainly be a behavioral psychologist, with cognitive psychology skills (the process of learning and understanding) and knowledge of HCI (Human Computer Interaction). They will also be a usability expert, but it's likely that their background will be in psychology rather than site Fig. 4. Triangulation technique[3]. We also tabulated the data for the performance measurements using the next usability attributes: time to learn, speed of performance, rate of errors, Subjective satisfaction, and Frustration for the both classes of users Experts and novices. 
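As a sketch, one row of the PET task-based form can be modeled as a simple record type. The field names below are our own mapping of the form's marks (M, S, E, R, O, H, F, *) and timings, not code from the actual PET tooling.

```python
from dataclasses import dataclass

@dataclass
class PetTaskRecord:
    """One row of the PET inspection form (illustrative field names)."""
    task_no: int
    time_task_completion: float     # seconds spent completing the task
    time_help_search: float         # seconds spent searching for help
    time_error_recovery: float      # seconds spent recovering from errors
    menu_errors: int = 0            # M - menu error
    selection_errors: int = 0       # S - selection error
    other_errors: int = 0           # E - other errors
    repeated_task: bool = False     # R - repeat task
    used_online_help: bool = False  # O - uses online help
    help_calls: int = 0             # H - help calls
    frustrations: int = 0           # F - frustrations
    satisfaction: int = 3           # * - 5 very high ... 1 very low

    def total_time(self) -> float:
        """Total observed time for this task, summing the three timings."""
        return (self.time_task_completion
                + self.time_help_search
                + self.time_error_recovery)

rec = PetTaskRecord(1, 34.7, 5.0, 2.3, menu_errors=1, satisfaction=4)
print(rec.total_time())  # approximately 42.0 seconds
```

A list of such records per participant gives exactly the per-task data that the tabulation step below aggregates.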
Please look at Appendix E for the tabulated data sheets and results. Here is the tabulated data sheet for time to learn and speed of performance, as well as the general usability requirement measures.

Usability attribute       Measuring instrument   Value to be measured        Current level    Worst acceptable   Planned target   Best possible
Time to learn             Task scenario          Time to complete task       average 34.7 s   s                  s                9 s
Speed of performance      Task scenario          Time to complete task       s                s                  s                8 s
Rate of errors            Task scenario          Number of errors
Subjective satisfaction   Task scenario          Satisfaction degree (*)

* Subjective satisfaction scale: 5 - very high, 4 - high, 3 - average, 2 - low, 1 - very low.

Table. Usability requirements for students

VI. CONCLUSION

The software solution as a new system is functioning practically and correctly as defined in its specifications. The experience introduced suggests the positive effects of using the interactive e-learning mathematical tool, our software solution. Randomly assigned treatment groups experienced and worked with the software solution.
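The "current level vs. worst acceptable / planned target / best possible" scheme in the usability-requirements table can be sketched as a simple classification. Apart from the reported 34.7 s average, all the numbers below are hypothetical placeholders, since several values in the table did not survive extraction.

```python
def classify_level(measured, worst_acceptable, planned_target, best_possible):
    """Place a measured task time (seconds) against the requirement levels;
    for times, lower is better."""
    if measured <= best_possible:
        return "best possible"
    if measured <= planned_target:
        return "meets planned target"
    if measured <= worst_acceptable:
        return "acceptable"
    return "below worst acceptable"

# Hypothetical per-participant times for one task scenario (seconds):
times = [30.2, 36.1, 34.7, 38.0, 34.5]
average = sum(times) / len(times)
print(round(average, 1))  # 34.7

# Hypothetical threshold values for illustration only:
print(classify_level(average, worst_acceptable=60.0,
                     planned_target=40.0, best_possible=9.0))
```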

Software Engineering e-learning Mathematical Software

Our conclusion regarding the first goal (1), to gather information and assess the e-learning level and the interactions between human actors and the developed medium of instruction (the software solution): our analyses have shown that the e-learning interaction measured with the PET technique is quite high, and the learning curve is quite steep as well. It is obvious that the student learners are faced with a lot of decisions, and they need previous knowledge in order to use the software. The steep learning curve of the system, however, is based on student interaction without any previous instructions. If students are taught and instructed in class how to use the system, then the learning curve might drop significantly, and therefore the benefit of using the system could be much higher.

Our conclusion regarding the second goal (2), to determine the distance between learner activities and preconceived scenarios: from our e-learning research analyses based on the ELUAT methodology and the PET technique, as well as on the focus group, we have seen, evidenced and concluded the following. The learners interpret their experiences according to their own perceptions, and in doing so they construct their own knowledge. Active construction demands a high level of independence and self-organization. The construction of knowledge by the learners, and the refinement of the ability to do so, do not happen passively and autonomously. Learning is situated: the social, motivational and emotional contextual factors of the learning situation decisively control the ways and means of the learning and retention process, as well as the use of the knowledge and abilities. Students achieve better results and learn more when they can reflect on what they learn. This is especially achieved using our developed software solution, where they can reflect on what they have learned previously, relate it to past experiences, and apply it practically using the software solution.
Generally the software is very much appreciated and well received. The analysis of numerical methods is a very important task in mathematics, because it presents the general study of methods for solving complicated problems using the basic operations of arithmetic (addition, subtraction, multiplication, and division). The contribution of this software solution is that it uses a software approach for solving numerical problems, which makes the job of the students taking the course Numerical Analysis easier, as it gives them an opportunity to solve equations and perform matrix operations in a very short time, and to learn the different mathematical operations in more depth and more thoroughly.

The speed, or let us say the number of steps, for finding solutions to equations with one variable depends strongly on the interval or the initial approximation points. If we choose a closer approximation, we will get faster and more accurate results. The reason why we can say that there is no best algorithm or best method for every case is that some of the methods are faster but less accurate, while others are slower but more accurate. The software is easy to use, and is directed not only at students of computer science and mathematics, but also at other users who want to perform the different mathematical tasks included in the software or just want to check results obtained by hand. The help file for using the software solution is another feature that simplifies the usage of the software, as well as giving general information on each method used in its implementation, for those that have no or only some knowledge of numerical analysis. Since there is always room for improvement, the study has to move with time, in the sense that it must be continuously improved in design and functionality in order to meet new demands imposed by new technology.

REFERENCES

[1] Bieber, M., and Vitali, F. (1997), Toward Support for Hypermedia on the World Wide Web, IEEE Computer.
[2] Campbell, C. (2004), E-ffective Writing for E-Learning Environments, NY: Idea Group Publishing.
[3] Dumas, J. S., & Redish, J. C. (1999), A Practical Guide to Usability Testing, revised edition, Pearson Education Limited.
[4] Helic, D., Krottmaier, H., Maurer, H., & Scerbakov, N. (2005): Enabling Project-Based Learning in WBT Systems, International Journal on E-Learning (IJEL), Vol. 4, Issue 4, 2005.
[5] Dustin, E. (2002), Effective Software Testing: 50 Specific Ways to Improve Your Testing, Addison-Wesley.
[6] Klauser, F., Schoop, E., Gersdorf, R., Jungmann, B., & Wirth, K. (2004): The Construction of Complex Internet-Based Learning Environments in the Field of Tension of Pedagogical and Technical Rationality, Research Report ImpulsEC, Osnabrück, 2004.
[7] Kuhn, D. (1999). A developmental model of critical thinking. Educational Researcher, 28(2), 16-26, 46.
[8] Nielsen, J. (2000). Designing Web Usability: The Practice of Simplicity. New Riders Publishing, Indianapolis.
[9] Pressman, R. (2005), Software Engineering: A Practitioner's Approach, 6th Ed., McGraw-Hill.
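The conclusion above notes that root-finding speed depends strongly on the starting interval. A minimal sketch of that observation, using the bisection method (one of the standard root-finding methods such a numerical-analysis tool would implement; the paper does not list its exact algorithms):

```python
def bisection(f, a, b, tol=1e-10, max_iter=200):
    """Bisection method: repeatedly halve [a, b] until the bracket is
    narrower than tol. Returns (root, iterations used).
    Requires f(a) and f(b) to have opposite signs."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    iterations = 0
    while (b - a) / 2 > tol and iterations < max_iter:
        m = (a + b) / 2
        fm = f(m)
        if fa * fm <= 0:   # root lies in the left half
            b, fb = m, fm
        else:              # root lies in the right half
            a, fa = m, fm
        iterations += 1
    return (a + b) / 2, iterations

f = lambda x: x * x - 2          # root at sqrt(2)
root_wide, n_wide = bisection(f, 0.0, 2.0)   # wide starting interval
root_tight, n_tight = bisection(f, 1.4, 1.5) # closer starting interval
print(n_wide, n_tight)  # the tighter bracket needs fewer halvings
```

This illustrates the trade-off stated in the conclusion: a closer initial approximation reaches the same accuracy in fewer steps.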

Combining Virtual Learning Environment and Integrated Development Environment to Enhance e-learning

Majlinda Fetaji 1, Suzana Loskovska 2 and Bekim Fetaji 3

Abstract - The research was undertaken with two hypotheses in mind. The first hypothesis is that the integration of a virtual learning environment (VLE), in the form of an e-learning interactive tool, and an integrated development environment (IDE) for programming in the Java language will contribute to improving the efficiency and quality of learning, because of the enhanced graphical user interface and the hands-on approach. The learners can implement and test what they have learned and further extend their learning at the same time. The second hypothesis is that the designed graphical user interface of the virtual learning environment will facilitate its use by improving the results of the learning process and increasing user satisfaction and attention during learning, which implies improving the overall efficiency of learning programming in Java. The objective of the research was to investigate the possibilities for improving the learning of Java programming by creating an e-learning interactive tool with an enhanced graphical user interface. An analysis of the traditional method of learning a programming language and of the virtual learning environment approach has been carried out. Issues have been identified, and solutions and recommendations proposed, while reviewing the current situation in these fields. The usability of the created virtual environment was reviewed, in order to assess and propose solutions to the identified issues. The outcome of the research is a Java interactive tool, a virtual environment for learning and practicing the Java programming language. It provides the integrated help that the learners need in order to learn the Java language without exposing them to the need to leave the application framework.
An editor is also provided, with options for compiling, for running a Java application or applet, and for capturing and validating syntax errors on the user side. In this way we have made the Java learning environment self-sufficient in achieving its objective.

1 Majlinda Fetaji is with the Faculty of Communication Sciences and Technologies, Ilindenska bb, Tetovo, Macedonia
2 Suzana Loskovska is with the Electrotechnical Faculty, Karpos, Skopje, Macedonia
3 Bekim Fetaji is with the Faculty of Communication Sciences and Technologies, Ilindenska bb, Tetovo, Macedonia

I. INTRODUCTION

During the last decades, due to the development of information and communication technology and the rising impact of the Internet, access to a huge amount of information has been enabled worldwide. This offers new opportunities to acquire knowledge any time, anywhere, regardless of the previous constraints of time and location. More and more information is presented daily in digital and multimodal form. In order to use all this information in the process of learning, electronic environments are created and used. The impact of these technologies is reflected in the increased utilization of e-learning systems and virtual e-learning environments for learning. However, there has lately been a certain skepticism regarding the efficiency of e-learning and virtual environments. This is the reason why we have analyzed e-learning systems and virtual e-learning environments. We are of the opinion that further research needs to be conducted to design a grounded theory that would focus on developing a good and efficient system for learning.

II. ANALYSIS OF THE TRADITIONAL METHODS OF TEACHING

Based on our experiences, and those of colleagues from other institutions, related to teaching object-oriented programming in a classroom, a conclusion is drawn that teaching and learning object-oriented programming is much easier if an electronic environment and new technologies are used.
Learning to program in an object-oriented language is difficult for novices in the traditional classroom setting. The instructor must transmit the new ideas of programming concepts, and writing, debugging and testing code is very difficult. The traditional method of learning is instructor-centered and depends on the methods the instructor uses. The method of teaching with lectures on the board, or even the visual format of lectures on a computer, is not sufficient. To master all the elements of the process of learning to program, each element must be practically tested. The traditional method is limited in time, place and the duration of the class. Students bring a wide range of experiential backgrounds, from novices to advanced programmers. Because of this diversity in the level of knowledge and capabilities of students, some might need to revise the lectures, which is impossible in the traditional way of learning. Using the hands-on approach, which means that the learned concepts can be tested and applied immediately, is also impossible in the traditional way of learning. In this method of learning, the students are more passive, while offering people the opportunity to be active in the learning process, through structuring the context in which problems are presented, encourages a more natural style of learning. To overcome these difficulties and to compensate for the demerits of the traditional method of learning to program, new methods are being developed today using electronic learning environments and development environments for practical application.

III. VIRTUAL LEARNING ENVIRONMENTS

Over the last years, education, learning and teaching have been influenced by the rapid development of technology.

That is, the learning process has been changed towards more interactive learning activities and authentic experiences, according to [1]. The new learning environments are technology-enhanced, computer-based environments called virtual learning environments. [2] defines virtual learning environments as computer-based environments that are relatively open systems, allowing interactions with other participants and access to a wide range of resources. Such environments foster the any time/any place learning model, which is not only a different way of delivering knowledge, but also a powerful means of creating knowledge. These new ways potentially have a wide range of advantages over traditional environments (e.g., convenience, flexibility, lower costs, currency of material, increased retention, and transcending geographical barriers), according to [3].

Learning to program is difficult. To help novices learn programming, we have focused our research on developing a virtual environment that facilitates learning to program, in the sense of offering an electronic environment that should meet all the users' needs and overcome the demerits of the traditional method of learning. Usually, while developing virtual environments, the pedagogical aspect is left behind without consideration. Therefore, to develop a quality e-learning virtual environment for learning Java, we have focused on the pedagogical concept of the e-learning solution. Research was carried out on how to design quality e-learning. Following [4], in designing our e-learning solution we have taken the approach that the design and use of e-learning must be grounded in a learning theory. In order to develop the use of e-learning from a pedagogical point of view, it is therefore not enough to study the existing practice.
Instead, it is necessary to have an understanding of the theoretical principles of the learning process and of the ideal learning environment [5]. The learning environment is important because it models the learning process of a particular course in a technological medium, so we have to model the learning process as ideally as possible. It is the interface that the learners interact with, and where the learning activities take place to achieve the learning goals. This means that the design of e-learning cannot be based only on the existing practice; it is necessary to understand the relation between theory and practice, to ensure that the design of practice is founded on learning theory. This concept is shown in the following figure:

Figure 1 - Theoretically grounded evaluation of technology [2]

We have followed this concept as the pedagogical background of our e-learning solution. It describes how the different learning activities carried out in the learning environment are supported by the e-learning technologies stated above. The learning principles are formed by the learning activities to be done to produce the learning outcome. The learning activities are crucial to define the features and abilities the learning environment has to support, and are supported by the technology. According to the concept of grounded design in [4], defined as "the systematic implementation of processes and procedures that are rooted in established theory and research in human learning" (p. ), the implementation of the learning activities is rooted in learning theory and human learning theory.

IV. PEDAGOGICAL CONCEPT ADOPTED

We think that the cognitive and intellectual abilities of learners are crucial in the process of learning to program in an object-oriented programming language. From several years of teaching experience, we concluded that learning to program in an object-oriented programming language is a complex process where the learning approach alone is not sufficient.
To support cognitive learning in the process of learning to program, we think that a constructive approach to creating knowledge should be enabled. We think the combination of the two approaches would give a better result in the process of learning. The model of the developed learning environment is founded on the learning activities, which depend on the cognitive and intellectual abilities of learners and their abilities to individually construct knowledge. The pedagogical concept in designing the e-learning virtual environment to learn Java is based on the following. Our e-learning solution for learning the Java programming language is grounded in cognitivist and constructivist learning theory, where the learning environment consists of structured learning content, integrated as online help content, and an editing-development environment which enables creating and finding solutions to problems, in the sense that the students test different programming concepts on a given example (in the help content) or on one created by the user. The independent student work supports their individual cognitive abilities to perceive the learning content and process it into knowledge, as well as the individual and subjective construction of knowledge. The students' work is based on their independent exploration of the learning content, from which they learn, and even more on constructing their knowledge by testing what they have learned and creating new solutions to the given examples or problems, or to new problems.

V. ADVANTAGES AND DISADVANTAGES IN USAGE COMPARED AGAINST THE TRADITIONAL METHOD OF LEARNING

The virtual learning environment for learning to program in Java will have a simple GUI, will be easy to understand and use, and will be distributed as a free download; more importantly, the system doesn't need to be installed, meaning we minimize the requirements to a simple run of an application. To support the novice programmers, our project provides a set of specially designed tools.
It includes an editor for editing programs and file manipulation, visual tools for compiling and executing the program, and help content, all presented within a single user-interface

framework. This allows students to move from one activity to another with minimal effort. All this provides maximum support to the novice programmers, since program construction can be conducted entirely through menu interaction. It offers just the essential functions needed to write Java code. This allows the users to concentrate on the language structure and the principles of coding. The virtual environment offers a development environment that enables the hands-on approach, which helps students to improve the quality of learning in the sense of immediately testing what they have learned. The learning content is integrated as help content, including links to external resources, multimedia and audio content, and the Microsoft avatar as an assistant in learning. This allows students to learn and practically test the programming concepts, and to self-pace the process of learning, whenever and wherever they want.

To teach programming in Java in the traditional method, we used the traditional format of lectures on the board or PowerPoint presentations, which were transmitted by the instructor, were instructor-centered, and were not flexible in terms of time and place. Writing, debugging and testing code was very difficult, while creating an executable was impossible with the ex-cathedra concept. Therefore, a simple text editor was used to write the code. To compile and execute the Java code, the Sun Java compiler, javac, was invoked by writing a strictly formatted command, which led to an increased rate of errors.
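The error-capturing feature described above can be sketched as a small parser of javac-style diagnostics. The diagnostic format assumed below ("File.java:line: error: message") matches typical javac output, but both the pattern and the function are our own illustration, not the tool's actual implementation.

```python
import re

# Assumed javac diagnostic format, e.g. "Hello.java:3: error: ';' expected"
ERROR_LINE = re.compile(r"^(?P<file>[^:]+):(?P<line>\d+): error: (?P<message>.*)$")

def parse_javac_errors(compiler_output):
    """Extract (file, line, message) triples from javac-style stderr text,
    ignoring the caret/context lines and the trailing error count."""
    errors = []
    for line in compiler_output.splitlines():
        match = ERROR_LINE.match(line)
        if match:
            errors.append((match.group("file"),
                           int(match.group("line")),
                           match.group("message")))
    return errors

sample = """Hello.java:3: error: ';' expected
        System.out.println("Hi")
                                ^
1 error"""
print(parse_javac_errors(sample))  # one (file, line, message) triple
```

An editor such as the one described could display these triples next to the offending source lines instead of raw compiler text.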
The following table gives the result of comparing learning to program in Java with the traditional method and with the developed learning environment, and at the same time the advantages of using the developed virtual environment over the traditional method of learning Java:

Variable                          Traditional method              Virtual Learning Environment Java
Conducting learning               instructor-paced                self-paced
Flexibility of learning           no time and place flexibility   time and place flexibility
Revision of the content           impossible                      multiple times
Time duration of the class        limited                         unlimited
Writing, testing and debugging    very difficult                  easy
Compiling and executing code      big rate of errors              low rate of errors
                                  (task scenario)                 (task scenario)
Learning activity                 passive                         active

Table. Traditional method versus learning with VLE Java

Disadvantages: there are still some disadvantages of using the developed virtual environment compared to the traditional method of learning Java:
1. The acquisition of some skills and concepts of programming depends on direct face-to-face contact with the instructor.
2. The classroom makes it possible to get instant feedback from the learners, which is very important in the process of learning.
3. Students who cannot learn without help are disadvantaged. Face-to-face training with an instructor leads to greater interaction during learning, where the learner may acquire knowledge from the instructor, and that leads to greater success.

VI. USABILITY TESTING

We conducted usability testing based on performance measurement to quantify usability requirements such as time to complete a task, time to learn, rate of errors, and subjective satisfaction, defined by task scenarios, using both the traditional-method environment and the developed integrated virtual environment Java. We also made an evaluation by direct observation of users while they were performing different tasks in the traditional-method environment and in the developed integrated virtual environment; users from two different classes were observed.

a. What do we evaluate? In terms of usability: functionality - can the user perform the requested tasks? time - are the tasks performed in a reasonable time? satisfaction - is the user satisfied? mistakes - does the user make a lot of mistakes? comparison - in particular with a text-based interface tool.

The research conducted was based on qualitative research, where we study the relationships between the study variables; afterwards we used exploratory research to investigate the factors influencing the graphical user environment, and then constructive research to construct the software solution. We can view the results of the usability testing in the two environments in the following tables:

Table. Usability research for Class-1 in the traditional method and the developed virtual environment

Usability attribute       Measuring instrument   Value to be measured           Traditional method environment   Integrated virtual environment
Time to learn             Task scenario          Time to complete task
Speed of performance      Task scenario          Time to complete task
Rate of errors            Task scenario          Number of errors
Subjective satisfaction   Task scenario          Satisfaction degree of users

* Subjective satisfaction scale: 5 - very high, 4 - high, 3 - average, 2 - low, 1 - very low.
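The side-by-side comparison behind the usability tables above can be sketched as follows. The attribute names follow the paper, but all numeric values below are invented placeholders, since the measured values belong to the paper's own tables.

```python
def compare_environments(traditional, vle):
    """For each usability attribute, report which environment scored better.
    For times and error counts lower is better; for satisfaction higher is better."""
    lower_is_better = {"time_to_learn", "speed_of_performance", "rate_of_errors"}
    verdict = {}
    for attribute in traditional:
        a, b = traditional[attribute], vle[attribute]
        if attribute in lower_is_better:
            verdict[attribute] = "VLE" if b < a else ("traditional" if a < b else "tie")
        else:  # higher is better (e.g. subjective satisfaction)
            verdict[attribute] = "VLE" if b > a else ("traditional" if a > b else "tie")
    return verdict

# Hypothetical example values (seconds, error counts, 1-5 satisfaction):
traditional = {"time_to_learn": 120.0, "speed_of_performance": 95.0,
               "rate_of_errors": 7, "subjective_satisfaction": 3}
vle = {"time_to_learn": 60.0, "speed_of_performance": 70.0,
       "rate_of_errors": 2, "subjective_satisfaction": 4}
print(compare_environments(traditional, vle))
```

Note the direction of "better" must be fixed per attribute before comparing, which is exactly why the tables separate timing, error and satisfaction rows.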

Table. Usability research for Class-2 in the traditional method and the developed virtual environment

Usability attribute       Measuring instrument   Value to be measured           Traditional method environment   Integrated virtual environment
Time to learn             Task scenario          Time to complete task
Speed of performance      Task scenario          Time to complete task
Rate of errors            Task scenario          Number of errors
Subjective satisfaction   Task scenario          Satisfaction degree of users

* Subjective satisfaction scale: 5 - very high, 4 - high, 3 - average, 2 - low, 1 - very low.

VII. CONCLUSION

We have tested the viability of the variables chosen for the study of the developed Java Editor e-learning system. The study also produced valuable information for the design of the subsequent studies. The conclusions may be summarized as follows. The variables provide both qualitative and quantitative, and both objective and subjective, data. The experiences introduced suggest the positive effects of using the Java Editor in classroom teaching and learning. In these classes, randomly assigned treatment groups experienced the Java Editor assisted learning in different ways, and the data were collected through the class experiences and questionnaires. Those questionnaires have shown positive opinions and a high degree of acceptance of the user-friendly concept of the developed virtual environment. By using this kind of user-centered approach in building our graphical user interface, and involving the users at each stage of the development and evaluation of the interface, we have concluded that it resulted in a very user-friendly graphical user interface. It is more usable, oriented towards the users, and, judging by the satisfaction rate encountered during the usability testing, will certainly be used by them in the future.
According to the research results we acquired from the empirical study, and compared to previous years of the same Java classes, when the command-based interface was used for compiling and Notepad for writing the code, the newly developed graphical interface has several advantages. The developed graphical user interface system is easier to use and has a better performance rate than the textual command-line interface that was previously used in Java classes for compiling the Java code. Also, having everything they need in one place, the students do not need to leave the application framework at all, especially given the multimedia and virtual assistant help. The option to capture syntax errors was also welcomed by users of both types, expert and novice. From the perspective of learning a programming language in general, using a graphical user interface system is less expensive and less time-consuming; greater accuracy in the process of writing the code has been achieved; and compiling and running the code is a much easier and more linear process than with a textual command-line interface. Users are more involved when using the visual graphical interface, and more confident, than previous users of the command-based textual interface. Our recommendation is to use the kind of structured approach described here to develop similar graphical user interfaces, using the user-centered approach that includes the users at all development stages of the graphical interface.

REFERENCES

[1] Malone, P., Schryer, C. & Rossner-Merrill, V. (). Combining Instructional Models and Enabling Technologies to Embed Best Practices in Course Instructional Design. In P. Kommers & G. Richards (Eds.), Proceedings of World Conference on Educational Multimedia, Hypermedia and Telecommunications. Chesapeake, VA: AACE.
[2] Harmon, J. & Marquez-Zenkov, K. (2003).
Perpetual Pedagogy: A Critical Deficiency in Modeling Educational Technology to Pre- and In-Service Teachers. In C. Crawford, D. Willis, R. Carlsen, I. Gibson, K. McFerrin, J. Price & R. Weber (Eds.), Proceedings of Society for Information Technology and Teacher Education International Conference 2003. Chesapeake, VA: AACE.
[3] Ahmad, R., & Ives, B. (1998). "Effectiveness of Virtual Learning Environments in Basic Skills Business Education: A Field Study in Progress." Proceedings of the Nineteenth Annual International Conference on Information Systems (ICIS '98), Helsinki, Finland, December 1998.
[4] Hannafin, M. J., Hannafin, K. M., Land, S. M. & Oliver, K. (1997): Grounded Practice and the Design of Constructivist Learning Environments, Educational Technology Research and Development, 45(3), 1997.
[5] Hannafin, M., Land, S. & Oliver, K. (1999): Open Learning Environments: Foundations, Methods, and Models. In: Reigeluth, C. M. (Ed.), Instructional-Design Theories and Models: A New Paradigm of Instructional Theory, Volume II, 1999. Lawrence Erlbaum.

Software Engineering e-learning Information Retrieval Courseware

Bekim Fetaji and Majlinda Fetaji

Abstract - Research studies, practical project activities and real-world implementation experiences were focused on designing and building an information retrieval courseware system. The objective of the research was oriented towards creating a courseware system based on assessed and evaluated e-learning outcomes and on concepts previously known to the users. It targets the computing knowledge level of the users, provides a higher level of information for the course content, and supports different file formats. The main focus was set on evaluating the e-learning outcomes and, based on them, designing the information retrieval courseware in compliance with theories of learning and a didactical-pedagogical approach. The research proposes a new way of tackling the process of creating e-learning information retrieval courseware, by undertaking a software engineering approach based on e-learning outcomes and taking learning-theory pedagogy into consideration in its development. A business objective, expressed through the cost-effectiveness of the entire system, was also set as a priority. We have achieved high cost-effectiveness through minimizing maintenance and the need for training, keeping the learning curve flat. Based on our research survey and user feedback, the result is a courseware system that is cost-effective and very usable. We recommend this courseware to departments where the staff computer literacy level is low and there are no financial means for a commercial learning management system.

Keywords - e-learning, information retrieval courseware, education

I. INTRODUCTION

In most contemporary universities, the implemented Course Management Systems (CMS) lack the responsiveness of a well-designed model regarding the specific needs of the institution.
Usually these models are complicated and require a lot of additional effort from both staff and students, for learning and for usage. On the other hand, they are expensive to maintain and support, especially when compared with the level of their usage. A straightforward implementation of an existing commercial Course Management System (CMS) tool in an environment of low e-literacy of teachers and low e-culture of students, slow growth of Internet penetration, and inadequate and insufficient IT equipment of the higher educational institutions (e.g., the countries of the West Balkans and other regions in the world with similar economic environments) can bring only simulations, and not substance, for improving the real performance of the learning process.

Bekim Fetaji is with the Faculty of Communication Sciences and Technologies, Ilindenska bb, Tetovo, Macedonia
Majlinda Fetaji is with the Faculty of Communication Sciences and Technologies, Ilindenska bb, Tetovo, Macedonia

The proposed model in this project is based on achieving several parameters: compliance with the learning theories and a didactical-pedagogical approach, e-learning outcomes, simplicity in usage resulting in a low learning curve, and minimized maintenance and support in order to achieve high cost efficiency. We also considered it crucial and very important to assess and focus on the computer literacy level, but also on the instructors'/teachers' views and ideas with regard to their fields (i.e., their practical knowledge in the delivery of a course). Our solution targets the current computer literacy level of the academic staff and is adapted to their views and needs as prospective users of the system. Our main research objective was to create an information retrieval courseware system, which we named the Intranet Gateway system, that will provide higher accessibility on campus and off campus.
The aim was to raise the level of accessibility by providing additional e-learning features and options to the end users. The research study objective was to find a new and more effective approach to designing and building a course management system that emulates the classical teacher-classroom-content laboratory approach and raises it to the overall level of communication and accessibility to information and e-content.

II. RESEARCH METHOD

Our research methodology was fundamental research on e-learning outcomes and variables; the findings of this fundamental approach were then used in applied research based on exploratory and constructive research on several hypermedia development methodologies. A hypermedia application represents a collection of unstructured and multimedia nodes connected through links in an associative way. Since hypermedia systems are highly interactive, the design method has to be user-centered. We have, however, used a hybrid combination of different interaction mechanisms through ActiveX controls: the hyper-document is accessed at run time from the user side, while from an administrative perspective it is set as a file on a remote server. At all stages of design and development, prototyping and evaluation were the basic activities of the development process. The data for this research was gathered from research interviews with e-learning specialists and participants, focus groups and interviews, as well as web-based surveys and printed hard-copy surveys of academic staff and students. Key variables and themes that have been studied are: assessing and measuring e-learning outcomes, students' needs

analyses, feasibility analyses of the environment of usage incorporating multicultural and multilingual specifics, application specifics and requirements in correlation with the environment and situation of the University (and then broadly generalized for all contemporary universities), and accessibility and learning specifics based on usability testing and evaluation of the environment. Considering the software development methodology, we used the spiral software development life cycle, prototyping, usability expressed in a matrix and usability testing, and a cross-sectional survey using questionnaires, in order to get feedback from the users of the system in accordance with the guidelines from [6].

III. REQUIREMENTS ANALYSIS AND THE CONCEPTUAL DESIGN

The mission of the application is established by identifying prospective users and defining the nature of the information base. Identifying and assessing the e-learning outcomes is also set as a primary goal. In addition to the customary requirement collection and feasibility assessment tasks, web applications designed for universal access require special care in the identification of human-computer interaction requirements, in order to establish the interaction mode most suitable for each expected category of users and for each type of output device that users are expected to use to connect to the application, based on recommendations from [3]. Speaking to prospective users, and through research interviews and focus groups with e-learning specialists and participants, we assessed and defined our users' computing literacy level. Our results showed that almost all of them had knowledge of basic operating system commands (opening, creating, changing, and deleting folders), knew the common operations with files, and knew MS Word and Notepad.
They were all aware of other file formats such as the Adobe portable document (pdf) and Windows help file (chm) formats. This assessed knowledge level was set as our target, and our software was built to target this level of knowledge, to use only these operations, and to embed them in our software solution for the courseware. As far as conceptualization is concerned, the application is represented through a set of abstract models that convey the main components of the envisioned solution. In the Web context, the focus of conceptualization is on capturing objects and relationships as they will appear to users, rather than as they will be represented within the software system. In our approach to building the Intranet Gateway, we created a Public Folder on a remote server where the entire content of the courses and subject materials can be placed. Conceptually, we organized the folders to contain all the files inside them, with an additional folder set up for the needs of the web page. The lecturer of a particular subject and his assistants have administrative privileges and are allowed access to the subject's folder on the remote server, where they can add, change, delete, write or read information, and manage the content. The students are not allowed access to these folders; they access the content from a web interface. To provide a formal user-interface design, the frame concept has been employed.

IV. SYSTEM MODELING

In developing the Intranet Gateway system we used the spiral life cycle model and followed the usability principles recommended by [2] and [5].
Figure 1. The Spiral Method [6]

As a platform we used Microsoft Active Server Pages (ASP). The spiral methodology reflects the relationship of tasks with rapid prototyping, increased parallelism, and concurrency in design and build activities. The spiral method should still be planned methodically, with tasks and deliverables identified for each step in the spiral software development life cycle (Figure 1).

Figure 2. Intranet Gateway courseware system: class content materials

An important issue during development was that the entire content is imported from the remote Public folder and nothing is hard-coded as content.

Figure 3. Intranet Gateway courseware system: delivery mechanism
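The "nothing hard-coded" delivery idea can be sketched roughly as a directory scan that rebuilds the course page from whatever currently sits in the subject's public folder. This is a minimal illustrative sketch, not the paper's actual ASP code; the field names and folder layout are assumptions.

```python
from datetime import datetime
from pathlib import Path

def list_subject_content(folder):
    """Return one entry per file in the subject's public folder.

    Nothing is hard-coded: the course page is rebuilt from whatever
    files the lecturer has copied into the folder.
    """
    entries = []
    for f in sorted(Path(folder).iterdir()):
        if f.is_file():
            st = f.stat()
            entries.append({
                "name": f.name,
                "format": f.suffix.lstrip(".").lower() or "unknown",
                "size_kb": round(st.st_size / 1024, 1),
                "modified": datetime.fromtimestamp(st.st_mtime)
                                    .strftime("%Y-%m-%d %H:%M"),
            })
    return entries
```

Because the listing is produced at request time, adding or removing a file in the folder immediately changes what students see, with no redeployment step.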

This makes it very easy to maintain and manage content. The entire content is loaded into the browser at the moment the required subject is accessed through the web-based interface, and the course administrator (lecturer) and the assistants engaged in the subject are not limited: they can offer content in different formats such as Word documents, Adobe pdf files, the MS chm format and all other available standard formats for content presentation. This is realized through MS ActiveX controls. The course administrator, simply using copy-paste functionality, puts the content into the folder from where the students access it. The content also shows the attributes of when it was created and modified, as well as its size, together with an icon and a description of its format.

In defining the initial requirements using this model, before building our first prototype we anticipated all the requirements and created a project management timetable that involved all anticipated activities, such as detailed design, needs analyses, the development phase, unit testing, integration and test, evaluation and implementation.

Table 1: Work breakdown activities with dependencies, duration, pessimistic (PT) and optimistic (OT) time

The project managing team introduced 3 work breakdown activities for the Gateway project. Then, based on these activities, using the Critical Path Method (CPM) in combination with PERT (program evaluation and review technique), we defined our critical activities and the time of delivery (the time when the project can be delivered), which was calculated to be 5 weeks; we also performed a risk analysis of the system, analyzing all the possible risks. The project had two critical paths (CP), lying between the activities CP: and the second critical path between the activities CP: The system was constantly, during all its stages, critically evaluated, first by the development team during meeting sessions and then by the users.
An issue that was highly considered was the news and announcements option. Based on the interviews with students and lecturers, it was concluded that students very rarely access announcements and read news regarding the University, the Faculty and the subject. In order to solve this issue we implemented scrolling news text developed in JavaScript, which according to our survey was highly appreciated. The entire content of the news and announcements is uploaded from a simple text file and shown on each subject page at the bottom left of the screen. The menu content and links are based on grounded theory research that included several comparative analyses of course management systems and web-based course systems, as well as on our experiences and surveys of the users' needs. Following the guidelines from [4], the interface is clear and the navigational structure is clearly marked using breadcrumbs, allowing users to orient themselves at each point about their position in the content structure hierarchy. A clear exit or shortcut to the other main content groups is also provided, so the user can easily navigate depending on his preferences and needs, observing the navigational guidelines rather than the aesthetic perspective of the interface. Aesthetics can be addressed later and has to be sacrificed for accessibility, content availability and overall functionality, since it is not a priority for such a system under development.

V. IMPLEMENTATION ISSUES AND SOLUTIONS

The functionality concerns both training activities and navigational ones, like moving through hypermedia objects and browsing large multimedia structures [1]. Training issues were brought to a minimum, as this was one of the main goals in the design of the project. The main issue was to develop an interactive system that would support user functionality efficiently and effectively, taking advantage of the new infrastructures currently available.
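The announcement mechanism described above (news kept in a plain text file and fed to a scrolling ticker) can be sketched server-side as follows. The file format here, one announcement per line with blank lines ignored, is an assumption for illustration; the paper's actual scroller is client-side JavaScript.

```python
def load_news(path):
    # One announcement per line; blank lines are ignored.
    with open(path, encoding="utf-8") as f:
        return [line.strip() for line in f if line.strip()]

def ticker_text(items, separator="  +++  "):
    # Join the announcements into the single string fed to the scroller.
    return separator.join(items)
```

The lecturer updates announcements by editing one text file; no page or code changes are required.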
All users are brought to a system that is already familiar and based on concepts and techniques they already know: how to use the public files. The information provided by the system is presented in a variety of ways, such as interactive simulations demonstrating the use of various Internet tools, hypertext, special sounds, appropriate images and animations, hypermedia objects, etc.

VI. CONCLUSION

As a result of the project we anticipated different outcomes, some positive and some negative. In order to evaluate the outcomes of the Intranet-managed teaching/learning, we based the evaluation on usability testing and evaluation questionnaires, which we used as the instrument to evaluate the outcomes. The questionnaire was focused on determining the level of content and on comparisons between the old teaching approach and the new Intranet approach. A grade average analysis (comparison) of students who attended classical classes and those using the Intranet Gateway is also done, in the following table, for the subjects Computer Application in Communications (CAC), Object Oriented Programming in Java (OOPJ), and Web Design and Multimedia (WDM). It is very difficult to conclude that this improvement in student success is due only to the impact of the Intranet Gateway, because other factors can also influence it (e.g., the methodology of teaching, differences between generations of students, etc.); however, the analysis of the questionnaire supports the notion that the main factor for improvement was this new Intranet-based Course Management System.

Table 2. Evaluation through student grade average (GPA)

  Subject | GPA before Intranet usage | GPA after Intranet usage
  CAC     | 6,85                      | 7,4
  CAC     | 6,                        | 6,7
  OOPJ    | 6,7                       | 6,95
  WDM     |                           |

The Intranet Gateway was implemented for the courses of the Communication Sciences and Technologies (CST) Faculty, Computer Science Department (CSD), of SEEU. Based on the observations and measuring instruments, the outcomes are as follows.

Positive outcomes:

1. With the use of the Intranet, the three classes became more effective. Compared with the projects from previous years, the quality of the technology tasks improved, especially the quality of the multimedia designs and Web-based instruction designs. Some of the projects showed that students accomplished more than they had expected, and even more than the listed course objectives, because they had more opportunities and time to explore the technologies and to improve learning through team collaboration.

2. Accessibility was substantially increased. The students now had more access to materials and feedback.

3. Student motivation in using the system was also substantially increased. The classes became more interesting for the students and they enjoyed learning, acquiring new knowledge and technology in a way they felt comfortable with, which, to a certain extent, motivated their learning. We concluded that learning is substantially more effective when the learners are highly motivated.

4. One of the most important outcomes was that students were actively involved throughout the teaching/learning processes, from which they learned to manage their own studies.

5. Students also learned how to use the Intranet to enhance teaching. Their projects reflected not only the technology designs, but also the use of the Intranet in teaching. They learned this from the way the classes were organized and from the way they were taught.
Negative outcomes:

6. The instructors complained that their work had at least doubled and that the requirements for course preparation were more demanding compared to previous years, which substantially affected their motivation for using the system because of the additional work required to maintain courses.

7. Students complained that not all of the class materials are provided electronically, in digital format as e-content, and asked why they still have to use books alone. They cannot take a book everywhere with them, while they can access the e-content easily and it does not impose any constraint on them.

8. The content provided is static in nature, simple text and static graphic images, which does not affect learning on any other level compared to the previous classes (based on user and instructor survey feedback), and there are no studies that convey a way of preparing e-content that could substantially increase learning compared to the classical classroom.

9. Based on user feedback, the e-learning survey and literature reviews, we concluded that the courseware system at this stage of usage provides only e-reading and not e-learning, since it does not have any distinct effect on learning, except at the level of motivation, attention, and accessibility.

Our observation, based on questionnaires, the survey and our analyses, is that the developed system is very cost effective, with a minimized need for maintenance and staff training, since it is based on the users' previously assessed level of knowledge. It proved practical and easy to use, especially in departments where computing knowledge is not a requirement.
Future work should involve further research on authentication issues, a forum option, and links to other University software such as the library software, the SEEU assessment software and central administration, and should also include options for importing and exporting the content as a SCORM-compliant content package that could easily be used in conjunction with other commercial Course Management Systems. Because of its cost effectiveness and simplicity of usage, and since it does not impose any previous IT skills or knowledge requirements, based on surveys and user feedback it proved to be a practical, useful and easy-to-maintain system with highly acceptable cost effectiveness across a broad range of institutions/departments.

REFERENCES

[1] Challa, C. D., and Redmond, R. T. (1996) "Is it a Lot of Hype? A Hypermedia Approach to Document Processing", Journal of Systems Management (47:3).
[2] Dumas, J. S., and Redish, J. C. (1999) A Practical Guide to Usability Testing, revised edition, Pearson Education Limited.
[3] Helic, D., Krottmaier, H., Maurer, H., and Scerbakov, N. (2005) "Enabling Project-Based Learning in WBT Systems", International Journal on E-Learning (IJEL), Vol. 4, Issue 4.
[4] Maciaszek, L. A., and Liong, L. L. (2005) Practical Software Engineering: A Case Study Approach, Harlow, England: Addison Wesley.
[5] Nielsen, J. (2000) Designing Web Usability: The Practice of Simplicity, New Riders Publishing, Indianapolis.
[6] Pressman, R. (2005) Software Engineering: A Practitioner's Approach, 6th ed., McGraw-Hill.

The Problems in Distant-Learning

Veneta Aleksieva

Abstract - This paper presents the problems in distant learning. I focus on feedback in e-learning as a basic method of contact between teachers and students, considered from two viewpoints: Student-Teacher feedback and learning-efficiency feedback.

Keywords - Distant learning, Feedback, Final test, Continual assessment tests

The achievements of telecommunication technologies pioneered the way for the advance of new technologies as teaching and communication tools, which can provide knowledge without the limitations of the traditional way of teaching. These new technologies greatly help the advance of distant learning, used in some form by a huge number of scientific, cultural and trade organizations. Today the tendency is to apply Distributed Education, in which not only is the student physically separated from the teachers (and the other students), but he also learns at his own pace and at a time convenient to him. The opportunity for learning and teaching independent of time or place is much facilitated by the use of Web-based courses. E-learning should respond to some requirements which should be leading when e-learning is developed, namely:

- a clearly defined target group;
- clearly and precisely defined learning goals;
- quality content (reliable, modern, achievable, appropriately presented for the target group);
- a proper teaching style (the curriculum presented and assimilated through proper teaching methods: active learning, interactive approaches, etc.);
- effective communication and feedback, considered from two viewpoints:
  - Student-Teacher feedback: the student's opportunity to make contact with his teacher (instructor, web administrator, etc.) for the various reasons which arise in the learning process;
  - learning-efficiency feedback: the teacher's opportunity to estimate, from the students' results (tests, exams, etc.)
the success level, his omissions in the presentation of the material, and the learning efficiency. The analysis of this type of feedback offers the chance to correct mistakes and omissions and to decrease them in the future through an adaptive strategy for presentation to a given target group.

Veneta P. Aleksieva is with the Technical University of Varna, Studentska St., Varna, Bulgaria.

Most people have a traditional education, from kindergarten to the end of high school, so they have their own expectations about the methods and the communication tools in the Student-Teacher relation. These are some of the problems which must be overcome:

- The change of the teaching role: the teacher is more a consultant and an assistant than a leading figure.
- The change of interaction with the students: distance and time influence the teacher's control over the learning process.
- The various communication media: the newest communication network tools are more complex and therefore demand more experience with them. E-learning provides topic-based forums, on-line conversation, e-mail, newsgroups, on-line tutorials and on-line tests of students.
- The students' isolation: it generates practical and psychological problems and requires the use of new communication network tools, forms and communication skills. There is no competition or contact with other students. The immediate teacher's support is missing too, so the students need more adaptation time. The student is a social individual who needs to be a member of a well-integrated group which collaborates and has shared goals and tasks.

The opportunity for fast feedback is the essence of e-learning, because e-learning cannot use the benefits of traditional learning: a direct relationship with the teacher, discussions of ideas and problems with other students, and teamwork in skills acquisition.
Here the accent is on cognitive learning, namely the acquisition of knowledge through the adoption and revision of information, but without the teacher monitoring and orienting the developed skills and habits. The student's motivation in e-learning is examined from the teacher's point of view: feedback is the only tool with which the needed results are reached. These appear to be precisely the reasons for the restrictions on applying e-learning in scientific areas which demand the accumulation of basic practical skills; there, mixed learning can be applied, but not e-learning alone (such professions are those of a dentist, a doctor, a pilot, a soldier, an underwater diver, a fireman, etc.). It is absurd to expect communication by e-mail to be equivalent to communication by audio- or videoconference. This fact determines the necessity of including alternative technical feedback solutions, with the proportion between them defined by the target group, the teaching goals, the expected results of student success, and the teaching style included in the e-learning curriculum. The feedback from students about their opinion of the weak and strong points of the system and of the communication is important for flexible curriculum adaptation and a change of teaching style (as far as possible). The analysis of this feedback

provides the opportunity to correct mistakes and decrease them in the future by adapting the strategy for curriculum presentation to the characteristics of the target group. It is wrong to seek this feedback only at the end of the e-learning course. It is necessary to follow it all through the course, because it is a regulator for maximal student motivation and it tries to meet the students' needs and requirements. This feedback traces the personal motivation of each participant. Thus the number of drop-outs (which at this moment is 3-5%) is expected to decrease considerably, because personal motivation is raised and the students' cognitive dissonance will be absent.

In the Teacher-Student relationship the role of grading is important, as it is a factor in the student's motivation and in his psychical and intellectual advancement. At the same time, e-learning makes quick and large-scale control possible in a short time, which necessitates the wide application of tests, because tests are the basic form of assessment. This raises the question: do tests adequately reflect the students' present skills and knowledge? In e-learning test grading, the form of a question in a test is in the background, and the attention is focused on the correlative relationship among the chosen units. The student's knowledge is assessed on the basis of these chosen units, analyzed as an impartial, indirect, group test for a level of knowledge, limited in time. It should be kept in mind that a grade is an opinion, an attitude, an assessment of human dignity, and insufficient to describe a person: it is a number or verbal expression which measures the knowledge or competence of students. Just this fact brings the impartiality of tests to a new level: impartiality from the viewpoint of ignoring the student's age, gender, race and physical characteristics.
The objectivity of the assessment depends on the extent to which the subjective estimate of the examiner is influenced by the test results, because a test is a method of examination which, in a standard situation, reveals personal features. These features are indications for determining a person's characteristics and for placing the examined person in a classification based on a group of comparable persons or on an ideal standard. In e-learning tests, the ideal standard is embedded in the environment which contains the test and is created by a team of teachers to achieve maximum objectivity of assessment. At the base of the test lie suggestive questions which contain a given set of alternatives; they may be presented in different forms. The test must be formed on the basis of meaningful relations among test items, following a principle of quotas, independent of the question form, the quantity of questions and the correlation among separate items in the test.

With continual assessment tests, the grading should be both quantitative and qualitative: it directs the students to the gaps in their learning. Based on this grading, the teacher summarizes the achieved level for all participants and sets up adequate feedback on two levels:

- Individual: to every participant;
- Group: to analyze the tendencies in knowledge gaps for the whole group, which is a basis for additional tasks or directions for self-training.

In contrast to continual assessment tests, the purpose of the final test is to check the level of absorption of the whole curriculum. It is taken once and is limited in time. While continual assessment tests relate to a particular part of the curriculum, the final test contains complex problems which consolidate subject matter from different lectures. It shows the student's ability to summarize and to apply acquired knowledge to the solution of new practical problems. The structure of a final test is identical to the structure of the current tests.
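The two feedback levels above can be sketched as a small aggregation over per-topic test scores. The students, topics and the 0.6 mastery threshold below are illustrative assumptions, not data from the paper:

```python
# Hypothetical continual-assessment results: student -> {topic: fraction correct}
results = {
    "ana":   {"routing": 0.9, "tcp": 0.4, "dns": 0.8},
    "boris": {"routing": 0.7, "tcp": 0.5, "dns": 0.9},
    "vera":  {"routing": 0.8, "tcp": 0.3, "dns": 0.7},
}

def individual_feedback(scores, threshold=0.6):
    # Individual level: topics where one student falls below the mastery threshold.
    return sorted(t for t, s in scores.items() if s < threshold)

def group_gaps(all_results, threshold=0.6):
    # Group level: topics where the group average signals a gap,
    # i.e. a candidate for additional tasks or self-training material.
    topics = {t for scores in all_results.values() for t in scores}
    gaps = {}
    for t in topics:
        avg = sum(s[t] for s in all_results.values()) / len(all_results)
        if avg < threshold:
            gaps[t] = round(avg, 2)
    return gaps
```

In this invented data set, every student struggles with the "tcp" topic, so the group-level report flags it even though the other topics are fine on average.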
The results of the final test are estimated, analyzed and reported by the teacher. While with continual assessment tests the role of the grade is only regulative, with the final test the grade is summative. We must be more precise about what the final test is. It may be taken at a distance, when the question is the summarizing of a part of the curriculum at a given level (Cisco, Microsoft, Sun, etc.), or in person (a course in a foreign language, a course in computer literacy, etc.). In-person tests are more prestigious, because correct identification of the participant is guaranteed and the necessary circumstances of test control are ensured. The general opinion in circulation is that distant learning does not carry the same weight as traditional learning, which is exactly why the in-person test gains in notability. Some educational organizations choose to hold the final in-person tests in Test centers located in different cities. This minimizes costs both for the students and for the e-learning sponsors. In this case the student, on his own initiative and by his own wish, takes an examination after his whole training. This test does not have the role of a mandatory final test for the completed e-learning course, because a person may take the certification exam directly, without participating in any course. The important characteristics of effective evaluation by test are consistency and trustworthiness: the results of the estimation will be the same if the examination happens at a different time, under other circumstances, or with different teachers. Trustworthy evaluation means that independent teachers will estimate the answer of one person in the same way. In exactly this aspect, e-learning tests reach very high trustworthiness. Traditional learning models cannot expand to meet the challenge of the new generation.
On the other side, distant learning may improve (or substitute for) traditional methods and materials (classroom discussions, practical sessions, printed books). The last years of globalization demonstrate that learning and teaching at a distance, especially when high-speed telecommunication technologies are used, is effective, because effectiveness is measured by the students' achievements, the return on investment, and the relationship of the students and the teachers to the learning process.

The Roles of Colours in the Multimedia Presentation Building

Petar Spalevic, Borivoje Milošević, Kristijan Kuk and Gabrijela Dimić

Abstract - This paper describes the concept of colours and their usage in multimedia and Web presentations. An explanation of the influence of light on the human eye is given. The paper also describes the spectrum of colours and the meaning of colours in creating the new Web presentation of the High School of Electrical Engineering in Belgrade.

Keywords - Human eye, light, colour, presentations, meaning of colours, colour scheme.

I. INTRODUCTION

The eye has two main components: the lens and the image sensor. The lens collects part of the light reflected from objects and focuses it on the image sensor. The image sensor then changes the light sample into a nerve signal [1]. The eye has two lenses: one of them is the meniscus of the eye, which is called the cornea, and the other is an adjustable lens inside the eye. The cornea does most of the refraction of light [1,2].

The light-sensitive surface which covers the inner surface of the back of the eye is called the retina. As seen in Fig. 1, the retina can be divided into three main layers of specialized nerve cells: one for changing light into nerve signals, one for processing images, and one for transmitting information to the optic nerve leading to the brain. There are two kinds of cells which detect light, rods and cones, so called because of their physical shape when seen under the microscope. The receptor cones (Fig. 2) are specialized in colour distinction, but can work only when there is enough light. There are three kinds of cones in the eye: sensitive to red, sensitive to green and sensitive to blue. This sensitivity to different colours occurs because they contain different photopigments, i.e., chemicals which absorb different wavelengths of (coloured) light [3]. Fig. 3
shows the wavelengths of light which trigger all three kinds of receptors. This is called RGB coordinating, and it shows how the colour information leaves the eye through the optic nerve. Human perception of colour is more complex because of the nerve processing at the lower levels of the brain.

Fig. 1. The human eye (cornea, lens, iris, pupil, retina, optic nerve fibre).

Fig. 2. The receptor cones.

Petar Spalevic is with the Faculty of Technical Sciences, Kneza Miloša 7, Kosovska Mitrovica, Serbia. Borivoje Milošević is with the Faculty of Electronics, Aleksandra Medvedeva 14, Niš, Serbia, boram@pttt.yu. Kristijan Kuk is with the High School of Electrical Engineering, Vojvode Stepe, Belgrade, Serbia. Gabrijela Dimić is with the High School of Electrical Engineering, Vojvode Stepe, Belgrade, Serbia, gdimic@vets.edu.yu.

Fig. 3. Wavelengths of the light which trigger all three kinds of receptors.

RGB coding is converted into a different coding scheme where colours are classified as: red or green, blue or yellow, light or dark. RGB coding is an important limit of human eyesight: the wavelengths which exist in the environment are grouped into only three wide categories. Visible light consists of seven groups of wavelengths. Those are the colours seen in the rainbow: red, orange, yellow, green, blue, indigo and violet. Light represents one form of the electromagnetic spectrum. The electromagnetic spectrum represents a collection of all energies organised into different categories established by wavelength for all types of energy.

Cold colours (Fig. 5), from purple to green-yellow, are excellent for texts.

Figure 4. Warm colours.

Figure 5. Cold colours.

II. COLOURS

The human eye can recognize only visible light. When we look at the sun it seems colourless, or white. White light is not a light consisting of only one colour or frequency, but is made up of many colour frequencies. The combination of the different colours in the visible spectrum produces light which is coloured or white. For us to be able to see the red colour of an object, there must be a source of light, for example the sun. When an object receives a light wave it emits a light wave red in colour; that is, it absorbs the blue and green parts of the spectrum but reflects the red part, which we recognise with our eyes. Red, green and blue are called basic colours because together they produce white light. This model of forming colour is called the additive model in physics. White and black are not colours, because in the absence of any kind of light white becomes black, and that is the reason why we cannot see anything in the dark. Our eye registers light with all its components (red, green and blue) as white.
In the absence of any source of light the eye registers black, and light whose basic components have equal intensity is registered as gray. Combining these three colours can thus give any colour of the colour spectrum: where blue and green overlap we get cyan, red and blue give magenta, and red and green give yellow.

III. COLOUR SCHEMES FOR MULTIMEDIA PRESENTATION

The first step in making colour schemes for multimedia presentations is defining the target group that watches your presentations. If they are young, use bright colours; if you are marketing for sales purposes, use natural colours like green and blue. Choose one colour tone to accentuate the page layout; next, choose a colour that complements the first, to use for titles and subtitles. The most commonly used colour schemes are the following.

Warm colours (Fig. 4): from red-purple to yellow. These colours make excellent contrast.

A. Complementary colours

Complementary colours are directly opposite each other on the colour wheel. They should be used carefully, as they are in direct opposition, but they provide excellent contrast; for example, red is complementary to green on the colour wheel. When complementary colours are used next to each other, vibrations are created that give a very pleasant feeling and really attract attention (Fig. 6).

B. Analogous colours

Analogous colours are any three successive colour segments on the colour wheel. These colours produce enough differentiation of elements without stepping away from their unity.

C. Monochromatic colours

Fig. 6. Colour wheel.

Monochromatic colours are all variations of one colour segment on the colour wheel (Fig. 6). You can use these colours without any fear, because they represent variations of a single colour, but the contrast is weak. They provide harmony, because all elements have something in common.
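These colour-wheel relationships can be computed by rotating the hue; the sketch below uses Python's standard colorsys module and the HSV hue circle. Note one assumption: on the RGB/HSV wheel the complement of red is cyan, whereas the painter's wheel used in the text pairs red with green.

```python
import colorsys

def rotate_hue(rgb, degrees):
    """Rotate a colour around the HSV hue circle; channels are floats in 0..1."""
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    return colorsys.hsv_to_rgb((h + degrees / 360.0) % 1.0, s, v)

red = (1.0, 0.0, 0.0)
complementary = rotate_hue(red, 180)                   # opposite segment
analogous = [rotate_hue(red, d) for d in (0, 30, 60)]  # three successive segments
triadic = [rotate_hue(red, d) for d in (0, 120, 240)]  # three colours 120 deg apart
```

With a 30-degree segment width, `analogous` picks three neighbouring segments, and `triadic` picks three segments spaced evenly around the wheel.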


Petar Spalevic, Borivoje Milošević, Kristijan Kuk and Gabrijela Dimic

D. Traditional colours

Traditional colours are any three colours found at an angle of 120° with respect to one another. If the colour wheel were a clock, blue would be at 12 o'clock, green at 4 o'clock and red at 8 o'clock. This colour scheme gives a presentation a good colour balance.

IV. THE MEANING OF COLOURS

Colours have a direct and intensive influence on humans. To a wide degree, our actions and reactions depend on colours; colours, for example, produce emotional responses. When Blackfriars Bridge in London was painted green, the number of suicide jumps from it was lowered by 34%. The human eye can see about seven million colours. When our eyes move from one colour to another, they adapt to the change, and depending on that adaptation we register the so-called visual effect. Lighter colours reflect more light, which stimulates our eyes [4]. The human eye registers light colours first, and the first colour noticed is yellow. Large contrasts between colours blind the human eye, so they are hard to look at. The meaning of colours mainly depends on the culture we were brought up in; for example, red does not have the same meaning in the U.S.A. and in China. Just guess why. The meaning also changes with our age and gender: women prefer colours from red to blue, while men prefer their opposites, and older people prefer darker colours to lighter ones [4, 5]. The meaning of colours is also very important in creating Web sites. The colour scheme of the Web site of the High School of Electrical Engineering in Belgrade (a new version, currently being created) is based on three different colours (Fig. 7); Fig. 8 gives the colour scheme of this Web site.
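The grayscale colour names tabulated next are standard CSS/HTML keywords; a small helper (hex values taken from the CSS colour keyword list, not from the paper's table) maps them to RGB components:

```python
# Standard CSS grayscale keywords and their hex triplets (per the CSS colour
# keyword list; these values are assumptions, not copied from the paper).
GRAYS = {
    "lightgray": "#D3D3D3", "gray": "#808080", "darkgray": "#A9A9A9",
    "dimgray": "#696969", "lightslategray": "#778899",
    "slategray": "#708090", "darkslategray": "#2F4F4F",
}

def hex_to_rgb(triplet):
    """Parse '#RRGGBB' into an (r, g, b) tuple of ints."""
    t = triplet.lstrip("#")
    return tuple(int(t[i:i + 2], 16) for i in (0, 2, 4))

print(hex_to_rgb(GRAYS["gray"]))  # (128, 128, 128)
```

A historical quirk of this list worth knowing when designing: darkgray (#A9A9A9) is actually lighter than gray (#808080).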
HTML colour names (grayscale) and their hex triplets (standard CSS values):

Name              Hex triplet
lightgrey         #D3D3D3
gray              #808080
darkgray          #A9A9A9
dimgray           #696969
lightslategray    #778899
slategray         #708090
darkslategray     #2F4F4F

V. THE EXPLANATION OF THE COLOUR SCHEME

A. Blue

Blue is one of the most popular colours for presentations. It mirrors quiet, harmony, trust and stability, and it is often used to symbolize honesty and trustworthiness. Blue is associated with water: on coloured maps, oceans, lakes and streams usually appear blue. This colour agrees with other pastel colours and is perfect with natural colours like green and gray [4, 5].

B. Green

Fig. 8. Colour scheme.

Green shows care and evokes positive feelings; it represents loyalty and intelligence. Green takes up a large portion of the CIE chromaticity diagram, because it lies in the central area of human colour perception [4, 5].

C. Grey

Grey looks like a shadow, but it represents practicality, safety and credibility when used with cold tones of blue or magenta. Grey also symbolizes mediocrity, the background noise: a "grey person" is someone who goes unnoticed, a wallflower [4, 5].

D. Orange

Orange contrasts with blue and is highly visible against a clear sky; therefore orange is often used for safety purposes and, in general, to enhance visibility [4, 5].

VI. CONCLUSION

Fig. 7. Web site of the High School of Electrical Engineering in Belgrade.

Web colours are the colours used in designing web pages, together with the methods for describing and specifying those colours. Authors

of web pages have a variety of options available for specifying colours for elements of web documents. Web colours have an unambiguous colorimetric definition, sRGB, which relates the chromaticities of a particular phosphor set, a given transfer curve, an adaptive whitepoint and viewing conditions. These have been chosen to be similar to many real-world monitors and viewing conditions, so that even without colour management the rendering is fairly close to the specified values. However, user agents vary in the fidelity with which they represent the specified colours. More advanced user agents use colour management to provide better colour fidelity; this is particularly important for Web-to-print applications.

REFERENCES

[1] N. Chapman and J. Chapman, Digital Multimedia, Chichester, John Wiley & Sons.
[2]
[3] J. D. Foley, A. van Dam, Fundamentals of Interactive Computer Graphics, John Wiley & Sons.
[4] R. Colvin Clark, C. Lyons, Graphics for Learning, John Wiley & Sons.
[5] D. Jokanovic and D. Martinovic, "E-learning: challenges and perspectives", Learning Without Limits: Developing the Next Generation, Anniversary Conference of the European Distance Education Network, Stockholm, Sweden.

SESSION ECST I: Electronic Components, Systems and Technologies I


Synthesis of DCS in Copper Metallurgy

Dragan R. Milivojević, Viša Tasić, Marijana Pavlov and Vladimir Despotović

Abstract: Many technological processes in production plants demand transfer of information and interaction with the process from remote locations (from a control centre, for example). To satisfy these requests, a complex computer network sometimes has to be created: a distributed control system (DCS), a type of LAN with local and remote process monitoring and control functions. This paper presents the results of developing low-cost and easily applied hardware and software solutions for process monitoring in the Copper Mining and Refining Complex Bor.

Keywords: monitoring system, process control, computer network, real time

I. INTRODUCTION

The Department for Industrial Informatics at the Copper Institute Bor produces industrial computer systems. The core of the developed monitoring and control systems is the third generation of the MMS (Microprocessor Measuring Station), a specific industrial PLC (Programmable Logic Controller) fully designed and developed in the Copper Institute. A classical PC serves as the remote workstation and allows the interaction between operator and process. The PC runs in-house developed, dedicated software for real-time operation, with standard SCADA (Supervisory Control And Data Acquisition) functions adapted for use in a network environment. The 'Copper production line' is a complex organizational unit of the Copper Mining and Refining Complex (RTB) Bor, comprising several production plants. The technological equipment at all plants was very old, with poor process control systems. In most of them local monitoring systems were quite satisfactory; those systems, based on an MMS and an interactive PC, were implemented at all key production lines. In this way, technologists can follow the process in their own plants, and the performance was satisfactory.
However, the whole copper production process is much more complex, and information about parameters from remote plants is often needed. This is the main reason why the integration of these partial systems into an industrial Local Area Network was carried out.

Dragan R. Milivojević is with the Copper Institute, Zeleni bulevar 35, 9 Bor, Serbia. Viša Tasić is with the Copper Institute, Zeleni bulevar 35, 9 Bor, Serbia. Marijana Pavlov is with the Copper Institute, Zeleni bulevar 35, 9 Bor, Serbia. Vladimir Despotović is with the Technical Faculty in Bor, Vojske Jugoslavije, 9 Bor, Serbia.

The result is an industrial distributed control system (IDCS). In this way a distributed system for monitoring all key phases of the technological process was formed. The design of the implemented distributed system has been dictated by the practical requirements of the concrete application.

II. HARDWARE PLATFORM

To choose the hardware infrastructure for building up the network, it is useful to present the basic characteristics of the network nodes and the way they operate. The system consists of a number of industrial automatic measuring stations (PLCs, data loggers, etc.) and several PCs. The PCs are used for monitoring and interaction with the process (checking the actual state of parameters and remote control). In the general case, for distributed processes a large number of PLCs are required to perform measurement of process parameters, data acquisition, control and transfer to the host computer (PC). Data about the status of the process are transferred from the place of origin (PLC) to the decision-making place (PC). On the PC these data are processed, the results are presented in a proper form on the screen, and everything is stored in external memory. If the system performs a remote control function, then depending on the status of the process the PC sends commands to the PLC, which cause appropriate actions and affect the process.
Apart from their effects on the process, the commands also have effects on the PLC itself: testing its functionality, time synchronization, etc. The designed and implemented network has to satisfy several basic requirements: to provide correct and efficient data transfer from the PLCs; to execute timely transfer of commands to a PLC (while the command is still active and actual); and to realize supplementary transfer of data from a PLC in case there are any faults in the normal transfer [1]. The core of the process control and monitoring system is the MMS. It is based on a Motorola 68HC11 microcontroller. The main characteristics of the MMS (standard configuration) are: a Motorola 68HC11E microcontroller with an internal eight-channel, 8-bit A/D converter; 64 analog inputs; digital state signals (input + output) with a mutual point (or independent); an RS232 communication port; 48 (56) KB of RAM for data; and 6 (8) KB of EPROM for software. A local display and functional keyboard give the possibility of device control, time synchronization and starting a measurement. The MMS can work independently of the monitoring computer (the PC-based system) and can control the process itself. It can also work as a data logger, memorizing data messages in local RAM and transferring them to the PC later, when the connection with the monitoring PC is established. Because of costs, there was a reasonable demand to use the existing private telephone lines as the hardware network infrastructure, as much as possible. This practically means,

Fig. 1. Block diagram of the realized DCS. (Legend: switches, modems, a special Zyxel modem, 8-channel units, E-bridges, channel bridges, phone lines, UTP cables, PC servers and clients, and MMS units; nodes at the production, power, smelt-roasting, smelting, tank house and H2SO4 plants, maintenance, and the offices of the director, manager, technical directors and technical preparation.)

that some telephone terminals become computer network nodes. The dynamics of the processes request an appropriate response time, and this circumstance demands a network with satisfactory performance. Building up such a network needs different kinds of network equipment: modems, routers, bridges, switches (see Fig. 1).

III. SOFTWARE ENVIRONMENTS

To control and maintain the realized industrial network, the software has to cover functions on two levels. Regarding the network structure, it is possible to differentiate software solutions at both levels: PLC and PC. The EPROM of the MMS holds the resident software (firmware), which consists of executable versions of the test, control, operational and communication software modules. The operational program module is responsible for measuring the analogue channels and checking the states of the digital inputs. The type of measuring, the sampling rate and other parameters can be changed using the local keyboard, or commanded from a monitoring PC. Each message is transferred to the remote PC, or memorized in local RAM (if the PC is disconnected), so it can be transferred later when the

connection is established. The MMS can work independently of the monitoring computer, so local process control is also possible. If any parameter exceeds given limits, an alarm message is generated; even better, if any parameter shows a trend of reaching a limit value, a warning message can be generated first, so that the operator, or the system itself, can react in time. The MMS control program (executive system) contains a complex communication module with procedures for handshaking, data transfer, transfer control and recovery, and regular disconnection. It is a kind of protocol: the in-house developed ASP protocol [3]. The corresponding process control application for the PC-based system is developed using the Microsoft Visual C++ 6.0 development kit [2], and its main characteristics are: communication with the MMS, data processing, data presentation, process control, data archiving, and off-line analysis and interpretation of data. The interactive SCADA program contains a communication unit with a few procedures written in assembler that refer to the physical port addresses. The monitoring and control program runs very stably under Windows 98; the client version runs on both Windows 98 and Windows XP. The application communicates with the MMS as a secondary network node (the PC is the master) using ASP (Asynchronous Serial Protocol). The data can be displayed in real time on dynamic synoptic screens, real-time graphs or tables. All data are saved in a database in three forms: daily reports, monthly reports and log files. The history of the process can be displayed in the same manner as real-time data. In order to get better performance, the user can change the process priority relative to the other active applications on the PC, from low to real-time priority. In high or real-time mode, the application performance is very stable.
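The ASP master-slave exchange is not specified in detail here, so the sketch below is only a generic illustration of how a PC-master poll frame with a checksum might be built; the frame layout, field names and XOR checksum are assumptions, not the actual ASP format.

```python
def build_poll_frame(station: int, command: int, payload: bytes = b"") -> bytes:
    """Build a hypothetical frame: STX | station | command | length | payload | XOR checksum."""
    body = bytes([station, command, len(payload)]) + payload
    checksum = 0
    for b in body:
        checksum ^= b          # simple longitudinal parity over the body
    return b"\x02" + body + bytes([checksum])

frame = build_poll_frame(station=3, command=0x10)  # poll a hypothetical station 3
```

A master-slave protocol of this shape lets one PC address many measuring stations on a shared serial line, with the checksum catching single-byte transmission faults before a retransfer is requested.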
Additional facilities are also possible, such as on-line changing of the measuring range, changing alarm limits, scaling the axes of the real-time diagram, etc.

IV. NETWORK DESIGN

The realized DCS is built up from several sub-networks. A sub-network is a local plant monitoring system containing one or more MMS units and subordinated PCs. Such a PC can work as a local server, a workstation, or both. The industrial network is constructed from heterogeneous nodes, PCs and MMS units. The PCs in the network run in different modes: some of them are local servers, and the others are clients (remote monitoring terminals, see Fig. 1). All PCs run under Windows OS and use its networking facilities. The system monitors the process at five key plants of the Copper Mining and Refining Complex Bor: the Copper Smelting Plant, Converters Plant, Tank House Plant, Sulphuric Acid Plant and Power Plant. The network is decentralized, and servers are sited at all of the plants, connected to the corresponding MMS units. Only one server application can run at a time on one server PC, but client applications can run in multitasking, which gives the possibility of monitoring processes locally and at remote plants at the same time; technologists can thus easily monitor the process flow. The client applications are restricted to monitoring only, and all control functions are disabled. As the distances between some network nodes are greater than 3 km, leased telephone lines are used as transmission lines; a second reason is that they already existed, so this solution was cost effective. The next step, integration with the system for air quality control, was carried out successfully. The simultaneous display of process parameters and concentrations of polluting substances is very significant for the control of air pollution in the town zone.

V. WAY OF OPERATION

Technologists, as process engineers, have to monitor their own plant using the local sub-network at the local server PC.
However, they sometimes need information about process parameters from another (remote) plant, because of the interdependence between production lines. In this case the client version of SCADA has to be activated on the same PC (the local server). There are then two independent programs (local and remote SCADA) running concurrently, like two Windows services. If needed, many client programs can be activated at every PC, but only one server, and it has to be the appropriate one. There are many differences between the server and client programs, the main one being the possibility of interaction and of sending commands to the MMS and the process. Because of the decentralized network control, and the chance of confusion when changing the system configuration and acting on the process, the number of functions in the client version is significantly reduced. Practice has shown that massive data transfer (sometimes it is necessary to take a lot of historical data from a remote server) is very slow if the PC works under Windows XP.

VI. CONCLUSION

The described monitoring system has been in use for over a year and has shown itself to be very efficient. All local and remote systems give timely and quite sufficient useful information, and the technologists have made improvements resulting in better productivity. Its significance became even greater with the integration with the air quality monitoring system: as the air pollution in the town (immission) is a direct consequence of the technological process, interaction between these systems is very useful. Apart from the favourable price/performance ratio, the realized network has shown itself to be very reliable and especially resistant to poor communication conditions, thanks to solid transfer quality control.

REFERENCES

[1] D. Milivojević, "Software elements of MMU", ETRAN 2004, Conference Proceedings CD, Čačak, Serbia, 2004.
[2] D. Milićev, Object Oriented Programming in C++, Mikro Knjiga, Belgrade, 1995.
[3] D. Milivojević, V. Tasić, D. Karabašević, "Communication in realized real-time systems", ETRAN 2003, Conference Proceedings CD, Herceg Novi, Serbia and Montenegro, 2003.


Removal of Power-line Interference from ECG in Case of Non-multiple Even Sampling

Georgy S. Mihov and Ivan A. Dotsinsky

Abstract: This paper deals with some aspects of the subtraction procedure, which removes the power-line interference without affecting the intrinsic ECG components. The improvement addresses the cases of high, even non-multiplicity between the sampling rate and the rated interference frequency.

Keywords: ECG, power-line interference removal, subtraction procedure.

I. INTRODUCTION

ECG recordings are often contaminated by residual power-line interference despite the high common-mode rejection ratio of the amplifiers used [1, 2] and the variety of sophisticated but conceptually traditional digital filters [3, 4], which suppress to a different extent the intrinsic ECG components around the power-line (PL) frequency. This drawback was overcome some decades ago by the so-called subtraction procedure [5]. Its principle consists of: i) applying a linear-phase digital filter on linear ECG segments with near-to-zero frequency content (usually physiological baseline, low-amplitude P-waves and some small parts of T-waves); ii) continuously updating and memorizing the removed phase-locked interference components; and iii) subsequently subtracting the corresponding component from the signal wherever non-linear segments are encountered. Later, many improvements of the procedure were developed to cope with PL amplitude and frequency variations, including the cases of non-multiple sampling, which lead to a real (non-integer) number n of samples within one rated PL period [6-8]. The aim of this study is to enhance the accuracy of the PL interference (PLI) elimination when the truncated real number n* is even and the multiplicity (sampling rate Φ against interference frequency F) is high. II.
THEORETICAL CONSIDERATIONS, EQUATIONS, EXPERIMENTAL RESULTS

Georgy S. Mihov is with the Faculty of Electronic Engineering and Technologies, Technical University of Sofia. Ivan A. Dotsinsky is with the Centre of Biomedical Engineering, Bulgarian Academy of Sciences.

According to the generalized structure of the subtraction procedure [7, 8], the phase-locked interference B_i to be subtracted from the ongoing contaminated sample Y_i is estimated by means of an interference temporal buffer [i-n*, i-1]. Its terms B_(i-1), B_(i-2), ..., B_(i-k), ..., B_(i-n) represent filtered middle samples of a moving window over the contaminated sequence X. When Φ is a multiple of F, B_i takes the value of B_(i-n). Otherwise, B_i is estimated from the buffer content, which is processed by an additional moving-averaging filter with transfer coefficient K_FB for f = F. The corresponding equations are:

(1/n*) · Σ_(j=-n*+1..0) B_(i+j) = B_mid · K_FB,   (1)

K_FB = sin(n*·π·F/Φ) / (n*·sin(π·F/Φ)),   (2)

B_i = B_mid = (1/(n*·K_FB)) · Σ_(j=-n*+1..0) B_(i+j).   (3)

Here (1/n*)·Σ B_(i+j) is the so-called K-filter [7, 8]. It is of low-pass type, with a coefficient vector K consisting of n* terms of equal weight 1/n*. This filter is transformed into a high-pass type by subtracting its output from B_mid: B = B_mid - (1/n*)·Σ B_(i+j); this is called the B-filter [7, 8], and its transfer coefficient at f = F is 1 - K_FB. The next modification, denoted the B*-filter [7, 8], is B* = B/(1 - K_FB), with transfer coefficient equal to 1 at f = F, which is in fact Eq. (3). When n* is odd, n* = 2m+1, B_mid coincides in time with the real sample B_(i-(n*-1)/2), and Eq. (3) becomes the corresponding expression for B_i in terms of that sample and the buffer sum (Eq. 4).

Fig. 1 represents the B*-filter synthesis for PLI evaluation in the case of Φ = 250 Hz with F = 48 Hz and F = 50 Hz, both of them with odd multiplicity n = 5. The traces are obtained in the MATLAB environment with the filter vector coefficients K = [1 1 1 1 1]/5 and B = [-1 -1 4 -1 -1]/5.

B* = [-1/5 -1/5 4/5 -1/5 -1/5]/(1 - K_FB).

Fig. 1. B*-filter synthesis: a) basic K-filter; b) B-filter; c) and d) B*-filters for F = 48 Hz and F = 50 Hz (Φ = 250 Hz).

The removal of PLI with F = 50 Hz is shown in Fig. 2 (traces: original signal; interfered signal; clean signal and linearity; zoomed error). The error does not exceed ±5 μV.

Fig. 2. Φ = 250 Hz, F = 50 Hz, odd multiplicity n = 5.

If n* is even, n* = 2m, B_mid is virtual and does not coincide with a real buffer sample, lying between B_(i-n*/2) and B_(i-n*/2+1), which are spaced at τ = 1/Φ, as shown in Fig. 3.

Fig. 3. Content of the temporal buffer in even multiplicity: the temporal interference buffer B_(i-n*), ..., B_(i-1); the restored value B_mid halfway between B_(i-n*/2) and B_(i-n*/2+1); the extrapolated value B_i of the interference.

Since the buffer has no constant component, the next equations are derived assuming that the phase of B_mid is zero:

B_(i-n*/2)   = A·sin(ω(t - 1/(2Φ))) = A·sin(ωt)·cos(ω/(2Φ)) - A·cos(ωt)·sin(ω/(2Φ)),
B_mid        = A·sin(ωt),
B_(i-n*/2+1) = A·sin(ω(t + 1/(2Φ))) = A·sin(ωt)·cos(ω/(2Φ)) + A·cos(ωt)·sin(ω/(2Φ)).   (5)

The following expression is obtained by substituting B_mid for A·sin(ωt) in the sum of the first and third equations:

B_mid = (B_(i-n*/2) + B_(i-n*/2+1)) / (2·S_C),   S_C = cos(π·F/Φ).   (6)

Eq. (1) then becomes (1/n*)·Σ B_(i+j) = K_FB·(B_(i-n*/2) + B_(i-n*/2+1))/(2·S_C), from which the expressions for the extrapolated value B_i (Eq. 7), the B*-filter (Eq. 8) and the B-filter (Eq. 9) follow.

Fig. 4 represents the B*-filter synthesis for PLI evaluation according to Eq. (7) in the case of Φ = 250 Hz and F = 60 Hz (even multiplicity n* = 4), with S_C = 0.729 and K_FB = 0.0458. Fig. 4.
B*-filter synthesis for F = 60 Hz and Φ = 250 Hz.
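Two of the building blocks above can be checked numerically in a few lines: averaging over a whole number of interference periods cancels the power-line sinusoid on a linear segment (the K-filter idea), and for even multiplicity the virtual middle sample follows from its two buffer neighbours as in Eq. (6). This is an illustrative sketch with assumed rates, not the authors' MATLAB code:

```python
import math

PHI, F, A = 240.0, 60.0, 1.0      # illustrative rates: even multiplicity n = 4

# 1) Averaging n = PHI/F samples (one full interference period) removes the
#    power-line sinusoid from a locally constant ("linear") segment.
n = int(PHI // F)
x = [0.2 + A * math.sin(2 * math.pi * F * i / PHI) for i in range(n)]
assert abs(sum(x) / n - 0.2) < 1e-9           # interference averaged out

# 2) Virtual middle sample between two neighbours spaced 1/PHI apart:
#    B_mid = (B_left + B_right) / (2 * cos(pi * F / PHI)).
w, t = 2 * math.pi * F, 0.123                 # arbitrary instant
b_left = A * math.sin(w * (t - 1 / (2 * PHI)))
b_right = A * math.sin(w * (t + 1 / (2 * PHI)))
b_mid = (b_left + b_right) / (2 * math.cos(math.pi * F / PHI))
assert abs(b_mid - A * math.sin(w * t)) < 1e-9
```

The second check is just the trigonometric identity sin(a - d) + sin(a + d) = 2·sin(a)·cos(d), which is what makes the interpolation of the virtual B_mid exact for a pure sinusoid.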

Experimental results with the synthesized B*-filters of Fig. 4 can be observed in Fig. 5; the traces are as in Fig. 2 (original signal; interfered signal; clean signal and linearity; zoomed error).

Fig. 5. Φ = 250 Hz, F = 60 Hz, even multiplicity n* = 4.

Similar results have been obtained with Φ = 500 Hz and F = 60 Hz (even multiplicity n* = 8).

III. INVESTIGATION OF THE SUBTRACTION PROCEDURE STABILITY IN NON-LINEAR ECG SEGMENTS

These investigations were prompted by colleagues at Cairo University, who experimented with PLI removal in the case of even multiplicity with a very high truncated real number n*, using an even B-filter identical to the K-filter according to publication [8]. It was found that the procedure is prone to self-excitation if the relative weight of the sample to be compensated in the B-filter equation is 1 in the presence of low-frequency components within the temporal buffer; for a longer non-linear segment the B-filter transforms into an IIR filter. This process was studied in the following way:
1. A low-frequency 4 Hz signal epoch is synthesized.
2. A synthesized PLI is added.
3. The subtraction procedure is applied with the flag preset to "linear segment" during the first half of the epoch, and to "non-linear segment" until the epoch end.
4. The committed error (the difference between the interference-free signal and the processed one) is analyzed.

The next figures illustrate the results obtained. All abscissas are in s; the ordinates are scaled in mV. The K-filter is marked as the K_B-filter wherever it is used to build the B*-filter according to [8].

Experiment 1 (Fig. 6): B_i is calculated using Eq. (7). The applied filters are the same as in the synthesis shown in Fig. 4, and the traces are the same as in Fig. 5. No self-excitation can be observed; the filter is stable for infinite duration of the non-linear signal (see the fourth trace). Further, Figs.
7-11 consist of the filtered signal and the zoomed-error graphic only.

Fig. 6. Φ = 250 Hz, F = 60 Hz, even multiplicity n* = 4; B_i is calculated from Eq. (7).

Experiment 2 (Fig. 7): An even K_B-filter is applied, which is the same as the K-filter for even multiplicity, and B_i is calculated by the even B-filter equation (Eq. 10). Self-excitation can be observed almost immediately after the non-linear segment begins, due to residual components in the temporal buffer. An additional experiment was made with an input signal free of low-frequency components (the lower graphic): the self-excitation then occurs later, because of computing-error accumulation.

Fig. 7. Φ = 250 Hz, F = 60 Hz, even multiplicity n* = 4.

Experiment 3 (Fig. 8): A reduced even B*-filter is applied; the other data are as in the previous experiment.

Fig. 8. Reduced even B*-filter.

Experiment 4 (Fig. 9): The same, but with a reduced even K-filter for the linear segments. The lower graphic is obtained by setting the non-linear-segment flag when the sinusoid is passing through zero (minimum non-linearity); the error is then considerably reduced.

Fig. 9. Reduced B*-filter and reduced even K-filter.

Experiment 5 (Fig. 10): The B*-filter is synthesized from an odd K_B-filter; n* is computed using the MATLAB floor function (which generates the largest integer not exceeding its argument).

Fig. 10. Even B*-filter based on an odd K_B-filter.

Experiment 6 (Fig. 11): The same as Experiment 5, but with an even K-filter for the linear segments.

Fig. 11. Reduced even B*-filter and even K-filter for the linear segments.

IV. DISCUSSION AND CONCLUSION

The presented experiments are only a significant excerpt of all those carried out. They lead to the following conclusions:
1. Except for the even B-filter used in Experiment 2, all the other filters are stable for infinite duration of the non-linear ECG signal.
2. The approximation procedure is negligibly influenced by the type of the K-filter intended for PLI removal from the linear ECG segments; this may be seen by comparing the couples Experiment 3 / Experiment 4 and Experiment 5 / Experiment 6.
3. The approximation error depends considerably on the residual low-frequency components within the temporal buffer (see Experiment 4).
4. Of all the analyzed B-filters, the approximation procedure using Eq. (7) is the most efficient for PLI removal from the non-linear ECG segments in the case of even multiplicity.

REFERENCES

[1] J. C. Huhta, J. G.
Webster, "60-Hz interference in electrocardiography", IEEE Trans. Biomed. Eng., vol. 20, pp. 91-101, 1973.
[2] M. van Rijn, A. Peper and C. A. Grimbergen, "High-quality recording of bioelectric events, Part 1: Interference reduction, theory and practice", Med. Biol. Eng. Comput., 28, 1990.
[3] S. C. Pei, C. C. Tseng, "Elimination of AC interference in electrocardiogram using IIR notch filter with transient suppression", IEEE Trans. Biomed. Eng., 42, pp. 1128-1132, 1995.
[4] P. S. Hamilton, "A comparison of adaptive and nonadaptive filters for reduction of power line interference in the ECG", IEEE Trans. Biomed. Eng., 43, pp. 105-109, 1996.
[5] C. Levkov, G. Michov, R. Ivanov and I. Daskalov, "Subtraction of 50 Hz interference from the electrocardiogram", Med. Biol. Eng. Comput., 22, 1984.
[6] I. Christov, I. Dotsinsky, "New approach to the digital elimination of 50 Hz interference from the electrocardiogram", Med. Biol. Eng. Comput., 26, 1988.
[7] G. Mihov, I. Dotsinsky and Ts. Georgieva, "Subtraction procedure for power-line interference removing from ECG: improvement for non-multiple sampling", J. Med. Eng. Technol., 29, 2005.
[8] C. Levkov, G. Mihov, R. Ivanov, I. Daskalov, I. Christov and I. Dotsinsky, "Removal of power-line interference from the ECG: a review of the subtraction procedure", BioMed. Eng. OnLine, 4:50, 2005.

Synthesizing Sine Wave Signals Based on Direct Digital Synthesis Using Field Programmable Gate Arrays

Hristo Z. Karailiev and Valentina V. Rankovska

Abstract: An analysis of the design flow for creating devices and systems based on Altera's Field Programmable Gate Arrays (FPGA) is made in the present paper. The digital part of a sine wave synthesizer based on an FPGA has been designed. The synthesizer is realized using the TREX C1 development system of Terasic Technologies Inc.

Keywords: Field Programmable Gate Arrays (FPGA), design flow, Direct Digital Synthesis (DDS), sine wave frequency synthesizer.

I. INTRODUCTION

The method of direct digital synthesis (DDS) [1] of signals with arbitrary form is well known, but for a long time its wide implementation was prevented by the low level of technology development. Various methods, analogue and digital, are known for producing output of arbitrary form. The DDS method has some advantages: high resolution; an extremely fast transition to another frequency with continuous phase; and a digital implementation that allows easy realization of microprocessor control. These advantages determine its growing usage in function generators, various modulations in communications, etc. Various digital implementations of DDS synthesizers have been described in the literature: based on discrete components and small-scale integrated circuits, such as dividers, counters, etc.; on modern application-specific integrated circuits, such as the AD9850, AD9858 and AD9857; and lately on Field Programmable Gate Arrays (FPGA). A drawback of the application-specific integrated circuits is that they produce output of a certain form; for instance, the AD9850 produces a stable, frequency- and phase-programmable digitized analog output sine wave. They are not so suitable for applications where signals with arbitrary wave form have to be created, for instance function generators.
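The DDS principle behind such synthesizers, an N-bit phase accumulator stepped by a frequency tuning word (FTW) whose top bits address a sine look-up table, with f_out = FTW · f_clk / 2^N, can be sketched as a behavioural model. This is an illustration in software, not the paper's FPGA design, and the parameter values are assumptions:

```python
import math

N = 32                         # phase accumulator width, bits
LUT_BITS = 8                   # sine look-up table address width
LUT = [math.sin(2 * math.pi * k / 2**LUT_BITS) for k in range(2**LUT_BITS)]

def dds_samples(ftw, count):
    """Behavioural DDS model: accumulate phase, address the sine LUT."""
    acc, out = 0, []
    for _ in range(count):
        out.append(LUT[acc >> (N - LUT_BITS)])   # top bits -> table index
        acc = (acc + ftw) & (2**N - 1)           # phase wraps at 2**N
    return out

# Tuning word for f_out = 1 kHz at f_clk = 1 MHz: ftw = f_out * 2**N / f_clk.
ftw = round(1_000 * 2**N / 1_000_000)
samples = dds_samples(ftw, 1000)                 # about one output period
```

Because the accumulator simply wraps, changing the tuning word switches frequency instantly and with continuous phase, and the frequency resolution is f_clk / 2^N (about 0.23 mHz for these assumed values), which is the high-resolution advantage mentioned above.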
That is why field programmable gate arrays have been used in the present work. This approach allows many functions for creating arbitrary-form output to be integrated in a single chip [3]. Aim of the report: design and implementation of a sine wave synthesizer based on the direct digital synthesis method and field programmable gate arrays.

Hristo Z. Karailiev is with the Technical University of Gabrovo, H. Dimitar 4, Gabrovo, Bulgaria. Valentina V. Rankovska is with the Technical University of Gabrovo, H. Dimitar 4, Gabrovo, Bulgaria.

Main problems of the report: analysis of the design flow for creating devices and systems based on Altera's FPGA; designing the table holding the values of the sine wave; designing the digital part of the sine wave synthesizer, based on an Altera FPGA; implementation of the synthesizer using the TREX C1 development system.

II. DESIGN FLOW WITH QUARTUS II SOFTWARE

The characteristics, features and resources of FPGAs from the leading producers are outlined in [7], where a device of Altera was chosen. Altera's Quartus II Web Edition v6.0 software has been used for the design. The main stages of the design flow with Quartus II are shown in Fig. 1 [5]. Design entry for a device or system can be carried out in one of the following ways: As a program written in one of the hardware description languages AHDL, VHDL or Verilog HDL; the integrated text editor of Quartus II can be used. The software also offers the so-called MegaWizard Plug-In Manager, which supplies the designer with high-level library blocks (megafunctions) that the designer can parameterize; it automatically creates the files to be included in the project according to the chosen language. As a block diagram, using the block editor of Quartus II.
The block diagram may include library blocks and logic gates entered and parameterized with the MegaWizard Plug-In Manager, as well as user blocks created with the symbol editor of Quartus II.

Defining requirements for the project and settings of Quartus II. Defining the project requirements and some software settings in advance allows the functions and features of both the software and the created design to be controlled, in order to increase its effectiveness. These assignments are handled by program components of Quartus II and refer, among other things, to the design files, the device used and the timing requirements. Conditions for design optimization with respect to the resources of the selected chip, power consumption, time intervals, maximum frequency and compilation time can also be defined.

These paths are, for example, the connections between the logic cells in the device carrying reference signals, data, etc. We can specify constraints and assignments that help the design meet timing requirements; if we do, the Fitter optimizes the placement of logic in the device to meet those constraints. Timing analysis then calculates the time the signals need to reach their destinations; it can also calculate signal transitions.

Design simulation. During simulation the logic operations and timing relations of the design are tested and tuned. First a file with input stimuli for the design input pins is created. Depending on the information needed, a functional or a timing simulation can be run to test the logical operation and the worst-case timing of the current design. The simulation results can be inspected visually. If the design needs correction, this is done at the design-entry stage, after which compilation and simulation are repeated.

Device programming. During programming, the files produced by the compiler are loaded into the device and it is configured. First, however, the design pins are assigned to the physical device pins; the design is compiled again and the device is programmed.

Fig. 1. Design flow with Quartus II

Design compilation. Several consecutive processes take place at the compilation stage: analysis and synthesis, place and route, assembling and timing analysis. During each of these stages the design is checked for correctness. The stage is iterative: we can return to a previous step if necessary (if there are errors) until a properly operating design is obtained. During Analysis and Synthesis the design database is created.
Analysis & Synthesis performs logic synthesis to minimize the logic of the design and technology mapping to implement the design logic using device resources such as logic elements. It groups register and combinational resources into individual logic-cell-sized units in order to use resources efficiently. It examines the logical completeness and consistency of the project, and checks for boundary connectivity and syntax errors. It also optimizes the design, for instance by making choices that minimize the number of resources used, such as employing functions optimized for Altera devices. During Place and Route the defined timing and logic requirements are matched to the resources of the selected device: the most suitable placement of the logic functions in the device logic cells is found, and the most suitable interconnections and pin assignments are selected. Assembling completes the design processing, producing the files for programming the device and information on the consumed power. Timing Analysis is a method of analyzing, debugging and validating the performance of a design: it measures the delay along the various timing paths and verifies the performance and operation of the design.

III. ARCHITECTURE OF A DDS SINE WAVE FREQUENCY SYNTHESIZER

An architecture of a frequency synthesizer used for creating a frequency grid, based on a study of many references, is shown in [3]. The presented block diagram can be used for producing signals of arbitrary form; in the current design it is used to implement a sine wave frequency synthesizer (Fig. 2).

Fig. 2. DDS frequency synthesizer: digital part (reference clock, phase increment register PIR, adder and phase register PhR, LUT) and mixed digital-analogue part (DAC and LPF) producing fout

Briefly, the DDS synthesizer operates in the following way: the digital equivalent N of the produced frequency is loaded into the phase increment register (PIR). That value is continuously added to the value accumulated in the adder.
The most significant k bits of the result address the Look-Up Table (LUT). In our case the LUT holds a set of values defining the form of the sinusoid. The values read from the table are passed to the Digital-to-Analogue Converter (DAC) to obtain an analogue signal, and then to a low-pass filter (LPF) that rejects the unwanted spectral components of the signal and smooths it out.
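The accumulate-and-look-up loop just described can be sketched in a few lines of Python. The parameter values below (32-bit accumulator, 6-bit LUT address, 4-bit samples, 50 MHz clock) are illustrative assumptions for the sketch, not a definitive model of the authors' design:

```python
import math

def dds_samples(pir, n_bits=32, k_bits=6, m_bits=4, count=8):
    """Model of the DDS loop: an n-bit phase accumulator whose
    top k bits address a sine look-up table of m-bit codes."""
    lut_size = 1 << k_bits
    offset = (1 << (m_bits - 1)) - 1           # shift the sine into positive codes
    lut = [round(offset * math.sin(2 * math.pi * i / lut_size)) + offset
           for i in range(lut_size)]
    acc, mask = 0, (1 << n_bits) - 1
    out = []
    for _ in range(count):
        acc = (acc + pir) & mask               # adder feeding the phase register
        out.append(lut[acc >> (n_bits - k_bits)])  # top k bits address the LUT
    return out

# The output frequency follows f_out = PIR * f_clk / 2^n.
f_clk = 50e6                                    # illustrative reference clock
print(f_clk * (1 << 26) / 2**32)                # -> 781250.0 (Hz)
print(dds_samples(1 << 26))
```

With PIR = 2^26 the accumulator steps through every LUT address in turn, so the samples climb from the mid-scale code toward the positive peak.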

Fig. 3. Functional circuit of the digital part of the sine wave frequency synthesizer (input register N, lpm_add_sub adder, lpm_dff phase register and lpm_rom look-up table, with the most significant ROM output bits driving the DAC pins)

IV. DESIGNING A SINE WAVE FREQUENCY SYNTHESIZER WITH FPGA

Creating the project of the sine wave frequency synthesizer. Fig. 3 shows the functional circuit of the digital part of the sine wave frequency synthesizer, implemented in Altera's FPGA Cyclone EP1C6Q240C8 [2]. The registers and the adder are chosen to be 32 bits wide (n = 32). The TREX C1 development system of Terasic Technologies Inc. [6] has been used for designing and examining the frequency synthesizer. Its resources include a 50 MHz clock generator, the Cyclone EP1C6Q240C8 FPGA and three DACs. For the present implementation of the sine wave synthesizer two variants have been examined: with a four-bit DAC (shown in Fig. 3) and with an eight-bit DAC. A standard three-element Pi-type LC filter has been used.

Creating and filling in the LUT table. The output signal Y_ROM passed to the input of the DAC can be expressed by Eq. (1):

Y_ROM = {sin[pi*(N - 2^(k-1))/2^(k-1)]}*(-1)*(2^(m-1) - 1) + (2^(m-1) - 1)    (1)

where: k is the number of address inputs of the LUT; 2^k is the number of cells in the LUT; 2^(k-1) is the number of cells for the positive (negative) half of the sinusoid; N = 0 ... 2^k - 1 is the current number of the cell in the LUT, matching the current point of the sinusoid; m is the length of the cells in bits (the resolution of the DAC); M = 2^m is a level scaling factor; and 2^(m-1) - 1 is an offset of the sinusoid along the ordinate, applied in order to obtain positive values of the function. The values Y_ROM are in general mixed fractions and cannot be loaded into the LUT as they are; they must therefore be rounded, and that operation is a source of errors.
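Eq. (1) and the rounding step can be reproduced with a short Python script. The .mif rendering below follows the general shape of a Quartus II memory-initialization file for the k = 6, m = 4 case; it is a sketch, not the authors' exact file:

```python
import math

def lut_values(k=6, m=4):
    """Exact and rounded LUT contents per Eq. (1): a 2^k-entry sine
    table of m-bit unsigned codes, offset by 2^(m-1) - 1."""
    half = 1 << (k - 1)                    # 2^(k-1) cells per half-period
    offset = (1 << (m - 1)) - 1
    exact = [math.sin(math.pi * (n - half) / half) * (-1) * offset + offset
             for n in range(2 * half)]
    return exact, [round(v) for v in exact]

def to_mif(values, width=4):
    """Render a memory-initialization-file body (general Quartus II .mif shape)."""
    head = [f"WIDTH={width};", f"DEPTH={len(values)};",
            "ADDRESS_RADIX=UNS;", "DATA_RADIX=UNS;", "CONTENT BEGIN"]
    body = [f"  {addr} : {val};" for addr, val in enumerate(values)]
    return "\n".join(head + body + ["END;"])

exact, rounded = lut_values()
print(min(rounded), max(rounded))          # -> 0 14 (full 4-bit swing minus one code)
```

The rounding error per cell never exceeds half a least-significant bit, which is the error source mentioned in the text.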
The limited number of address inputs k of the LUT and the length m of the cells determine the non-linearity of the output sinusoid. To increase the linearity of the DDS output, k and m must be relatively large numbers. On the other hand, to improve the spurious-free performance it is necessary to observe Eq. (2) [6]:

k = m + 2    (2)

In the case of the 4-bit DAC: m = 4, k = 6, N_max = 2^k = 64, M = 2^m = 16. The calculated values Y_ROM and the rounded values Y_ROM^r as a function of the current cell number N = 0-63 are shown in Fig. 4, and Table I includes the exact and the rounded values loaded into the MIF file in Quartus II.

Fig. 4. Output signal defined by the calculated (a) and rounded (b) amplitude values at N_max = 64

In the case of the 8-bit DAC: m = 8, k = 10, N_max = 1024, M = 256. The values loaded in the MIF file (Fig. 5) are too numerous to be shown in a table.

Fig. 5. Output signal defined by the calculated and rounded amplitude values at N_max = 1024

In Quartus II a new file is opened with File/New/Other Files/Memory Initialization File. We define the size of the MIF file, in our case 64 addresses with 4-bit cells. An empty MIF

TABLE I. CONTENTS OF THE LUT (exact values Y_ROM and rounded values Y_ROM^r for N = 0-63; table data not reproduced here)

file appears, in which we fill in the values calculated with Excel, and we save the file.

Simulation and experiments with the design. The digital part of the design has been simulated (Fig. 6), and the operation of the DDS synthesizer as a whole (including the DAC and LPF) has been studied experimentally, proving its proper operation.

Fig. 6. Output waveforms passed to the DAC

V. CONCLUSION

The contributions of the present work are the following: analysis of the design flow for creating devices and systems based on FPGA; design of the digital part of the sine wave frequency synthesizer; implementation of the synthesizer using the TREX C1 development system. The design is to be expanded as follows: examining the noise sources and reducing their influence [4]; producing signals with various modulations: FSK, PSK, I-Q, etc.

REFERENCES
[1] A Technical Tutorial on Digital Signal Synthesis, Analog Devices, Inc., 1999.
[2] Cyclone Device Handbook, vol. 1, Altera Corp., 2005.
[3] H. Karailiev and V. Rankovska, "DDS Method for Generating a Frequency Grid at Systems for Test Control and Automated Regulation", ICEST 2006, Conference Proceedings, Sofia, Bulgaria, 2006.
[4] H. Karailiev and V. Rankovska, "Error Sources at Direct Digital Synthesis Signals", ICEST 2007, Conference Proceedings, Ohrid, Macedonia, 2007 (forthcoming).
[5] Quartus II Version 5.1 Handbook, Vol. 1: Design & Synthesis, Altera Corp., 2005.
[6] TREX C1 Development Kit Getting Started User Guide, Terasic Technologies Inc., 2005.
[7] V. Rankovska, "FPGA Families, Features, Resources, and Devices and Systems Design Technology", Unitech 2006, Conference Proceedings, Sofia, Bulgaria, 2006 (in Bulgarian).

Negative Impedance Converter Improves Capacitance Converter

Ventseslav D. Draganov, Zlatko D. Stanchev and Ilya T. Tanchev

Abstract - The problem of increasing the relative frequency-output sensitivity of a capacitance converter and decreasing the influence of the parasitic capacitances is solved by connecting a Negative Impedance Converter (NIC) into the converter. An equation for the equivalent capacitance of the NIC as a function of its circuit element values is derived in this paper. The results are confirmed by software simulation, taking into account the influence of the parasitic capacitances as well as of the input capacitance of the op amp in the NIC circuit.

Keywords - Negative Impedance Converter

I. INTRODUCTION

When developing capacitance converters for measuring non-electrical quantities, a necessity arises to design converter circuits registering very small capacitances with reduced influence of the parasitic capacitances. A capacitance-to-DC-voltage converter consisting of two converters (capacitance to time interval and time interval to DC voltage) has been developed; it is capable of compensating, to a certain extent, the parasitic capacitance of the connected primary capacitance converter [1]. This solution provides comparatively low sensitivity (a few pF). A new solution is suggested here, enabling the relative frequency-output sensitivity of the capacitance converter to be increased when comparatively large parasitic capacitances have to be compensated, by connecting a Negative Impedance Converter (NIC).

The use of special schemes for improving the sensitivity of capacitance converters by decreasing their actual capacitance C is known. Known scheme applications include: connecting an NIC into the capacitance converter, which reduces the capacitance C of the converter to a final value C - C_eq [2];

Ventseslav D. Draganov is with the Faculty of Electronics at the Technical University of Varna, Studentska Str., Varna, Bulgaria.
Zlatko D. Stanchev is with the Faculty of Electronics at the Technical University of Varna, Studentska Str., Varna, Bulgaria. Ilya T. Tanchev is with the Faculty of Electronics at the Technical University of Varna, Studentska Str., Varna, Bulgaria.

- improving a capacitive-sensor circuit with a modulator and an RF transmitter by modifying the modulator portion, adding a negatron circuit (a configuration that presents an equivalent negative capacitance) [3]. The final result in both cases is a higher relative frequency-output sensitivity df/f. The study of these circuit solutions has so far been carried out without taking into account the influence of the parasitic capacitances or the input capacitance of the op amp used. The negative impedance converter (NIC) is used to realize negative driving-point impedances. Negative capacitances are applied in gyrator-active C filters [4]. Two floating NIC circuits (FNICs) have been suggested which can be used to simulate floating negative elements; the input impedance of the FNIC is defined by its chain matrix. An application of the Floating Negative-Impedance Converter (FNIC) to the design of a bidirectional constant-resistance amplifier is given in [5].

II. EXPOSITION

A. Negative Impedance Converter (NIC)

The Negative Impedance Converter (NIC) is a four-pole (Fig. 1), with input voltage U1 and current I1, output voltage U2 and current I2, and load Z_L, for which the following dependences are valid [6]:

Z_i = U1/I1 = -k*Z_L    (1)

Z_L = U2/I2    (2)

U1 = U2,  I2 = -k*I1    (3)

Fig. 1. Negative Impedance Converter (NIC)

There are two separate boundary cases: the Current Negative Impedance Converter (CNIC) and the Voltage Negative Impedance Converter (VNIC). If the conversion is performed with respect to the voltages, a VNIC scheme is obtained (Fig. 2).

B. Capacitance converter with the VNIC connected into the measured capacitance

To decrease the influence of the initial capacitance of the primary capacitance converter and of the parasitic capacitances of its connecting conductors, with the goal of increasing the relative frequency-output sensitivity of the capacitance converter built around a relaxation generator, a VNIC is connected to the latter (Fig. 3).

Fig. 2. Voltage Negative Impedance Converter (VNIC)

Assuming ideal elements, the following dependences are valid:

Z_Cn = X_Cn = 1/(omega*C_n)    (4)

Z_Ci = X_Ci = 1/(omega*C_i)    (5)

U1 = U2 = U = U_S * X_Cn/(X_Cn + R1)    (6)

U = U_S + R2*I1    (7)

From (6) it follows:

U_S = U*(X_Cn + R1)/X_Cn    (8)

After substituting (8) in (7) it is obtained:

U = U*(X_Cn + R1)/X_Cn + R2*I1    (9)

After transforming (9) it is obtained:

U = -R2*I1*X_Cn/R1    (10)

whence it is determined:

Z_i = U/I1 = -(R2/R1)*X_Cn    (11)

From (5) and (11) the input capacitance of the VNIC is defined:

C_i = -(R1/R2)*C_n    (12)

The Voltage Negative Impedance Converter (VNIC) enables schemes of negative capacitance to be synthesized; stability must be taken into consideration, since a negative capacitance cannot exist on its own, only connected to other elements. The choice of the element values in the negative-impedance realization is usually based on the following general design consideration:

R_0 << (R1, R2, X_Cn) << R_id    (13)

where R_0 is the output resistance and R_id the input differential resistance of the operational amplifier used [2]. The maximum useful frequency can be increased by making R1 = R2 [2].

Fig. 3. Capacitance converter with the VNIC connected into the measured capacitance

The capacitance converter consists of a relaxation generator realized by the operational amplifier DA1, the resistors R3, R4, R5 and the capacitance C_x of the primary capacitance converter. In parallel with the primary capacitance converter the VNIC is connected, realized by the operational amplifier DA2, the capacitor C_n and the resistors R1 and R2.
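Eq. (12) can be checked numerically with a small script. The component values here are illustrative assumptions, not taken from the paper:

```python
import math

def vnic_input_capacitance(r1, r2, c_n):
    """Equivalent input capacitance of the ideal VNIC, Eq. (12): C_i = -(R1/R2)*C_n."""
    return -(r1 / r2) * c_n

def capacitive_reactance(c, f):
    """X_C = 1/(omega*C); the sign follows the sign of C."""
    return 1.0 / (2 * math.pi * f * c)

# Illustrative values; equal resistors maximise the useful frequency (see text).
r1 = r2 = 15e3
c_n = 10e-12                                   # 10 pF in the VNIC branch
c_i = vnic_input_capacitance(r1, r2, c_n)
print(c_i)                                     # -> -1e-11, i.e. -10 pF

# Cross-check Eq. (11): Z_i = -(R2/R1)*X_Cn equals the reactance of C_i.
f = 100e3
assert math.isclose(-(r2 / r1) * capacitive_reactance(c_n, f),
                    capacitive_reactance(c_i, f))
```

The assertion confirms that the impedance form (11) and the capacitance form (12) describe the same element.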
The frequency of the generated oscillations is defined by the expression [7]:

f = 1/T = 1/(N*R3*C_eq)    (14)

where:

C_eq = C_x + C_i    (15)

and the coefficient N is defined by the values of the resistors R4 and R5 as well as by the supply voltage. After substituting (12) and (15) in (14), the following relationship is obtained:

T = N*R3*(C_x - (R1/R2)*C_n)    (16)

The period of the generated signal decreases proportionally to the values of the capacitance C_n and the resistor R1, and inversely proportionally to the value of the resistor R2 in the VNIC circuitry.
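A quick sweep of Eq. (16) shows the period falling linearly with C_n while C_eq stays positive. N, R3 and the capacitances below are illustrative assumptions, not the paper's values:

```python
def period(n_coef, r3, c_x, c_n, r1, r2):
    """Relaxation-oscillator period with the VNIC connected, Eq. (16):
    T = N * R3 * (C_x - (R1/R2) * C_n)."""
    return n_coef * r3 * (c_x - (r1 / r2) * c_n)

# Illustrative values; the model holds only while C_eq = C_x - (R1/R2)*C_n > 0,
# i.e. while C_n < C_x for R1 = R2 (cf. the experimental section).
N, R3, R1, R2, CX = 1.4, 10e3, 15e3, 15e3, 50e-12
for c_n in (0, 10e-12, 20e-12, 30e-12, 40e-12):
    print(c_n, period(N, R3, CX, c_n, R1, R2))   # period shrinks as C_n grows
```

At C_n = C_x the predicted period reaches zero, which is exactly the boundary beyond which the simulations in the next section show the simple model breaking down.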

206 Ventseslav D. Draganov, Zlatko D. Stanchev and Ilya T. Tanchev EXPERIMENTAL RESEARCH The capacitance converter, with VNIC connected into the measured capacitance, is studied by simulation with the program product Electronics Workbench 5.. The influence of the value variations of the elements of VNIC, connected to the converter on the period of the generated signal Т, is studied by reading the influence of the input capacitance of the op amp in NIC s circuit as well as of the parasitic capacitances.. The dependence of the period of generated oscillations from the capacitance of the C n capacitor in VNIC s circuit. The dependence is defined by reading also the influence of input capacitance C in on op amp DA in VNIC s circuit. For this purpose, parasitic capacitance C p is connected to input terminal on DA in parallel, which is a real parasitic capacitance with values from to 5pF; when C p = pf only the input capacitance C n exercises influence on the op amp, which is smaller than the capacity C x. The experimental results are shown graphically in fig.4, where value of the resistors R = R = 5 kω and the primary capacitance converter С х = 5 pf. T [μs] T = ϕ ( C n ) C n [pf] 3 Cp = pf Cp = pf Cp = 3 pf Cp = 5 pf Fig.4. Dependency T = φ (C n ), with C p = 5 pf Conclusion: The dependence T = φ (C n ) corresponds to this formula from (5) to a considerable degree when the value of the capacitance C n < C x. In order to receive an explanation for the radical change of the character of this dependence when the values C n > C x the dependence of the current through the parasitic capacitance C p (fig.5) and the voltage between the input terminals of DA (fig.6) on the capacitance C n variation. 
When C_n > C_x the current through the input capacitance increases, the voltage between the input terminals of DA2 also increases, and the circuitry with DA2 becomes unstable (it no longer behaves as a VNIC): the capacitances C_in and C_p are effectively connected in parallel to C_x, which in turn increases the total capacitance defining the generator frequency.

Fig. 5. Dependence I = phi(C_n), with C_x = 50 pF and C_p from 0 to 50 pF

Fig. 6. Dependence U = phi(C_n), with C_x = 50 pF and C_p from 0 to 50 pF

The conclusions drawn are also confirmed for other values of C_x (Fig. 7).

Fig. 7. Dependence T = phi(C_n), for a larger C_x and C_p from 0 to 50 pF

2. Dependence of the period of the generated oscillations on the resistors R1 and R2 in the VNIC circuit. The dependences T = phi(R1/R2), for varying ratio k = R1/R2, with C_x = 50 pF and C_p = 0 pF (only the capacitance C_in has an effect), are shown graphically in Fig. 8.

Fig. 8. Dependence T = phi(k, C_n), for k = 0.8, 1.0 and 1.2

Conclusion: The results obtained from the tests confirm the correctness of the dependence T = phi(C_n, R1, R2), expression (16), when the capacitance value C_n <= C_x.

3. Influence of the parasitic capacitances C_p2 and C_p3. In a real circuit the parasitic capacitances C_p2 and C_p3 (Fig. 3) are also present. The experimental results for the dependence of the period T of the generated signal on the variation of each of them are shown graphically in Fig. 9 and Fig. 10.

Fig. 9. Dependence T = phi(C_p2), for several values of C_n

Fig. 10. Dependence T = phi(C_p3), for several values of C_n

Fig. 11 depicts the dependence of the generated signal period T on the simultaneous change of the parasitic capacitances C_p2 and C_p3.

Fig. 11. Dependence T = phi(C_p2, C_p3), with C_x = 50 pF

Conclusion: When the capacitance C_n < C_x in the VNIC circuitry, the parasitic capacitances C_p2 and C_p3 have a noticeable effect only at values over about 50 pF, which are greater than the real ones.

III. CONCLUSION

By using the Voltage Negative Impedance Converter (VNIC), the relative frequency-output sensitivity of a capacitance converter, constructed on the basis of a relaxation generator with a primary capacitance converter connected to it, is increased. This is due to the decrease of the initial capacitance of the primary capacitance converter as well as to the lessened effect of the parasitic capacitance of the conductors connecting it. The study of the converter, carried out by simulating its operation in software, confirms the derived theoretical relationships for the effect of varying the VNIC capacitance within a certain range on the frequency of the generated oscillations.
The reasons for the invalidity of the results outside the range in which connecting the VNIC circuitry to the primary capacitance converter decreases its initial capacitance are explained by the results of the studies. When it is necessary to decrease the effect of the parasitic capacitances in measuring circuits, a VNIC can be used to increase their sensitivity.

REFERENCES
[1] V. Draganov, I. Tanchev, I. Atanasov, S. Georgieva, "Results from the Studies of a Device for Measuring Small Capacitances" (in Bulgarian), Elektronika 2006, Sofia, 2006.
[2] A. Belousov, "Negative impedance improves capacitive sensors".
[3] B. Travis, "Improved frequency modulator uses negatron".
[4] K. Soundararajan, K. Ramakrishna, "Nonideal Negative Resistors and Capacitors Using an Operational Amplifier", IEEE Transactions on Circuit Theory, September 1975.
[5] A. Antoniou, "Floating Negative-Impedance Converters", IEEE Transactions on Circuit Theory, March 1972.
[6] J. Auvray, Electronique des signaux analogiques, Bordas, Paris, 1980.
[7] R. F. Coughlin, F. F. Driscoll, Operational Amplifiers and Linear Integrated Circuits, Prentice Hall, Inc., Englewood Cliffs.

Fuel Cells and Fuel Cell Power Supply Systems - an Overview

Zvonko S. Mladenovski, Goce L. Arsov and Josif Kosev

Abstract - In this paper a general review of the fuel cell as an alternative power supply is given. The fuel cell is an electrochemical power source in which the internal-combustion phase of the fuel is omitted, and the overall efficiency is two to three times higher compared to conventional power supplies. The fuel cell operation principle, types, advantages and disadvantages are described. Finally, a power conditioning subsystem with its components is analysed. Wide support is today given to the R&D of fuel cells, their implementation and economic payoff. The most common application areas of fuel cells can be classified in five main groups, as shown in Fig. 1 [6].

Keywords - Electrochemical power source, fuel cell, alternative power supply, fuel cell power supply systems.

I. INTRODUCTION

At the beginning of the 21st century, fuel cells meet the power needs of a variety of applications. The fuel cell (FC) is an electrochemical device that converts chemical energy into electrical and thermal energy through a direct conversion process. A FC system is composed of six basic subsystems: a fuel cell stack, a fuel processor, air, water and thermal management, and a power conditioning system (PCS). The overall system promises a number of advantages, such as diversity of fuels (natural gas, methanol, etc.), high efficiency at full and part load, compatibility with a wide range of sizes, and independence from environmental pollution [1]-[3]. Fuel cells as an energy source have been present since 1839, when they were discovered and developed by the Welsh scientist William Grove. But from then on, for more than a century, they remained no more than a laboratory curiosity [4]. About 120 years after the fuel cells emerged, NASA demonstrated some of their potential applications in space flight exploration.
Consequently, industry started recognizing the commercial aspects of the fuel cells, which, due to the technological barriers and their high production costs, were not economically profitable at that stage of the technology [5]. However, since the mid-1980s, the Office of Transportation Technologies at the U.S. Department of Energy has supported fuel cell technology, which has aroused the interest of companies worldwide.

Zvonko S. Mladenovski is with COSMOFON - Mobile Telecommunications Services A.D., Skopje, Macedonia. Goce L. Arsov is with the Faculty of Electrical Engineering and Information Technologies, Karpos II b.b., P.O. Box 574, Skopje, Macedonia. Josif Kosev is with the Faculty of Electrical Engineering and Information Technologies, Karpos II b.b., P.O. Box 574, Skopje, Macedonia.

Fig. 1. Classification of fuel cell applications

II. DESCRIPTION AND OPERATIONAL PRINCIPLE OF THE FUEL CELLS

The fuel cell is a mini power source generating electrical energy without a combustion stage. The basic physical structure of the fuel cell consists of an electrolyte layer (membrane) in contact with two porous electrodes (anode and cathode), one on each side [7]. The porosity of the electrodes enhances the active electrode area hundreds or even thousands of times; this is very important because the electrochemical reactions take place on the electrode surface. A catalyst is incorporated in the electrode microstructure, e.g. platinum, nickel or their alloys, which accelerates the electrode electrochemical reactions [7]. The chemical energy is directly transformed into electrical energy and heat when the hydrogen fuel reacts with the oxygen from the air [4], [5]. Water is the sole byproduct of the reaction.
The basic electrochemical reactions are the following [8]:

Anode reaction: H2 -> 2H+ + 2e-    (1)

Cathode reaction: 1/2 O2 + 2H+ + 2e- -> H2O    (2)

Overall reaction: H2 + 1/2 O2 -> H2O    (3)

Looking at the previous equations, one can get the wrong impression that this process is very simple, but in fact the physical and chemical processes taking place at each electrode and in the membrane are rather complex. A schematic fuel cell representation with the flow directions of the fuel, reactant and ion current is given in Fig. 2 [1].

Fig. 2. Operational principle of the fuel cell

A single fuel cell at no load (e.g. a polymer electrolyte fuel cell, PEFC) in the ideal case generates a voltage of 1.16 V at a temperature of 80 C and a gas pressure of 1 bar. A loaded fuel cell at these operating conditions generates 0.7 V; thereby about 60% of the fuel energy is transformed into electrical energy [4]. The maximum emf, E, gained during the hydrogen-oxygen reaction (H2 + 1/2 O2 -> H2O) at the specified values of temperature and pressure can be determined by the following expression:

E = -dG/(nF)    (4)

where dG is the Gibbs free energy, n is the number of electrons participating in the reaction, and F is the Faraday constant. In order to use the fuel cell as a practical energy source, a number of single fuel cells have to be serially connected (stacked) to obtain a higher output voltage. When hydrogen is used as a fuel, no pollutants are produced by the reaction. Hydrogen fuel can be produced by electrolysis using renewable power sources such as solar, hydro, geothermal and wind energy, but hydrogen can also be extracted from hydrocarbons, e.g. petrol, naphtha, biomass, natural gas and LPG, methanol, ethanol, etc. The most common classification of fuel cells is according to the type of the electrolyte used [7]. There are thus five types of fuel cell, although basically the same electrochemical reaction takes place in all of them [7]:

Alkaline Fuel Cell (AFC): AFCs operating at about 250 C have an electrolyte of highly concentrated potassium hydroxide (KOH), while those operating at lower temperatures (below 120 C) use less concentrated KOH. The electrolyte is retained in an asbestos matrix. A wide spectrum of catalysts is used: Ni, Ag, etc. The fuel is limited to non-reactive constituents except for hydrogen.

Polymer Electrolyte Membrane Fuel Cell (PEMFC): The electrolyte in this fuel cell is an ionic membrane (a sulphonic acid polymer) which is an excellent ionic conductor.
Water is the only liquid in the PEMFC, and consequently the corrosion problems of the PEMFC elements are minimal. Water management is a key factor for efficient PEMFC operation: during operation the humidity of the membrane is critical, which limits the operating temperature of the PEMFC to the range of 60-100 C. The fuel is hydrogen-enriched gas with no CO present (CO is a fuel cell poison at low temperatures). Platinum is used as a catalyst.

Phosphoric Acid Fuel Cell (PAFC): Concentrated phosphoric acid (up to 100%) is used in the PAFC at operating temperatures in the range of 150-205 C. At lower temperatures phosphoric acid is a bad ionic conductor, and catalyst (Pt) poisoning by CO becomes extremely severe. The relative stability of phosphoric acid is high compared to other acids, which is why this acid is operative at high temperatures with a small water quantity, making water management easy. The electrolyte is held in a silicon carbide matrix, and the catalyst used is Pt.

Molten Carbonate Fuel Cell (MCFC): The MCFC electrolyte is a combination of alkali carbonates, e.g. of Na and K, placed in a ceramic matrix of LiAlO2. The operating temperature of the MCFC is in the range of 600 C to 700 C, at which the alkali carbonates form a highly ionically conductive molten salt. Ni (anode) and nickel oxide (cathode) are used to promote the reaction.

Solid Oxide Fuel Cell (SOFC): The membrane electrolyte is a solid nonporous metallic oxide, usually Y2O3-stabilized ZrO2. The operating temperature is in the range from 650 to 1000 C, where ionic conduction by oxygen ions takes place. Typically the anode is a Co-ZrO2 or Ni-ZrO2 cermet, and the cathode is Sr-doped LaMnO3.

The initial use of fuel cells was in NASA's space flights, for power generation and production of fresh water for the astronauts. Today fuel cells may be used in three categories of applications: transport, stationary and portable.
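Returning to Eq. (4), the ideal potential can be illustrated with standard thermodynamic data. For H2 + 1/2 O2 -> H2O(liquid) at 25 C, dG is about -237.1 kJ/mol and n = 2; these are textbook values, not taken from the paper:

```python
# Ideal fuel-cell emf from Eq. (4): E = -dG / (n*F),
# evaluated for H2 + 1/2 O2 -> H2O(liquid) at 25 C.
F = 96485.0        # Faraday constant, C/mol
dG = -237.1e3      # Gibbs free energy of the reaction, J/mol (textbook value)
n = 2              # electrons transferred per H2 molecule, cf. anode reaction (1)

E = -dG / (n * F)
print(round(E, 3))  # -> 1.229
```

The result, about 1.23 V, is the familiar ideal potential of the hydrogen-oxygen cell; the 1.16 V quoted earlier for a PEFC at 80 C is lower because dG decreases with temperature.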
The AFC was the first modern type of fuel cell, developed in the 1960s for the "Apollo" space program. The excellent performance of the AFC compared with the other types of fuel cells is due to the active O2 electrode kinetics and its flexibility to use a wide range of electrocatalysts. However, pure H2 has to be used as fuel, because CO2 in any reformed fuel reacts with the KOH electrolyte to form carbonate, thus reducing the electrolyte's ion mobility. Purification of the fuel is rather expensive, and because of that the use of the AFC is limited to space applications, where the fuel is pure hydrogen. In NASA's Space Shuttle, three 12 kW units have been used on 87 missions, with 65,000 hours of flight time.

PEMFCs are used in transport applications. Exceptionally interesting for this kind of application is the Direct Methanol Fuel Cell - DMFC. In this type of fuel cell methyl alcohol (methanol) is used directly as the fuel, so no reformer stage is needed [4]. PEMFCs generate electrical energy with high efficiency and high power density [7]. This type of fuel cell can also be used in small stationary applications, with power ratings of a few kW, for the generation of electrical power and heat in individual houses; this has been made possible by cost reductions in materials and manufacturing. Their main advantages are the low operating temperature (60-100 °C) and the solid electrolyte. Due to the low operational temperature, anode catalyst poisoning by CO is significant, especially at higher current densities; in this case the output voltage of the fuel cell becomes unstable and fluctuating. Also, because of the low operating temperature, expensive catalysts (platinum) have to be used to increase the speed of the electrochemical reactions.

Zvonko S. Mladenovski, Goce L. Arsov and Josif Kosev

The PAFC is for the time being the only commercialized type of fuel cell. It is a relatively simple, reliable and quiet power source with 60% efficiency (with cogeneration). Natural gas can be used as the fuel. Fuel cell generators of this type with a total power of more than 6 MW are installed worldwide. The power range of most of the power stations is between 50 and 200 kW, but plants in the range between 1 and 5 MW have also been constructed. The operational temperature is around 200 °C, and the power density reaches values of 300 mW/cm2. The PAFC anode is very sensitive to catalyst poisoning even when very small concentrations of contaminants (CO, COS and H2S) are present. A compromise has to be made between the demand for high power density and good operational performance throughout the life span of the PAFC. One of the primary targets for future PAFC development is the extension of the PAFC's life span to 40,000 hours.

The MCFC operates at around 650 °C. At this temperature many of the disadvantages of both the lower- and the higher-temperature cells can be alleviated: commonly available materials can be used for manufacturing the MCFC (the utilization of metal sheets reduces fabrication costs), while a nickel catalyst is used instead of expensive precious metals. The reforming process takes place within the cell, and CO is directly usable as a fuel. However, the electrolyte in the MCFC is very corrosive and mobile, and the higher temperature affects the mechanical stability and the lifetime of the MCFC materials. Energy Research Corporation (ERC) in the USA has tested a 2 MW power plant which has operated since 1996 in Santa Clara, CA.

The SOFC electrolyte is solid, and the cell can be made in tubular, planar or monolithic shape. The solid ceramic construction of the cell alleviates the hardware corrosion problems characteristic of the liquid-electrolyte cells and is impervious to gas cross-over from one electrode to the other.
The absence of a liquid also eliminates the problems of electrolyte migration and of flooding of the electrodes. The kinetics are fast, and CO is a directly usable fuel, as in the MCFC. Unlike in the MCFC, however, there is no requirement for CO2 at the cathode. The operational temperature is around 1000 °C, which means that the fuel can be reformed directly within the cell. The disadvantage of the high operational temperature is its influence on the cell's material properties, i.e. the thermal expansion mismatches of the different materials. Currently two plants (25 kW and 100 kW), produced by Siemens Westinghouse Power Corporation, are installed, and together they have a cumulative operating time of 95,000 hours. The eventual SOFC market is for large stationary fuel cell power supply systems (1 to 3 MW) using natural gas or coal as a fuel.

III. FUEL CELL POWER SUPPLY SYSTEMS

Although fuel cells generate power, a power supply system based on fuel cells is a very complex system, because besides pure hydrogen the fuel cell can operate on diverse conventional fuels while generating DC output power. Many components are incorporated in the fuel cell power system in order to enable processing of the fuel, coupling of the power supply system to the AC distribution network (power grid), and utilization of the generated heat in cogeneration to achieve high efficiency. In general, the power supply system consists of a fuel processor (reformer), the fuel cell itself, and the power conditioning subsystem. In the reformer, hydrogen is extracted from hydrocarbons by steam reforming, with CO and CO2 appearing as byproducts. Further treatment of the CO with steam under high pressure converts it into CO2. Fig. 3 shows a simplified diagram of a fuel cell power supply system [4]:

Fig. 3. Fuel cell power supply system

The quality of the electrical power generated by a fuel cell power supply system is evaluated according to the following three characteristics: efficiency, reliability and quality.
One of the main reasons for the utilization of the fuel cell as a power supply is its electrical efficiency (40-57%); with cogeneration the overall efficiency is even higher (80-85%) [9]. The measured electrical efficiency is the ratio of the generated electrical energy to the energy of the fuel used in the fuel cell power supply system. Losses in all the subsystems, and the interactions among them, influence the overall efficiency of the power supply system. In order to attain and maintain high efficiency, a high level of coordination among the subsystems is required. The fast response of the power conditioner plays the key role in maintaining high efficiency under step load variations.

Reliability is the second key factor in the development of fuel cell power supply systems. They do not require complex and frequent maintenance. As alternative power sources they can substitute the traditional power supply systems in applications such as: 1) remote sites, e.g. mobile telephony base stations, where service visits and the investment costs for power grid construction are rather expensive; 2) critical power supply systems, e.g. banks and internet servers, where power supply failures can be very expensive. The system's reliability, like its efficiency, is a function of the combined subsystems' reliability.

The quality of the generated electrical power is also a very important factor influencing the acceptance of fuel cell power supply systems on the market. High quality of the electrical power demands that the waveform of the generated voltage be close to an ideal sinusoid with constant frequency and the rated value of the AC voltage. This characteristic is essential for the proper operation of electrical and electronic devices.
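The two efficiency figures above are simple energy ratios. The following sketch uses invented kWh values (not from the paper), chosen only so that the results fall inside the quoted bands:

```python
# Electrical and overall (cogeneration) efficiency as energy ratios.
# The kWh figures used below are invented for illustration only.
def electrical_efficiency(e_elec, e_fuel):
    """Generated electrical energy over fuel energy used."""
    return e_elec / e_fuel

def overall_efficiency(e_elec, e_heat, e_fuel):
    """Cogeneration: electricity plus recovered heat over fuel energy."""
    return (e_elec + e_heat) / e_fuel

print(electrical_efficiency(45.0, 100.0))    # 0.45, within the 40-57% band
print(overall_efficiency(45.0, 38.0, 100.0)) # 0.83, within the 80-85% band
```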
The main role of the power conditioning subsystem is to process and control the DC power generated from multiple DC sources and to deliver high-quality, reliable AC power while maintaining the high efficiency of the power supply system. The AC output of the system should be able to interact and to be synchronized with other AC generators, including the power grid [11]. The power conditioning subsystem should comply with the new standard IEEE 1547, which refers to coupling to the power grid and to anti-islanding protection. Usually the power conditioner (Fig. 4 [8]) is composed of two coupled converters, a DC-DC converter and a DC-AC converter, together with a backup power supply: a battery or ultracapacitors. The DC-DC converter decouples and isolates the fuel cell from the DC-AC converter and steps up the low output DC voltage of the fuel cell (usually the output voltage of the series-connected fuel cells is 48 VDC, because connecting fuel cells to achieve high voltages, >400 VDC, is complicated with respect to the reliable functioning of the fuel cell stack [4]). The DC-AC converter generates a stable AC voltage of rated value. The battery serves as a backup during load transients, because the fuel cell dynamics are slower than the converter dynamics, and it also enables starting of the fuel cell power supply system.

Fig. 4. Power conditioner subsystem

The DC-DC converter is usually placed between the fuel cell stack and the DC-AC converter. Different topologies can be used for the DC-DC converter design: classical high-frequency hard-switching topologies and resonant topologies. Bridge, half-bridge and push-pull topologies may be chosen. The advantage of the resonant topologies over the hard-switching converters is the avoidance of switching losses, but the design and control complexity is greatly increased. Fig. 4 shows different topologies of DC-DC converters for possible use in the power conditioning subsystem [].

(a) (b) (c)
Fig. 4. a) Bridge, b) serial resonant, c) push-pull converter

The DC-AC converter should generate a stable AC output voltage of rated value. Standard high-frequency voltage-mode inverters, resonant inverters and inverters with a resonant DC link can be used. Some of the considered topologies are given in Fig. 5 [6]. The battery system that is used to prevent voltage transients during dynamic load variations considerably influences the topology of the power conditioning subsystem. Usually one of the following two methods of battery connection in the subsystem is applied: 1) the battery is connected on the low DC voltage side, or 2) on the high DC voltage side (400 VDC). Both methods have their advantages and drawbacks [6].

(a) (b)
Fig. 5.
a) Four-switch inverter, b) six-switch inverter

IV. CONCLUSION

The use of fuel cells as a power source in stationary and transport applications is a technology of the future. Highly developed countries, especially the USA, invest heavily in R&D of fuel cell technology, its production and implementation, and in power supply systems based on fuel cells. For this purpose the American Congress provided 7 million USD of fuel cell funding for the fiscal year, through the Departments of Energy and Transportation and the Environmental Protection Agency. One of the main goals is to decrease the production costs from 3,000 $/kW to 1,500 $/kW (for cogeneration units) in order to achieve wider commercialization of fuel cell systems [5]. Another goal is to decrease the production costs of the power conditioning subsystems to 40 $/kW [6]. Replacement of the conventional power sources by the new alternative technologies is inevitable in the near future for the following reasons: 1) depletion of the fossil fuel reserves, especially oil, and reduction of its use as a primary power source; 2) reduction of the greenhouse gases and air pollutants in transport; and 3) direct influence on the deceleration of global climate change. Application of fuel cell power systems at remote telecommunication and critical sites would directly reduce the maintenance costs and, in the near future, the investment costs for the power grid infrastructure, and would contribute indirectly on the global level with regard to the three factors mentioned above. Sometimes, instead of connecting several fuel cells in series to obtain a higher output voltage than one cell can give, it may be cheaper and less complex to build a low-voltage DC-DC converter and to convert the low voltage of a single cell to some higher level. Some possibilities using a switched-capacitor converter are analyzed in [12].
ACKNOWLEDGEMENT

This work has been supported by the Ministry of Education and Science of the Republic of Macedonia (project: 3-936/3-5).

REFERENCES

[1] G. Hoogers, Fuel Cell Technology Handbook, CRC Press, 2003.
[2] J. Larminie and A. Dicks, Fuel Cell Systems Explained, pp. 5-66, John Wiley and Sons, England.
[3] M. W. Ellis, M. R. V. Spakovsky, and D. J. Nelson, "Fuel Cell Systems: Efficient, Flexible Energy Conversion for the 21st Century", Proc. IEEE, vol. 89, no. 12, pp. 1808-1818, Dec. 2001.
[4] S. Thomas and M. Zalbowitz, Fuel Cells - Green Power, Los Alamos National Laboratory, New Mexico, 1999.
[5] Fuel Cells Fact Sheet, Environmental and Energy Study Institute, Washington DC, February.
[6] 2003 Fuel Cell Seminar Proceedings, Miami, November 2003.
[7] J. H. Hirschenhofer, D. B. Stauffer, R. R. Engleman, and M. G. Klett, Fuel Cell Handbook, Fourth Edition, US Department of Energy, Nov. 1998.
[8] A. R. Balkin, Modelling a 500 W Polymer Electrolyte Membrane Fuel Cell, University of Technology, Sydney, June.
[9] H. Xu, L. Kong, and X. Wen, "Fuel Cell Power System and High Power DC-DC Converter", IEEE Transactions on Power Electronics, vol. 19, no. 5, September 2004.
[10] The Fuel Cell Power Converter - Power Management for Fuel Cell Systems, Sustainable Energy Technologies, White Paper, Jan.
[11] F. Blaabjerg, Z. Chen, and S. Baekhoej Kjaer, "Power Electronics as Efficient Interface in Dispersed Power Generation Systems", IEEE Transactions on Power Electronics, vol. 19, no. 5, September 2004.
[12] G. L. Arsov, J. Kosev, and R. Giral, "Low Voltage Switched Capacitor DC-DC Converter for Fuel-Cell Applications - Preliminary Design Considerations", Int. Conf. on Power Electronics Ee 2003, Paper No. T5-.4, pp. 1-5, Novi Sad, Serbia, 2003.

Developing and Using a Communication Driver for Serial Communication Between PCs and Industrial PLCs

Zoran M. Milić, Petar B. Nikolić, Dragana Krstić and Miljana Lj. Sokolović

Abstract - The basic principles of asynchronous serial communication between PLC controllers and PC-based applications are presented in this paper. In order to create an HMI interface between the operator and the controller at the machine, a Windows-based application for visualization and data acquisition was developed. This application uses a purpose-built driver for serial communication, which was developed and used for communication and data exchange between PCs and the industrial controllers working at machines on the plant floor. The basic principle of generating communication packets and examples of writing data into the PLC's memory registers are given. This driver is also used in in-house SCADA programming for data exchange between the SCADA application and the controllers on the industrial network.

Keywords - PLC, industrial network, network layer.

I. INTRODUCTION

For the visualization of the interface between the operator and the machine, different hardware solutions can be used, from industrial panels, intended only for this purpose and tied to a particular PLC type, to industrial and non-industrial PCs with a proper operating system and an appropriate user application program installed. For the realization it is suitable to use a common personal computer, since it enables the enlargement of the system and its integration into the wider information system. The application-specific industrial panels have no multi-tasking ability. They need additional communication with PCs or other controllers in the control system when complex calculations are needed. Based on those calculations, the data in the registers of the controllers are changed and the appropriate outputs are activated.
These panels have neither additional ports nor the ability to install the additional cards (PCI, PCI Express) that come as a part of the sensor and actuator kits. When the industrial panels are connected to a larger industrial network and one of the devices blocks its message transmission, this can break the communication of the entire network.

Zoran M. Milić is with Tigar MH, Pirot, Serbia. Petar B. Nikolić is with Tigar MH, Pirot, Serbia. Dragana S. Krstić is with the Faculty of Electronic Engineering, University of Niš, Serbia, e-mail: dragana@. Miljana Lj. Sokolović is with the Faculty of Electronic Engineering, University of Niš, Serbia.

For the communication between the PC and the controller, the communication applications of the controller's manufacturer can be used, or a specific communication driver can be developed. The communication software enables easy integration into the application for machine control or into the SCADA software, and this decreases the time needed for the realization of the entire system. Some of these communication packages are at the same time OPC servers, so they allow relatively easy integration of data-source devices from different manufacturers into unified visualization and acquisition systems. Writing one's own communication driver allows good controller management. It is possible to adjust the operating mode, i.e. to switch between the Program, Run and Test modes, which is very useful for application testing. The program upload and download to and from the controller can be managed directly, through the serial interface, or through the network, using the proper network module. In this way the change of the status registers, the control of the restart or switching of the controller, and the memory initialization can be achieved.
Direct changing of the parameters of the communication link (for example, baud rate, parity bit, stop bits, hardware/software flow control, BCC/CRC) is possible, and this is an advantage for the testing of the entire system. HMI (Human Machine Interface) applications which communicate with the controller directly over the driver do so in real time. This solution has better diagnostic abilities, enables an acceptable real-time response of the machine and gives a larger independence, but it also increases the design and development effort for the entire system. Besides the time needed for the design, an important factor in the choice of the solution is the price. In the case of SCADA application design, the prices of both systems are comparable, since the number of acquisition places, and thus of places that need the communication software, is relatively small compared with the number of PLCs in the system. With HMI applications the situation is different: the number of places that need the communication software is equal to the number of PLC controllers, since each machine must have a PLC and a particular interface toward the operator which communicates with that PLC. In this case the price of developing one's own driver is much lower than the price of buying a ready-made solution.
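One of the link parameters mentioned above is the BCC/CRC error check. The following is a rough, hypothetical sketch of how a driver can frame a packet with a BCC, using DLE-stuffed STX/ETX delimiters in the style of the Allen-Bradley full-duplex protocol; the details should be verified against the protocol manual before use:

```python
# Hypothetical link-layer framing sketch: DLE STX <data> DLE ETX BCC,
# with any 0x10 (DLE) byte inside the data doubled ("DLE stuffing").
# The BCC is the two's complement of the 8-bit sum of the data bytes,
# so that (sum of data + BCC) mod 256 == 0 at the receiver.
DLE, STX, ETX = 0x10, 0x02, 0x03

def bcc(data: bytes) -> int:
    """Two's-complement block check character over the raw data."""
    return (-sum(data)) & 0xFF

def frame(data: bytes) -> bytes:
    stuffed = data.replace(b'\x10', b'\x10\x10')  # escape embedded DLEs
    return bytes([DLE, STX]) + stuffed + bytes([DLE, ETX, bcc(data)])

print(frame(b'\x01\x00\x0f\x00').hex())  # 100201000f001003f0
```

Note that the BCC is computed over the raw data, while the stuffing doubles only the bytes actually placed on the wire.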

II. NETWORK LAYERS OF THE COMMUNICATION MODEL

The network architecture of the DF1 industrial network has a physical layer, a data link layer, a network layer and an application layer [1]. In the case of serial RS-232/RS-422 communication, the physical layer consists of the RS-232/RS-422 ports on the PC and the PLC, the RS-232/RS-422 cable that connects them, the voltage levels (zeros and ones), the number of data bits, the number of stop bits, the parity bit, the bit rate, and the way of establishing and breaking the connection after the PC and the PLC finish the data transfer [3]. The data link layer controls the correctness of the data transfer, and the protocol at this layer should provide the mechanism for the acknowledgement of correct data transmission and reception [2]. The network layer controls the transmission of packets to their destinations, i.e. it is responsible for establishing the connections between the nodes of the network [4]. The application layer, in the case of PLC and PC communication, contains the commands that are set and executed by the PC and PLC applications.

III. THE DF1 DATA LINK LAYER

DF1 is Allen-Bradley's protocol for the data link layer, based on the ANSI X3.28 specification. The basic principle of the DF1 protocol will be explained on the example of full-duplex data exchange [1].

Fig. 1. The data routes for simultaneous communication in both directions (full duplex)

In the full-duplex protocol (Fig. 1), the link uses two physically separated circuits for simultaneous data exchange. These circuits provide communication over four communication channels. In the first circuit the transmitter A sends messages to the receiver B (route 1) and the receiver A sends the returning control messages to the transmitter B (route 3). In the second circuit the transmitter B sends messages to the receiver A (route 4) and the receiver B sends the returning control messages to the transmitter A (route 2). All messages and symbols in each of these circuits are transmitted in one direction: from A to B in the first, and from B to A in the second. In order to implement the four logical routes in two physically separated circuits, a software multiplexer must be used. Its purpose is to combine the command messages (from the transmitter) with the returning messages (from the receiver) sent in the same direction. At the other end of the link, separator software separates the command messages from the returning reply messages. The separator software sends the command messages to the appropriate receiver and the returning messages to the corresponding transmitter. Although the command and the returning messages in the same circuit exist independently of one another, there is a certain relation between them. For example, a command message in the A-B circuit will be delayed if a returning message of the receiver A is inserted into the sequence of the command messages of the transmitter A. Each hardware problem that influences the command symbols in one circuit will also influence the returning symbols in the same circuit [1].

IV. GENERATION AND SEPARATION OF THE DATA FRAME IN THE FULL-DUPLEX PROTOCOL

The data frame in the full-duplex protocol has different forms depending on the observed network layer, since different parts of the message are generated at different network layers. Fig. 2 illustrates how the particular network layers influence the generation of the message frame [5]; the influence of the physical layer is not shown.

Fig. 2. The data frame, from the top to the bottom: application layer, network layer, data link layer

The command (for example, read or write) is generated at the application layer, together with the destination (the address of the PLC controller in the network) and the data related to that command (for example, in the case of reading, the specification of the first memory location and the size of the block to be read). The network layer is responsible for establishing the connection between the communicating nodes; the address from which the message is transmitted is added here (so that the returning message has the correct destination address), together with the transmission status field (which contains the error code for eventual transmission errors) and the identifier, which is unique for each message (so that it is known which returning message is the reply to a given command). The data link layer controls the correctness of the transmitted data: one more field is added for the beginning and one for the end of the message, as well as a field for error control.

In order to design the corresponding HMI solution, one needs to develop a driver which covers a wide spectrum of communication messages that enable: reading from and writing into the controller's registers, changing the content of the status registers, changing the parameters of the

communication link itself, and simultaneously covering a relatively large number of communicating devices. The driver is application-specific for asynchronous serial communication with the Allen-Bradley PLC-5 and SLC 500 families of controllers, as well as for the Allen-Bradley KF2 communication module [6]. The developed driver supports the following set of commands:

For the PLC-5 family of controllers and the KF2 module:
- Word Range Read - reads a block of words from the controller's memory
- Word Range Write - writes a block of words into the controller's memory
- Typed Read - reads a block of data from the controller's memory (this command is also supported by the SLC 5/03 and SLC 5/04 processors from the SLC 500 family)
- Typed Write - writes a block of data into the controller's memory (this command is also supported by the SLC 5/03 and SLC 5/04 processors from the SLC 500 family)
- Read-Modify-Write - bit write command
- Set Variables - adjusts the parameters of the serial link: the number of ENQ packets, the number of NAK packets and the timeout interval
- Set CPU Mode - changes the controller's operating mode: Test, Program and Run
- Diagnostic Status - reads the content of the controller's status registers

For the SLC 500 and MicroLogix families of controllers:
- Protected Typed Logical Read With Three Address Fields - reads a block of data from the controller's memory, starting from the given address
- Protected Typed Logical Write With Three Address Fields - writes a block of data into the controller's memory, starting from the given address
- Change Mode - changes the controller's operating mode: Test, Program and Run
- Diagnostic Status - reads the content of the controller's status registers

V. EXAMPLE

For the communication between the PCs and the PLCs in the systems that control machines in the tire industry, asynchronous serial communication is commonly used. The role of the interface between the controller and the operator is given to applications installed on an industrial PC. The reason for this is the additional processing power that the system needs for processing the parameters obtained from the smart sensors, a task that most PLCs cannot handle, or for which the price of a PLC-based implementation would be too high. The communication during one working cycle of the different machines is performed in a similar manner. The controller sends a request for the recipe data to the user application over the asynchronous serial link. The data are read from the particular sensors, and according to their content the application reads some parameters from the data bases and calculates the parameters of the recipe. According to these data, the machine's parameters are adjusted, and the corresponding machine's operating cycle can begin. During the cycle, several communication sessions between the PC and the PLC can take place. During one session, a certain amount of data needed for the machine control is successively written into, and read from, the registers of the controller, in order to obtain the data needed for additional analysis in the PC. After completing the cycle and the analysis performed by the user's application, the results of the processing are written into the database, and the signal for the initialization of the next cycle is sent.

The described HMI application is implemented using the Microsoft Visual Basic .NET developing tool [7]. The example shows the writing of four integer words into an Allen-Bradley PLC-5 processor over the serial link, starting from the N8:3 memory location. For this purpose the Word Range Write command was used. An entire communication session over the serial link is performed in a similar manner, where the packets contain all the necessary commands, addresses and data. During this procedure the following data exchange takes place (Fig. 3):
- The application sends the data for writing to the driver, i.e. it specifies the write command, all four data words and the destination address of the controller
- The driver generates the entire data frame for sending
- The transmitter of the driver sends the packet to the receiver of the PLC
- The receiver of the PLC receives the packet, forwards it for processing to the processor and sends the control message about the successful reception of the packet (DLE ACK) to the link
- After the write command is successfully completed, the PLC processor forwards the message about the successful writing to the transmitter; the transmitter of the controller sends the message to the link
- The receiver of the driver receives the message about the successful writing and forwards it to the application, and sends the control message about the successful data reception (DLE ACK) to the link

Fig. 3. Illustration of the packet exchange over the serial link

A. The driver

The driver was developed as a C#.NET Class Library. Since it represents an interface between the user's HMI application and the physical layer of the network, the following class properties were implemented:
- command - the command forwarded by the application

- address - the address of the controller in the network
- mem_address - the starting address in the controller's memory where the writing takes place
- packet_offset - the offset with regard to the given mem_address value
- total_trans - the total number of data words to be written into the controller's memory during the entire transaction
- sent_data - the data forwarded by the application
- error_check - can be CRC or BCC
- receive_data - the data the driver forwards to the application
- nak - the number of NAK packets
- enq - the number of ENQ packets
- time_out - the timeout interval
- mode - the operating mode
- comm_status - the communication status

Fig. 4. The example of the class code

The class also contains the method "communicate", which is used by the application for forwarding the request for establishing communication with the controller.

Fig. 5. Writing the dimension parameters

The procedure in which the class is used for forwarding the data between the HMI application and the controller is given next:
- A new object of the dh class is created in the application
- The appropriate class properties are set; depending on the command, some of the properties are not necessary
- In order to make the driver address the controller and send the command, the method "communicate" must be invoked
- Coding of the memory address, generation of the TNS value, calculation of the CRC or BCC field and assembly of the entire packet are performed within the class
- The class itself performs the communication toward the controller
- When the class extracts all the corresponding data, it sets the "receive_data" property
- After a certain amount of time the application reads the "receive_data" and "comm_status" properties
- If the property "comm_status" is set to "True", the property "receive_data" is, depending on the program logic, used further
- If the property "comm_status" is set to "False", the property "receive_data" is not used further, and, depending on the program logic, the command may or may not be repeated

VI. CONCLUSION

Developing a unique communication driver of one's own for the communication between PCs and PLCs in the design process of machine-control HMI interfaces enables better diagnostics, a good real-time response of the machine and a larger independence compared with the use of the communication applications offered by the equipment manufacturer. One's own solution and its integration into the HMI application increases the time needed for the design of the entire system, but at the same time the price of developing one's own driver is much lower than the price of buying a complete solution, and no additional license expenses are necessary for each application installed later. The paper describes the communication model that uses asynchronous serial communication, and one practical realization of writing the data obtained in a PC and needed for machine control into the controller's registers, and of reading the data needed for additional PC analysis or for updating the data shown on the screen.

REFERENCES

[1] DF1 Protocol and Command Set Reference Manual, Allen-Bradley, Milwaukee, USA, 1996.
[2] A. S. Tanenbaum, Computer Networks, London, Prentice Hall PTR.
[3] Data Highway or Data Highway Plus Asynchronous (RS-232-C or RS-422-A) Interface Module User Manual, Allen-Bradley, Milwaukee, USA, March 1989.
[4] A. Chiarella, Networks in Cisco and Microsoft Technology (in Serbian), Čačak, Computer Library, 2005.
[5] Data Highway Plus and DF1 Communication Protocols, Allen-Bradley, Milwaukee, USA, 2004.
[6] Allen-Bradley DF1 Serial Communication Interface API, DASTEC Corporation, 2003.
[7] M. Halvorson, Visual Basic .NET Step by Step, CET Computer Equipment and Trade.
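The usage procedure described in Section V.A can be mimicked in a few lines. The following mock is hypothetical: the property and method names follow the paper, but the internals are invented for illustration (the real driver is a C#.NET class communicating over a serial port):

```python
# Hypothetical mock of the described dh class; no serial I/O is
# performed -- communicate() only simulates a successful exchange.
class dh:
    def __init__(self):
        self.command = None       # command forwarded by the application
        self.address = 0          # controller address in the network
        self.mem_address = ''     # starting memory address, e.g. 'N8:3'
        self.sent_data = []       # data words to write
        self.receive_data = None  # filled in by communicate()
        self.comm_status = False  # True only after a successful exchange

    def communicate(self):
        # The real class assembles the packet (TNS, BCC/CRC, framing)
        # and performs the DLE ACK handshake; here we simulate success.
        self.receive_data = list(self.sent_data)
        self.comm_status = True

plc = dh()
plc.command = 'word_range_write'  # command name is illustrative
plc.address = 1                   # invented controller address
plc.mem_address = 'N8:3'          # memory location from the example
plc.sent_data = [100, 200, 300, 400]
plc.communicate()
if plc.comm_status:               # use receive_data only on success
    print(plc.receive_data)
```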

Spice Model of Magnetic Sensitive MOSFET

Nebojsa Jankovic, Tajana Pesic and Dragan Pantic

Abstract: A new model for a magnetic-sensitive split-drain MOSFET (MAGFET) consisting of only two NMOSTs in the equivalent sub-circuit is described in this paper. The model developed is based on the non-quasi-static (NQS) MOST model of a conventional NMOST, modified to include the effects of the Lorentz force. Based on the results of 3D numerical device simulations, it is shown that the new model can accurately predict the absolute and the relative MAGFET sensitivity for a wide range of device biasing conditions. Unlike previous models, the new MAGFET model can also predict the device dynamic response to time-varying magnetic fields more realistically.

Keywords: Magnetic, Sensor, SPICE, Model

I. INTRODUCTION

A magnetic sensor is a transducer which converts a magnetic field into an electric signal. Many integrated magnetic sensor circuits use a split-drain MOSFET (MAGFET) structure as a sensing device. The MAGFET is a long-channel MOSFET with a single gate and two symmetrical drains sharing the total channel current I_D [1]. An imbalance between the drain currents occurs due to the influence of the perpendicular magnetic field B_Z. In spite of its large offset, temperature drift and noise [2], the MAGFET remains a popular magnetic field sensing device due to its easy integration with other electronic signal conditioning blocks on silicon chips [2,3]. Hence, the ability to evaluate the performance of magnetic sensors built using MAGFETs prior to chip fabrication is essential for cost-effective development. For the accurate simulation of magnetic sensors, precise MAGFET electrical models are required that are suitable for implementation in circuit simulators such as SPICE. Until now, the MAGFET models employed in sensor simulations [2,3] were essentially identical to the SPICE Macro Model (SMM) [4].
In the SMM approach, the MAGFET operation is emulated by the parallel connection of two conventional NMOSTs with associated external current-controlled current sources (CCCS) operating in the opposite direction [4]. The CCCS serve to produce the drain current imbalance Δi_D, expressed as Δi_D = S·I_D·B_Z, where S is the relative magnetic sensitivity and I_D is the total MAGFET drain current. A split-drain MAGFET model based on the SMM approach has also been implemented recently in the VHDL-AMS language [5]. There are two main drawbacks of the SMM approach. Firstly, the magnetic sensitivity S of the MAGFET is included as an external model parameter, and its dependence on the device operating point, e.g. the gate and the drain voltages V_GS and V_DS, respectively, is usually included as a polynomial approximation of measured data. Secondly, since the SMM is a static model, the dynamic MAGFET behavior in the presence of fast-varying magnetic fields cannot be simulated. Both drawbacks effectively lower the accuracy of MAGFET modeling and have limited the application of the model. To overcome these deficiencies, the authors have developed a new MAGFET model that does not involve external CCCS elements. The equivalent sub-circuit consists of only two magnetic-sensitive NMOSTs whose electrical characteristics are simulated by a modified non-quasi-static (NQS) MOST model [6,7] that includes the effects of the Lorentz force. Three-dimensional (3D) numerical simulations of a MAGFET device were performed using ISE TCAD [8] to derive and evaluate the new model. The ability of the new model to predict the MAGFET dynamic response to time-varying magnetic fields is also presented.

II. 3D NUMERICAL SIMULATIONS

A split-drain MAGFET with L=5m, W=μm, t_ox=6nm gate oxide, and substrate doping N_D = 5 cm⁻³, is studied in this paper.
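The SMM current split described above, Δi_D = S·I_D·B_Z, can be sketched in a few lines (a minimal illustration; the numerical values are arbitrary):

```python
def smm_drain_currents(i_d_total, s_rel, bz):
    """Split the total MAGFET current into the two drain currents using the
    SMM relation delta_i = S * I_D * B_Z (S in 1/T, bz in tesla)."""
    delta = s_rel * i_d_total * bz
    return i_d_total / 2 + delta / 2, i_d_total / 2 - delta / 2


# Example: I_D = 100 uA, S = 4 %/T, B_Z = 0.5 T (illustrative values only).
i1, i2 = smm_drain_currents(100e-6, 0.04, 0.5)
# The imbalance i1 - i2 equals S*I_D*B_Z while the sum stays equal to I_D,
# mirroring the two opposing CCCS of the macro model.
```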
A concave MAGFET mask layout and a standard μm CMOS technology are adopted for the process simulation, yielding 45μm wide drain regions separated by a μm oxide gap. The internal potentials and carrier distributions of the MAGFET in the presence of the perpendicular magnetic field B_Z were then obtained using the 3D device simulator ISE DESSIS [8]. Fig. 1 shows the electric field distribution in the channel simulated for V_GS = 5V, V_DS = V and B_Z = mT, where B_Z was oriented in the z-axis direction. It can be seen that the electric field iso-lines are asymmetrical with respect to the (z,x)-plane at y = 0. This asymmetry is caused by the accumulation of electrons in the upper channel region due to the influence of the Lorentz force.

N. Jankovic, T. Pesic, and D. Pantic are with the Faculty of Electronic Engineering Nis, Aleksandra Medvedeva 14, 18000 Nis, Serbia.

Fig. 1. Electric field iso-lines in the MAGFET channel.

Let us define the steady-state excess concentration of electrons Δn (in units of cm⁻³) that is accumulated along the upper channel edge y = −5μm (Fig. 1) as:

Δn(x, B_Z) = n(x)|_B_Z − n(x)|_B_Z=0    (2)

where n(x)|_B_Z and n(x)|_B_Z=0 are the electron concentrations with and without the presence of the magnetic field B_Z, respectively. For constant V_GS and V_DS, it is assumed that the same amount of electrons has been deflected from the lower channel edge y = 5μm (Fig. 1). From the 3D device simulations, Δn(x, B_Z) was extracted at different points along the channel, e.g. x = 3μm, 6μm and 9μm, corresponding to L/4, L/2, and 3L/4, respectively. The variation of Δn(x, B_Z) with the magnetic field B_Z extracted for constant channel positions from the numerical simulations is shown in Fig. 2.

Fig. 2. Excess electron concentration Δn versus the magnetic field B_Z extracted for different channel points at y = −5μm, z = 0.

A general linear dependence of Δn(x, B_Z) on B_Z is obtained, as seen from Fig. 2, and the difference Δn(L, B_Z) − Δn(0, B_Z) is noted to be small even at very high B_Z. Neglecting the latter, we can define an approximate relationship:

Δn(B_Z) ≈ a·B_Z    (3)

where a is a constant, in units of cm⁻³/T, whose numerical value depends on the geometry and technology of the particular MAGFET. This empirical relationship (3) forms the basis of the development of the new MAGFET model and is explained in more detail in the following section.

III. THE MAGFET MODEL

The operation of a split-drain MAGFET is usually approximated with two identical NMOSTs operating in parallel. It is well known that the carrier transport through conventional MOSTs can be accurately modeled with an equivalent n-segment RC transmission line [6,7].
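Relation (3) reduces the extracted Δn data to a single slope a; a minimal sketch of that one-parameter fit (a least-squares line constrained through the origin; the sample values below are made up for illustration, not taken from the paper's simulations):

```python
def fit_slope_through_origin(bz, dn):
    """Least-squares slope a for dn ~ a*bz, line constrained through origin:
    a = sum(b*d) / sum(b*b)."""
    num = sum(b * d for b, d in zip(bz, dn))
    den = sum(b * b for b in bz)
    return num / den


# Hypothetical extracted points (B_Z in tesla, excess electrons in cm^-3).
bz_points = [0.5, 1.0, 1.5, 2.0]
dn_points = [0.5e13, 1.0e13, 1.5e13, 2.0e13]
a_fit = fit_slope_through_origin(bz_points, dn_points)
# For this synthetic (perfectly linear) data the fitted slope is 1e13 cm^-3/T.
```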
In the case of a MAGFET device, the channel transport has to be represented with two identical RC chains, as illustrated in Fig. 3. Depending on the sign (±) of the applied perpendicular magnetic field B_Z, the equivalent resistors R_k in one of the channel chains will simultaneously decrease or increase under the action of the Lorentz force, due to carrier accumulation or depletion, respectively.

Fig. 3. Split-drain MAGFET represented with the two magnetic-sensitive NMOSTs.

In the expressions underlying the NQS MOST model [6], it can be seen that the magnitude of R_k is inversely proportional to the square root of the substrate doping concentration N_beff (see Eq. (A4) in Ref. [7]). Hence, in order to include magnetic effects into the NQS MOST model [7], we will use the empirical relation (3), assuming that the magnetic field B_Z effectively modulates the parameter N_beff by adding or subtracting Δn(B_Z). Consequently, a new effective substrate doping variable N'_beff will appear instead of the N_beff parameter in the NQS MOST model [7]:

N'_beff = N_beff ± Δn(B_Z) = N_beff ± a·B_Z    (4)

where the + and − signs stand for the different directions of carrier deflection in one of the NMOST channels, as illustrated in Fig. 3. The relation (4) is the key modification to the NQS MOST model [7], and its efficiency in the accurate modeling of the MAGFET will be demonstrated in Section IV. The constant a appearing in (4) is a new fitting parameter for magnetic sensitivity used to calibrate the model. When B_Z = 0, the MAGFET model reverts to the original NQS MOST model [7]. Unlike the SMM approach [4], the relative sensitivity S in the new MAGFET model is calculated a posteriori from simulated electrical characteristics, much the same as it is extracted during experimental MAGFET measurements.

IV. MODELING RESULTS AND DISCUSSION

The new MAGFET model is implemented in SPICE in the form of a sub-circuit with two NMOSTs, as illustrated in Fig. 3.
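The doping modulation of Eq. (4) can be sketched numerically. All magnitudes below are illustrative placeholders (the real constant a is a calibrated fitting parameter); the resistance skew uses the R_k ∝ 1/√N_beff proportionality quoted above:

```python
import math

def n_beff_pair(n_beff, a, bz):
    """Effective substrate doping seen by the two NMOST channel chains,
    per Eq. (4): one channel gets +a*bz, the other -a*bz."""
    return n_beff + a * bz, n_beff - a * bz


n0 = 1e15   # cm^-3, made-up baseline effective doping
a = 1e13    # cm^-3/T, made-up sensitivity constant
n_up, n_down = n_beff_pair(n0, a, bz=0.5)

# R_k scales as 1/sqrt(N_beff), so the chain on the accumulation side
# becomes less resistive and carries more current: the drains split.
r_ratio = math.sqrt(n_down / n_up)

# With bz = 0 the modulation vanishes and the model reverts to the
# symmetric (original NQS) case.
n_a, n_b = n_beff_pair(n0, a, bz=0.0)
```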
The magnetic field is represented with a separate voltage generator sourcing a voltage equal in magnitude to B_Z. This voltage source drives a special magnetic node in the MAGFET sub-circuit that connects B_Z with the N'_beff variable of the modified NQS MOST model, following relation (4).

A. Steady-State Analysis

The new model was first calibrated to fit the electrical characteristics of a MAGFET obtained from the 3D device simulator ISE DESSIS for the case of B_Z = 0. Thus, Fig. 4 shows the modeled drain current imbalance Δi_D = I_D1 − I_D2

versus B_Z, together with the numerical results. The experimental data of R. R.-Torres et al. [10] are also included in Fig. 4 for reference. In addition, Fig. 5(a) and Fig. 5(b) show the dependences of the relative magnetic sensitivity S on the voltages V_GS and V_DS, respectively, calculated for B_Z = mT from the simulated electrical characteristics of the MAGFET. A good agreement is obtained between the modeling results and the numerical simulations for a wide range of MAGFET biasing conditions, as shown in Fig. 5.

Fig. 4. Comparisons of simulated, modeled and experimental MAGFET current imbalance Δi_D versus the magnetic field B_Z (V_GS = 4.95V, V_DS = V). The experimental data were taken from Ref. [10].

Fig. 5. Relative sensitivity S of the MAGFET versus: (a) the gate voltage V_GS and (b) the drain voltage V_DS, extracted from the 3D device numerical simulations and from the new model.

B. The MAGFET Dynamic Performance

Unfortunately, the present version of ISE DESSIS [8] cannot perform an electrical device simulation for the case of a time-varying magnetic field B_Z(t). In addition, and to the best of our knowledge, only one set of experimental data has been published in relation to the dynamic performance of split-drain MAGFETs under the influence of a pulsed magnetic field [9]. Consequently, we can only demonstrate here the advantages of the new MAGFET model over the SMM approach [4] in predicting MAGFET dynamic behavior. For a proper comparison, the NMOSTs of the SMM [4] are taken to be identical to the ones used in the sub-circuit of the new MAGFET model.
Also, in order to obtain the same maximal Δi_D response in both models, the relative sensitivity S found for given values of B_Z, V_GS and V_DS in the new MAGFET model simulations was subsequently used as the required input parameter of the SMM [4].

Fig. 6. Drain current imbalance Δi_D of the MAGFET simulated with the new model and with the SMM in the case of a pulsed B_Z signal with: (a) 3 ns rise/fall times and (b) μs rise/fall times.

Let us assume that the MAGFET is subjected to an extremely steep B_Z pulsed signal with 3 ns rise/fall times. Then, the simulated pulsed response Δi_D(t) is as shown in Fig. 6(a). As can be seen, the new MAGFET model yields transient peaks in the simulated response, whereas these peaks do not appear in the Δi_D(t) pulses obtained with the SMM [4]. Note that the Δi_D(t) peaks would be expected in reality due to

the transient charging of the channel distributed capacitance before reaching steady-state conditions. This conclusion can be indirectly confirmed from the results shown in Fig. 6(b). Namely, for substantially slower B_Z pulses, the transient peaks are small, becoming negligible if plotted on a long timescale. Hence, the device response Δi_D(t) from the new MAGFET model to the slow B_Z pulses with μs rise/fall times and the response of the SMM [4] will be in better agreement, as shown in Fig. 6(b).

Fig. 7. Drain current imbalance amplitude Δi_D of the MAGFET versus frequency f in the case of a sine-wave magnetic signal B_Z, simulated with the new model and with the SMM (V_GS = 5V, V_DS = V, B_Z(t) = 0.5·sin(2πft)).

Fig. 7 shows the simulated frequency characteristics of the Δi_D(t) response obtained for V_GS = 5V and V_DS = V from the new MAGFET model. For the purposes of the simulation, a sine-wave magnetic field of B_Z = 0.5·sin(2πft), in units of tesla, with variable frequency f is assumed. Fig. 7 clearly indicates the existence of a limiting frequency f_t at which the MAGFET sensitivity drops to zero. In contrast, the SMM [4] is not able to predict any frequency response limitations of the MAGFET sensitivity, as illustrated by the dashed line in Fig. 7. It is important to emphasize that the limited bandwidth of MAGFET sensitivity is a more natural simulation result, since f_t commonly appears in the sensitivity characteristics of other sensors in different signal domains []. A rather high f_t of around 7 GHz is predicted in Fig. 7 by the new model, most likely due to the assumption of an ideal MAGFET device. A much lower f_t would be expected due to the influence of the device geometry, noise, and offset [3], as well as the presence of parasitic RC elements in practical MAGFETs. These non-idealities are not included in the present model.
The model also predicts a slight increase of Δi_D(t) appearing at high frequencies of B_Z, as shown in Fig. 7. From the Fourier analysis, we found that the Δi_D(t) sine-wave response of the MAGFET in the case of a large B_Z swing is distorted at high frequencies by the appearance of additional harmonics, which slightly increases the overall output signal amplitude. Since, for small amplitudes of the B_Z signal, the Δi_D(t) overshoot is negligible, we can attribute this effect to the highly nonlinear model equations describing the RC elements. Whether the effect exists in practical device frequency characteristics or stems from model approximations can only be verified by experiments.

V. CONCLUSIONS

A new MAGFET model consisting of only two magnetic-sensitive NMOSTs in the equivalent sub-circuit is described in this paper. The new model is based on the non-quasi-static (NQS) MOST model of conventional NMOSTs, modified to include the effects of the Lorentz force. Based on 3D numerical device simulations, it is shown that the new model can accurately predict the absolute and relative MAGFET sensitivity for different biasing conditions of the device. It is also shown that, unlike the widely used SMM, the new MAGFET model is able to simulate the device dynamic response to time-varying magnetic fields far more realistically.

REFERENCES

[1] R. S. Popovic, Hall Effect Devices, Taylor & Francis, 2nd edition (2003).
[2] C. Rubio, S. Bota, J. G. Macias, J. Samitier, "Monolithic integrated magnetic sensor in a digital CMOS technology using a switched current interface system", Proc. IEEE Instrumentation and Measurement Technology Conference, pp.
[3] C. Rubio, S. Bota, J. G. Macias and J. Samitier, "Modelling, design and test of a monolithic integrated magnetic sensor in a digital CMOS technology using a switched current interface system", Journal of Analog Integrated Circuits and Signal Processing, Vol. 9, pp.
5-6.
[4] Shen-Iuan Liu, Jian-Fan Wei, Guo-Ming Sung, "SPICE macro model for MAGFET and its applications", IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing, Vol. 46, 1999.
[5] M. Zawieja, A. Napieralski, J. J. Charlot, "Application of VHDL-AMS language for simulation of magnetic sensors", Proc. TCSET, Lviv-Slawsko, Ukraine.
[6] T. Pesic, N. Jankovic, "Physical-based non-quasi-static MOSFET model for DC, AC and transient circuit analysis", Proc. 24th International Conference on Microelectronics, MIEL 2004, 2004.
[7] T. Pesic, N. Jankovic, "A compact non-quasi-static MOSFET model based on the equivalent non-linear transmission line", IEEE Trans. on Computer-Aided Design of Integrated Circuits and Systems, Vol. 24, 2005.
[8] ISE TCAD User's Manual, Release 7.0, Integrated System Engineering AG, Zurich, Switzerland.
[9] E. A. Gutierrez-D., E. Torres-R., R. Torres, "Magnetic sensing as signal integrity monitoring in integrated circuits", Proc. Solid-State Device Research Conference (ESSDERC 2005), Grenoble, France, 2005.
[10] R. R.-Torres, E. A. G.-D., R. Klima, S. Selberherr, "Analysis of split-drain MAGFETs", IEEE Trans. Electron Devices, Vol. 51, 2004.
[11] J. W. Nilsson and J. R. Evans, PSPICE Manual Using Orcad Release 9.2 for Introductory Circuits, Upper Saddle River, NJ: Prentice-Hall.
[12] S. Soloman, Sensors Handbook, McGraw-Hill Professional (1998).
[13] G.-M. Sung and S.-I. Liu, "Error correction of transformed rectangular model of concave and convex MAGFETs with AC bias", IEE Proc.-Circuits Devices Syst., Vol. 151, No. 6, 2004.

Reduced Data Sample Transmission Implementation to PIC Microcontroller

Mile I. Petkovski and Cvetko D. Mitrovski

Abstract — In this paper, an implementation of a simple adaptive sampling algorithm on a general-purpose microcontroller is presented. The algorithm is based on multi-resolution signal analysis using the Haar wavelet. As a target system, a Microchip PIC16F877A single-chip microcontroller is used. The algorithm for signal acquisition and adaptive sampling is deployed to the microcontroller's flash memory and tested. Possible applications are explored and studied.

Keywords — Discrete Wavelet Transform, Microcontrollers.

I. INTRODUCTION

The principal problems of signal analysis are analogue signal sampling and the reconstruction (or approximation) of the signals on the basis of their discrete samples. In conventional distributed autonomous measuring systems, the common practice is based on the Nyquist sampling theory. This means that the measured signals are converted to series of equally spaced samples, preprocessed in the distributed measuring system, and passed to the host computer via a serial channel for collecting, storing and further processing [1]. The available storage capacity and the energy consumption of the distributed measuring systems are in some cases the primary limiting factors [2], which motivates the development of new algorithms based on adaptive sampling [3], and/or on sampling and transmission of samples only at instances of time when the signals exhibit nonlinear changes of the slope. The latter means a reduction of the number of transmitted samples in the Nyquist sense that is still sufficient for satisfactory signal reconstruction. In this paper we propose a design of a system which transmits a subset of the regularly sampled signals, on the basis of which the original signal can be satisfactorily reconstructed. Section II presents the theoretical background and the basic idea for reduced data transmission.
Section III describes the target system for the algorithm implementation and Section IV graphically presents the experimental results.

II. THEORETICAL BACKGROUND

A. Wavelet Packets

Time localization at high frequencies can be enhanced by wavelet packet decomposition. Here the approximation as well as the detail coefficients are successively decomposed by Mallat's algorithm, creating a binary decomposition tree. Each leaf corresponds to a certain frequency band. In the orthogonal wavelet decomposition, the information lost between successive approximations is captured in the detail coefficients; successive details are never reanalyzed. In the case of wavelet packets, each detail coefficient vector is decomposed in two parts using the same approach as in approximation vector splitting (Fig. 1), which offers the richest analysis.

Fig. 1. Wavelet packet decomposition tree.

B. The Adaptive Signal Transmission

The idea of adaptive signal transmission is to transmit only the samples (of the regularly sampled signal) at the instances where the signal exhibits nonlinear changes. Hence the transmitted signal y = [y_0 y_1 ... y_{P-1}]^T is composed from the samples of the uniformly sampled signal x = [x_0 x_1 ... x_{N-1}]^T by omitting the samples whose indexes correspond to instances of linear change of the analogue signal. Although the size of y is less than the size of x (P < N), the analogue signal can be satisfactorily reconstructed by using a first-order hold circuit.

Mile I. Petkovski and Cvetko D. Mitrovski are with the Faculty of Technical Sciences, I. L. Ribar bb, 7000 Bitola, Macedonia.

Fig. 2. Adaptive sampling block diagram.

The basic idea is illustrated via the following example. A uniformly sampled signal composed of 32 samples is shown in Fig. 3a. Among the microcontroller peripherals used in this work are the 10-bit multi-channel Analog-to-Digital (A/D) converter and the Universal Synchronous Asynchronous Receiver Transmitter (USART). It is important to note that the smaller microcontrollers in the same family have the same instruction set. This can be an advantage contributing to the portability of the source code. The Analog-to-Digital (A/D) converter module of the PIC16F877A microcontroller has eight multiplexed inputs, out of which we use only one for the experiment. Fig. 4 shows the analog input model.

Fig. 3. a) Signal used for the algorithm explanation; b) detail coefficients of the DWT; c) absolute values of the detail coefficients of the DWT of the previously obtained details.

The detail coefficients obtained as a result of the discrete Haar wavelet transform of the original signal represent the difference between two successive samples multiplied by a factor of −1/2 (Fig. 3b). If the successive samples belong to a linear function, the detail coefficients will be constant. At the second step of the decomposition of the detail coefficients, two successive constant coefficients will generate zero-value coefficients, Fig. 3c. This leads to the idea that the decomposition of the detail coefficients could be used to determine the instances when the transmission of the samples should occur. Lower absolute values of the detail coefficients at the second stage of the discrete Haar wavelet transform of the detail coefficients correspond to a lower transmission rate of the signal, and vice versa.

Fig. 4. Analog input model.

The A/D conversion, performed by successive approximation of the analog input signal, produces a 10-bit digital number.
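The two-stage Haar test described above can be sketched as follows (a minimal Python illustration using the 1/2-normalized detail convention; the sample values are made up):

```python
def haar_details(x):
    """Detail coefficients of one Haar DWT stage, with the 1/2-normalized
    convention: d[k] = -(x[2k] - x[2k+1]) / 2."""
    return [-(x[2 * k] - x[2 * k + 1]) / 2 for k in range(len(x) // 2)]


# A linear ramp gives constant first-stage details, so the second-stage
# details all vanish: no transmission instants are flagged on linear segments.
ramp = [2.0 * n for n in range(8)]
d1 = haar_details(ramp)            # all equal (constant slope)
d2 = haar_details(d1)              # all zero

# A signal whose slope changes at the third sample produces a non-zero
# second-stage coefficient there, flagging that samples must be transmitted.
kinked = [0.0, 1.0, 2.0, 5.0, 8.0, 11.0, 14.0, 17.0]
k2 = haar_details(haar_details(kinked))
```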
The acquisition time, which is important for determining the upper sampling frequency, can be calculated by the following expression:

T_ACQ = (Amplifier Settling Time) + (Hold Capacitor Charging Time) + (Temperature Coefficient)
      = 2 μs + T_C + [(50°C − 25°C)(0.05 μs/°C)] = 19.7 μs    (1)

The A/D conversion time per bit is defined as T_AD. The A/D conversion requires a minimum of 12 T_AD per 10-bit conversion. In our case the internal RC oscillator is used and the typical T_AD time is 4 μs [4].

III. TARGET SYSTEM DESCRIPTION

A. Basic Characteristics of the PIC16F877A

In this section we describe the basic features of the microcontroller used for our experiment and the parts of the controller which play the most important role in this work [4]. The basic core features of the microcontroller are:
- High-performance RISC CPU
- 35 single-word instructions
- All instructions are single-cycle, except for program branches, which take two cycles
- Operating speed: DC – 200 ns instruction cycle
- Up to 8k x 14 words of program memory
- Up to 368 x 8 bytes of data memory (RAM)
- Up to 256 x 8 bytes of EEPROM data memory

B. Experimental System

The schematic diagram of the input circuitry is presented in Fig. 5. The sensor (a potentiometer) produces a DC voltage signal depending on the measured value. That signal is sent to the microcontroller analog pin for A/D conversion. The signal generated by the sensor circuit is in the range of 0 – 5 VDC and is delivered to the PIC analog port for further A/D conversion. After performing a double-level decomposition, i.e. a discrete wavelet transform through high-pass filtering and down-sampling, and after the transmission rate estimation, the resulting non-uniformly, adaptively sampled signal is transmitted to the PC for further observation, using the serial communication port, which is supported by the microcontroller hardware and an RS-232 level converter.
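As a rough check of the sampling budget these figures imply, the sketch below combines an assumed acquisition time of about 20 µs with a 12·T_AD conversion; both figures are our assumptions of typical mid-range PIC A/D behavior, not values measured in this work:

```python
# Back-of-the-envelope sampling budget for an on-chip successive-approximation
# A/D converter. Assumed figures (not taken from this paper's measurements):
T_ACQ = 20e-6                 # s, sample-and-hold acquisition time (~20 us)
T_AD = 4e-6                   # s, per-bit conversion clock (internal RC osc.)
t_conversion = 12 * T_AD      # s, one complete 10-bit conversion (assumed 12 T_AD)

t_sample = T_ACQ + t_conversion
max_fs = 1.0 / t_sample       # upper bound on the achievable sampling frequency
# With these assumptions max_fs comes out just under 15 kHz, which is why the
# acquisition time matters when choosing the uniform sampling rate.
```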

Fig. 5. Schematic diagram for A/D conversion.

Fig. 6. Schematic diagram for serial communication.

The system for serial communication is depicted in Fig. 6.

C. The Implemented Algorithm

The program downloaded to the microcontroller non-volatile memory consists of three integral parts, as follows: signal acquisition, where the analog signal is converted to digital form and the sequence is stored in the microcontroller's RAM; signal processing, where two successive Haar discrete wavelet transforms are performed and the transmission rate is determined according to the obtained results; and signal transmission through the serial communication port. The next block diagram graphically describes the implemented algorithm, with enhanced details of the second part mentioned above.

Fig. 7. Block diagram of the proposed algorithm.

IV. EXPERIMENTAL RESULTS

The analog signal generated by the sensor is uniformly sampled and stored into the microcontroller's RAM. Fig. 8 shows the uniformly sampled signal of 64 samples acquired by the microcontroller. That signal is sent to the host computer for later comparison with the reconstructed signal in the final phase of the experiment. Due to the small amount of storage, only the detail coefficients are calculated in the first stage of the discrete Haar wavelet transform (Fig. 9). The further processing and the estimation of the instances at which the samples are permitted (or prevented) to be transmitted require another Haar discrete wavelet transform, performed on the detail coefficients, as illustrated in Fig. 10.

Fig. 8. Uniformly sampled signal generated by the sensor.

Fig. 13. Reconstructed and original signal.

On the basis of the transmitted samples, the analogue signal can be reconstructed in the host computer either by using a first-order hold circuit or by using more complex interpolation algorithms, such as cubic splines, as illustrated in Fig. 13.

V. CONCLUSION

Fig. 9. Detail coefficients of the DWT.

The experimental results show that the adaptive sampling algorithm is not complex to implement on a PIC microcontroller. The small storage resources can be overcome by using the same memory locations for the different stages of the discrete wavelet transforms. Possible applications could be extended to wireless sensor subsystems, where the reduced number of transmitted samples has a great impact on energy consumption.

Fig. 10. Absolute value of the second-stage DWT coefficients.

Fig. 11. Transmission rate estimation.

According to the results illustrated in Fig. 11, the transmission rate is estimated. Higher values correspond to a shorter sampling period, and vice versa. The samples of the transmitted signal are shown in Fig. 12.

VI. REFERENCES

[1] J. Gajda, R. Sroka, M. Stencel, A. Wajda, and T. Zeglen, "A Vehicle Classification Based on Inductive Loop Detectors", IEEE Instrumentation and Measurement Technology Conference, Budapest, Hungary, May 2001.
[2] R. Jaskulke and B. Himmel, "Event-Controlled Sampling System for Marine Research", IEEE Transactions on Instrumentation and Measurement, vol. 54, no. 3, June 2005.
[3] M. Petkovski, S. Bogdanova and M. Bogdanov, "A Simple Adaptive Sampling Algorithm", 14th Telecommunications Forum TELFOR 2006, Belgrade, Serbia, November 2006.
[4] PIC16F87X, 28/40-pin CMOS FLASH Microcontrollers, DS30292B, Microchip Technology Inc., 1999.

Fig. 12. Adaptive transmitted signal (asterisk) and original signal (solid line).
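The first-order-hold reconstruction mentioned above is plain linear interpolation between the non-uniformly spaced transmitted samples; a minimal sketch with made-up sample values:

```python
def first_order_hold(times, values, t):
    """Linear interpolation between the transmitted (non-uniform) samples,
    i.e. a software first-order hold."""
    pairs = list(zip(times, values))
    for (t0, v0), (t1, v1) in zip(pairs, pairs[1:]):
        if t0 <= t <= t1:
            return v0 + (v1 - v0) * (t - t0) / (t1 - t0)
    raise ValueError("t is outside the sampled interval")


# Hypothetical transmitted samples (sample index, value); indexes 1-3 and 5
# were omitted by the adaptive sampler because the signal changed linearly.
ts = [0, 4, 6, 7]
vs = [0.0, 8.0, 2.0, 1.0]
mid = first_order_hold(ts, vs, 5)   # halfway between the samples at 4 and 6
```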

224 SESSION PTDS&EM Power Transmission and Distribution Systems II & Electrical Machines


Fast High Voltage Signals Generator for Low Emittance Electron Gun Martin Paraliev

Abstract - The construction of X-ray Free Electron Lasers (XFEL) requires high brightness, low emittance, monoenergetic, relativistic electron beams. In order to achieve these parameters and to reduce the scale of the needed accelerator facilities, alternative electron sources should be designed. This article describes the progress of the 5kV pulser dedicated to providing HV signals for a cold (field) emission based cathode test bed.

Keywords - High Voltage, Low emittance, 5kV Pulse Generator, High gradient electron acceleration, Tesla coil.

I. INTRODUCTION

To reduce the size and cost of XFELs, better quality electron sources are needed, capable of producing high brightness, low emittance, monoenergetic electron beams and of preserving those parameters during acceleration. The Low Emittance electron Gun (LEG) project at the Paul Scherrer Institut (Switzerland) is dedicated to the study of alternative low emittance electron sources based on cold (field) electron emission and high gradient acceleration [1]. In order to achieve the surface electric field needed for cold emission, and the acceleration gradient necessary to prevent space-charge-driven beam emittance degradation, high voltage (HV) should be applied across the anode-cathode gap. To decrease the probability of a vacuum breakdown, the HV pulse should be short. The current task is to design and construct a HV pulse generator to test and to optimize cold emission based electron sources. The next step is, based on this design, to develop a LEG injector for an XFEL.

The behavior of the field emitted current from a vacuum-conductor interface is described by the Fowler-Nordheim equation (1), [2], [3], [4]:

J(E, φ) = A·(E²/φ)·e^(B/φ^(1/2))·e^(C·φ^(3/2)/E)   (1)

where J is the emitted current density in A/m², A, B and C are constants (A = .4×10⁻⁶, B = 9.85, C = -6.53×10⁹), E is the electric field in V/m (including the field enhancement factor) and φ is the work function of the emitting material in eV. The relative sensitivity ΔJ/J of the field emission process as a function of the relative change ΔE/E of the electric field E is given by equation (2):

ΔJ/J = (∂J/∂E)·(E/J)·(ΔE/E)   (2)

The shot-to-shot voltage stability is set by the requirement that the emitted current variation be less than %. For the targeted current (~5A) emitted by a ZrC (φ_ZrC = 3.5 eV) single tip (emitting area .5 pm²) the surface electric field (including the field enhancement factor) should be in the range of 7GV/m, and the relative voltage shot-to-shot stability should be better than %.

The electric gradient for the first phase of the project was set to 5MV/m (5kV across a 4mm anode-cathode gap). A direct connection to the ground potential will be beneficial because it will allow future development, including adding a fast gating impulse and testing of multilayer cathode array structures with additional DC focusing. This HV pulse generator is to be used for testing of different field emitting (FE) structures (single emitting tips, FE arrays, laser-assisted FE cathodes). In addition, the operating experience will be used for the design of a higher voltage (MV) pulse generator for a future XFEL low emittance electron injector facility.

Different pulsed power technologies were considered for the HV pulse generation [5], [6]. Table 1 gives an overview of their advantages and disadvantages. A critically coupled air-core resonant transformer was chosen as the best engineering compromise for the particular application.

Table 1. Advantages and disadvantages of the considered pulsed technologies capable of producing fast HV pulses. Criteria: HV stability and load, short/single pulse, scalability, variable output, unipolar pulse, long life, direct contact, low cost. Technologies: spark gaps (no, ++); semiconductor switches (no, -); magnetic voltage adders (yes, - -); resonant circuits (yes, +); coupled resonators (k = 0.6) (yes, -); transmission line based voltage adders (no, +); nonlinear transmission lines (yes, - -); nonlinear magnetic transformers (yes, - -).

Martin Paraliev, Paul Scherrer Institut, 53 Villigen PSI, Switzerland
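The sensitivity relation (2) is what drives the tight stability requirement: because the Fowler-Nordheim current depends exponentially on the field, a small relative field change is strongly amplified in the emitted current. A minimal numeric sketch, using constants of the Fowler-Nordheim form with illustrative values (assumptions for this sketch, not the paper's exact numbers):

```python
import math

# Illustrative Fowler-Nordheim-style constants (assumed values for the
# sketch; they follow the form J = A*(E^2/phi)*exp(B/sqrt(phi))*exp(C*phi^1.5/E)).
A, B, C = 1.5e-6, 10.4, -6.5e9

def ln_j(e_field, phi):
    """Natural log of the Fowler-Nordheim current density (avoids overflow)."""
    return math.log(A) + 2*math.log(e_field) - math.log(phi) \
           + B/math.sqrt(phi) + C*phi**1.5/e_field

def sensitivity(e_field, phi):
    """Logarithmic sensitivity S = (dJ/dE)*(E/J) = 2 - C*phi^1.5/E,
    so that dJ/J = S * dE/E, as in the relative-sensitivity relation (2)."""
    return 2.0 - C*phi**1.5/e_field

phi, e = 3.5, 7e9        # eV work function, V/m surface field (illustrative)
s = sensitivity(e, phi)

# Finite-difference check of the analytic sensitivity on ln J
h = 1e-4
s_num = (ln_j(e*(1+h), phi) - ln_j(e*(1-h), phi)) / (2*h)
print(f"S = {s:.2f} (numeric {s_num:.2f}): a 1% field change gives ~{s:.1f}% current change")
```

With these assumed numbers the sensitivity is of order ten, i.e. sub-percent current stability requires roughly an order of magnitude better field (and hence voltage) stability.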

II. RESONANT AIR-CORE TRANSFORMER

A. Critical coupling

In this particular design a rather unusual resonant configuration is used, where the resonant amplitude is built up in one cycle only. To achieve this, the primary and the secondary resonant circuits are tuned to the same frequency and the coupling K between them is set to 0.6 (critical coupling). In this condition the transformer resonance is split into two spectral lines, the frequency of one being twice the frequency of the other. The voltage on the secondary side (as well as on the primary) is a superposition of the fundamental and the second harmonic with zero phase difference between them.

The equivalent circuit shown in Fig. 2 is used to study analytically the behavior of the air-core resonant transformer. After referring the secondary to the primary side, the differential equation system (3) was derived. It describes analytically the behavior of the ideal transformer:

ü1 + k1·u1 - k2·u2 = 0
ü2 + k1·u2 - k2·u1 = 0   (3)

where

k1 = 1/(LC(1 - K²))   (4)

k2 = K/(LC(1 - K²))   (5)

The solution of the differential system is given below [8]:

u1 = (1/2)[cos(ω1·t) + cos(ω2·t)]   (6)

u2 = (1/2)[cos(ω1·t) - cos(ω2·t)]   (7)

where

ω1 = 1/√(LC(1 + K))   (8)

ω2 = 1/√(LC(1 - K))   (9)

and ω1 and ω2 are the two resonant lines of the air-core transformer. The critical coupling factor K was analytically calculated for this ideal case using the condition ω2 = 2ω1 [7]. The normalized voltage u1 on the primary side and u2 on the secondary side are shown in Fig. 1.

Fig. 1. Primary (u1) and secondary (u2) voltage signals of an ideal critically coupled (K = 0.6) resonant transformer

Fig. 2. Equivalent circuit of an ideal air-core resonant transformer

The lossy case was simulated (using PSpice) to find the optimal values of the coupling factor and tuning of a real transformer structure [7].
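The critically coupled behavior described by Eqs. (6)-(9) can be checked numerically; a minimal sketch with illustrative L and C values (assumptions, not the machine's actual parameters):

```python
import math

# Ideal critically coupled air-core transformer, following Eqs. (6)-(9):
# u1 = (1/2)[cos(w1 t) + cos(w2 t)], u2 = (1/2)[cos(w1 t) - cos(w2 t)],
# w1 = 1/sqrt(LC(1+K)), w2 = 1/sqrt(LC(1-K)).  L and C are placeholders.
L, C, K = 1e-6, 1e-9, 0.6

w1 = 1.0 / math.sqrt(L * C * (1 + K))
w2 = 1.0 / math.sqrt(L * C * (1 - K))

def u1(t): return 0.5 * (math.cos(w1*t) + math.cos(w2*t))
def u2(t): return 0.5 * (math.cos(w1*t) - math.cos(w2*t))

# Critical coupling K = 0.6 puts the two resonant lines a factor 2 apart:
# (1+K)/(1-K) = 4  =>  w2 = 2*w1.
print(f"w2/w1 = {w2/w1:.6f}")

T1 = 2 * math.pi / w1          # one fundamental cycle
# Half a fundamental cycle: the energy is fully on the secondary (u1=0, u2=-1);
# after one full cycle it is fully back on the primary (u1=1, u2=0), the
# moment at which an additional switch could remove it and stop the ringing.
print(f"u1(T/2)={u1(T1/2):+.3f}  u2(T/2)={u2(T1/2):+.3f}")
print(f"u1(T)  ={u1(T1):+.3f}  u2(T)  ={u2(T1):+.3f}")
```

The complete energy exchange within one cycle is the property exploited by the tail-biter concept discussed later in the paper.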
B. Transformer geometry

In order to achieve the necessary coupling without a magnetic core, the transformer geometry was studied. Scaled models and numerical simulations were employed to determine the optimum coil layout. Simulated and measured results for the reference layouts were in good agreement. A single-turn primary around a flat spiral secondary gave the best coupling for the given transformer dimensions and coil conductor cross-sections. An interesting observation was that the coupling factor does not depend much on the shape of the conductor's cross-section, but rather on the perimeter of the conductor's cross-section. In general, a larger conductor cross-section perimeter gives better coupling.

Fig. 3. Measured and simulated coupling factors of reference layouts as a function of the number of secondary turns

In order to keep the transformer pulse response fast, the length of the secondary winding was kept as short as possible. This adversely affects the transformer step-up ratio and requires a higher primary voltage.

C. Commutation

The type of switching element on the primary side was determined by the operating voltage, speed and lifetime requirements. There are basically two suitable switch types: thyratrons and spark gaps. In spite of their low cost, spark gaps were not used because of their short lifetime and jitter. Two air-cooled thyratron switches are used to energize the resonant step-up transformer. They are connected in parallel to decrease the stray inductance. The thyratrons switch the charged primary capacitor to the primary transformer winding. A hollow anode thyratron (CX75A) was chosen in order to tolerate reverse current. The thyratrons are placed in two electrically sealed aluminum boxes to screen the electromagnetic noise generated during commutation.

D. Construction

To ensure the HV performance of the transformer, it is operated inside a pressurized tank with sulfur hexafluoride (SF6) gas. The secondary coil is fixed in a groove machined in an insulating plastic base. Concentric undulations are machined on the back side of the transformer base to prevent surface discharges. The single-turn primary is an 8mm copper strip wound around the secondary coil. The primary capacitor is split in two capacitor banks, one in each thyratron box, and is connected to the primary coil through a low inductance gas-tight feedthrough. The secondary HV pulse is conducted by the middle stalk and is applied to the vacuum chamber through a large ceramic feedthrough. A removable cathode electrode is attached to the other end of the stalk. The anode-cathode separation is adjustable (..3mm). The entire pressure tank sits on a precision 5-axis positioning system in order to control the location of the cathode with respect to the anode and the rest of the accelerating structures. Fig. 4 shows a cross section of the pulse generator 3D CAD model.

Fig. 4. Cross section of the HV pulse generator: a - thyratron, b - capacitor bank, c - feedthrough, d - HV transformer, e - pressure tank, f - vacuum feedthrough, g - middle stalk, h - vacuum chamber with anode and cathode

E. Electrical modelling

A detailed electrical model was made to study the HV pulse generator behavior. A numerical PC based simulator (PSpice) was used to model the output waveforms and to define the optimum tuning of the resonant transformer.

F. Measured results

The first tests proved the capability of the pulser to deliver an HV pulse to the anode-cathode system. The designed pulse amplitude of -5kV (5ns FWHM) was reached successfully with % voltage margin. Fig. 5 shows a very good agreement between the measured and the simulated waveforms.

Fig. 5. Simulated and measured output waveforms

Due to mechanical modifications the secondary inductance was higher than the designed value. This shifted the tuning range of the transformer and made tuning for critical coupling impossible. To decrease the shot-to-shot variations caused by fluctuations of the pulsed charging supply voltage, an active amplitude stabilization system was built. Stability measurements over hundred consecutive pulses (Fig. 6) showed a reduction of the shot-to-shot fluctuation from about .4% (stabilization off) to less than .4% (stabilization on).

Fig. 6. Shot-to-shot stability measurement with and without active stabilization
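The stability figure of merit used above (the relative deviation of each pulse peak from the mean) can be computed as in the sketch below; the two pulse trains here are synthetic, illustrative data, not the measured ones:

```python
import random
import statistics

random.seed(1)

def relative_peak_error(peaks):
    """Per-shot relative deviation (in %) of each pulse peak from the mean."""
    mean = statistics.mean(peaks)
    return [100.0 * (p - mean) / mean for p in peaks]

# Synthetic normalized pulse-peak trains (illustrative only): the stabilized
# train has a much smaller spread, mimicking the on/off comparison of Fig. 6.
off = [1.0 + random.gauss(0, 0.004) for _ in range(100)]
on  = [1.0 + random.gauss(0, 0.001) for _ in range(100)]

spread_off = max(abs(e) for e in relative_peak_error(off))
spread_on  = max(abs(e) for e in relative_peak_error(on))
print(f"stabilization off: {spread_off:.2f}%   stabilization on: {spread_on:.2f}%")
```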

G. Radiation safety

Due to the HV potentials in the vacuum tank, any parasitically emitted electrons are accelerated to high energies, and their impact on the vacuum chamber generates a shower of secondary electrons and high energy photons, including X-rays. In order to ensure radiation safety for the personnel, the HV pulse generator is situated in a bunker with m thick concrete walls. A built-in safety interlock system blocks the HV if there are people in the bunker. The control electronics is placed in two 19-inch racks, one next to the pulser and the other in the control area. Fig. 7 shows the present position of the HV pulse generator in the bunker.

Fig. 7. The HV pulse generator and the control electronics rack in the concrete bunker

III. CURRENT TASKS

The HV transformer is being modified to decrease the secondary inductance. This will make tuning for critical coupling possible, and the positive peaks before and after the main pulse will then have the same amplitude. There is an ongoing task to add a tail biter to cut off the oscillations following the main pulse. The electric simulations showed that it is possible to take the energy out of the system after the first cycle. Unlike common resonant systems, in a critically coupled one the energy is fully exchanged between the resonators within one cycle. When the primary capacitor is at maximum voltage, the currents in the inductors and the voltage over the secondary capacitance are zero. At this moment the entire energy of the system is on the primary side. With an additional switch to discharge the capacitors quickly, the energy is removed and the oscillations stop. A motorized spark gap is being built into a third aluminum box to prove the concept.

Measuring the output voltage of the pulse generator is not a trivial task. The standard RC dividers suitable for this voltage amplitude are large and have limited bandwidth.
A capacitive divider is used to measure the HV pulse. It is used to monitor the output from the control area. An optical HV measurement system is being designed to give a second, independent measurement of the output pulse. This serves to increase the reliability of the system in case of slow degradation or failure of the voltage monitor.

IV. SUMMARY

An extensive study was done to choose the most suitable technology for generating short HV pulses for a cold emission cathode test bed. A critically coupled air-core transformer was chosen as the best engineering compromise for this particular application. The behavior of the coupled resonators was studied analytically and by simulations. The tuning and coupling of the transformer were parametrically optimized. Using scaled physical models and simulations, the magnetic coupling of different transformer layouts was examined. The measured results of the reference models were in good agreement with the simulations. Based on these studies, a fast HV pulse generator was successfully designed and constructed. The tests confirmed that it is capable of reaching the designed pulse voltage amplitude of -5kV with a pulse length of 5ns (FWHM). The stability studies showed shot-to-shot peak voltage fluctuations within .4%. This was achieved by employing an active voltage stabilization subsystem. A detailed equivalent circuit of the pulse generator was created in order to further optimize its performance. The measured output waveforms were in very good agreement with the numerically generated ones. There is an ongoing task to prepare the pulse generator for the high gradient tests and to define the possible ways to upgrade the pulser towards the MV goal.

ACKNOWLEDGEMENT

The constant support of our group leader C. Gough and of the other pulsed magnets group members, S. Ivkovic and B. Weiersmueller, is highly appreciated. The help provided by the construction department of PSI, and especially by W. Pfister, was essential for the advancement of the project.

REFERENCES

[1] R. Abela, R. Bakker, M. Chergui, L. Rivkin, J. Friso van der Veen, A. Wrulich, "Ultrafast X-ray Science with a Free Electron Laser at PSI".
[2] F. Charbonnier, "Developing and using the field emitter as a high intensity electron source", Applied Surface Science 94/95, 1996, pp. 6-43.
[3] T. J. Levis, "Some Factors Influencing Field Emission and the Fowler-Nordheim Law", Proc. Phys. Soc. (London) 68B.
[4] K. Li, "Novel Electron Sources for a Low Emittance Gun", Diploma thesis, Institute for Particle Physics, ETH Zurich, 2004.
[5] C. Gough, M. Paraliev, "Pulsed power techniques for the Low Emittance Gun (LEG)", PSI Scientific and Technical Report 2003, Vol. VI, p. 76, Villigen PSI, Switzerland.
[6] C. Gough, "Audit on High Voltage High Gradient Generation for the PSI-FEL Project", Villigen PSI, 2007.
[7] M. Paraliev, C. Gough, S. Ivkovic, "Tesla Coil Design for Electron Gun Application", 15th IEEE International Pulsed Power Conference, Monterey, CA, USA, 2005.
[8] F. L. H. Wolfs, Physics Lecture Notes, University of Rochester, Rochester, NY, USA.

Effect of Perforation in High Power Bolted Busbar Connections Raina T. Tzeneva and Peter D. Dineff

Abstract - The work reported here describes how introducing perforations (groups of two or three small holes placed in a proper way around the bolt holes) in high power bolted busbar connections significantly increases the true contact area and therefore reduces the contact resistance. The new design is compared with the classical bolted busbar connection with the help of several computer models. It is estimated that the new design leads to a considerable rise of the contact pressure and contact penetration in the contact interface between the busbars.

Keywords - Bolted busbar high power connections, Contact penetration, Contact pressure, Contact resistance, Groups of small holes, New hole shape.

I. INTRODUCTION

Steadily increasing energy consumption in densely populated regions imposes severe operating conditions on transmission and distribution systems, which have to carry greater loads than in the past and operate at higher temperatures. Power connections are generally the weak links in electrical transmission and distribution systems, both overhead and underground. Mainly, there are two factors that affect the reliability of a power connection. The first is the design of the connection and the material from which it is fabricated. The second is the environment to which the connection is exposed. The fundamental requirements for the design of reliable high-power connections used in bare overhead lines are given in [1]. The fundamental design criteria for power connectors are: maximization of the true electric contact area, optimization of the frictional forces with the conductors (buses), minimization of creep and stress relaxation, minimization of fretting and galvanic corrosion, and minimization of differential thermal expansion along and normal to the interfaces.
Summarizing the major connection design criteria mentioned above, it is worth noting that all the criteria can be met simultaneously by working out a design that achieves a sufficiently large contact load, a large area of metal-to-metal contact, and sufficient elastic energy storage in the connection to maintain an acceptable contact load throughout the service life of the connection.

II. THEORETICAL BACKGROUND

All joint surfaces are rough and their surface topography shows summits and valleys. Thus, under the joint force F, two joint surfaces come into mechanical contact only at their surface summits. Electrical current lines are highly constricted when passing through the contact spots, as presented schematically in Fig. 1a. This constriction amplifies the electric flow resistance and hence the power loss. Obviously, the more contact spots there are, the smaller the power loss at the interface of the conductors. Power connections with superior performance are designed to maximize both the number and the life of the contact spots. For this reason, it is essential to keep in mind that the load bearing area in an electric joint is only a fraction of the overlapping area, known as the apparent area. Metal surfaces, e.g. those of copper conductors, are often covered with oxide or other insulating layers. As a consequence, the load bearing area may have regions that do not contribute to the current flow, since only a fraction of it may have metallic or quasi-metallic contact, and the real area of electric contact, i.e. the conducting area, could be smaller than the load bearing area (Fig. 1b) [2].

Raina T. Tzeneva is with the Department of Electrical Apparatus, Faculty of Electrical Engineering, Technical University of Sofia, 8 Kliment Ohridski, Sofia, Bulgaria. Peter D. Dineff is with the Department of Electrical Apparatus, Faculty of Electrical Engineering, Technical University of Sofia, 8 Kliment Ohridski, Sofia, Bulgaria.

Fig. 1. a) Contact surface and current lines; b) Contact area with α-spots

A conducting area is referred to as quasi-metallic when it is covered with a thin (< Å) film that can be tunneled through

by electrons. This quasi-metallic electric contact results in a relatively small film resistance R_f. The summits of the two electric joint surfaces, being in metallic or quasi-metallic contact, form the so-called α-spots, where the current lines bundle together, causing the constriction resistance R_c. The number n, the shape and the area of the α-spots are generally stochastic and depend on the material parameters of the conductor, the topography of the joint surfaces and the joint force. For simplicity it is often assumed that the α-spots are circular. For a single circular α-spot, the constriction resistance R_c depends on its radius a and the resistivity ρ of the conductor material.

III. MODELLING BOLTED BUSBAR CONNECTIONS

In this paper, the mechanical changes associated with the contact penetration depth and the contact pressure in the contact area between two busbars of a high power bolted busbar connection are studied with the help of the finite element simulation tool ANSYS Workbench. Since a higher contact penetration increases the α-spots both in number and in size, which in turn expands the true contact area and decreases the contact resistance, a new hole shape could be introduced for this connection. The new slotted hole shape arises from [3]. Boychenko and Dzektser have shown that changing the connection design can be equally effective in increasing the contact area. In other words, by cutting longitudinal slots in the busbar, the actual surface area of a joint can be increased by 1.5 to 1.7 times compared to that without slots. The contact resistance of a joint configuration with slots is 3-4% lower than that without slots, and the joint is mechanically and electrically more stable when subjected to current cycling tests [4], [5]. The beneficial effect of sectioning the busbar is attributed to a uniform contact pressure distribution under the bolt, which in turn creates a larger contact area.
This case is investigated in [6]. The idea is developed in [7], [8], where a new slotted hole shape for bolted high power connections is proposed. Fig. 2 shows the hole shape of the investigated cases. A significant rise of the contact pressure and contact penetration is obtained. But cutting these thin slots in copper or aluminum busbars is a difficult procedure, and in this investigation the slots are replaced by groups of two or three small holes. For that purpose, 13 different models have been investigated:

- case 1: the classical case, copper busbars with bolt holes;
- case 2: two horizontal groups of two holes of diameter Ømm, with a distance of .9mm between the holes, parallel to the busbar axis;
- case 3: two vertical groups of two holes of diameter Ømm and a distance of .9mm between the holes;
- case 4: mixed, one of the busbars in the connection is of case 2 and the other is of case 3;
- case 5: eight groups of two holes of diameter Ømm and a distance of .9mm between the holes, displaced at an angle of 45 degrees;
- case 6: two horizontal groups of three holes of diameter Ø.8mm, parallel to the busbar axis;
- case 7: two vertical groups of three holes of diameter Ø.8mm;
- case 8: four groups (two horizontal and two vertical) of three holes of diameter Ø.8mm;
- case 9: four groups of three holes of diameter Ø.8mm, lying on two mutually perpendicular axes rotated at an angle of 45 degrees in relation to the busbar axes;
- case 10: two horizontal groups of three holes of diameter Ø.9mm;
- case 11: two vertical groups of three holes of diameter Ø.9mm;
- case 12: four groups (two horizontal and two vertical) of three holes of diameter Ø.9mm;
- case 13: four groups of three holes of diameter Ø.9mm, lying on two mutually perpendicular axes rotated at an angle of 45 degrees in relation to the busbar axes.

Fig. 2. Hole shape with 2 or 4 slots

Additionally, a new shape of slotted holes, in which the slots end with small circular holes, was proposed and investigated in [9]; it is illustrated in Fig. 3. Positive results for the contact pressure and contact penetration were obtained there too.

Fig. 3. Hole shape with slots ending with small circular holes

Fig. 4 shows the hole shapes of the cases with two groups of small holes (cases 2, 3, 6, 7, 10 and 11).

Fig. 4. Hole shape with two groups of small holes

Fig. 5 presents the new hole shapes with 4 and 8 groups of small holes (cases 5, 8, 9, 12 and 13).

Fig. 5. Hole shape with 4 and 8 groups of small holes
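The contact-resistance motivation behind all of these geometries traces back to the constriction resistance of the α-spots described in Section II. A minimal sketch, assuming the classical Holm relation R_c = ρ/(2a) for a circular α-spot (the paper states only that R_c depends on a and ρ; the specific form and the numbers below are assumptions):

```python
# Constriction resistance of circular a-spots, assuming Holm's classical
# relation R_c = rho/(2a).  Copper resistivity and spot sizes below are
# illustrative values, not measurements from the paper.
RHO_CU = 1.7e-8   # ohm*m, copper resistivity at room temperature

def constriction_resistance(radius_m, rho=RHO_CU):
    """Constriction resistance of one circular a-spot of radius a."""
    return rho / (2.0 * radius_m)

def joint_resistance(n_spots, radius_m, rho=RHO_CU):
    """n equal a-spots conducting in parallel (film resistance neglected)."""
    return constriction_resistance(radius_m, rho) / n_spots

# More (or larger) a-spots -> lower joint resistance, which is why designs
# that enlarge the true contact area (slots, perforations) help.
r1 = joint_resistance(10, 50e-6)    # 10 spots of 50 um radius
r2 = joint_resistance(40, 50e-6)    # 4x more spots
print(f"10 spots: {r1*1e6:.2f} uOhm, 40 spots: {r2*1e6:.2f} uOhm")
```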

The cases are suggested to:
- decrease the radial loads on the bolts that emerge after the connection is assembled;
- increase the contact penetration in the busbars near the bolt area;
- maximize the true area of metal-to-metal contact in the electrical interface.

The investigated assembly consists of:
- copper busbars (Young's modulus E = .. Pa, Poisson's ratio µ = .34, width 6mm, height mm, length 6mm, busbar overlap 6mm), with bolt holes of Ø.5mm;
- fasteners: bolts Hex Bolt GradeB_ISO 45 M x 4 x 4 N (steel, E = . Pa, µ = .3); nuts Hex Nut Style GradeAB_ISO 43 M W N (steel, E = . Pa, µ = .3); washers Plain Washer Small Grade A_ISO 79 (steel, E = . Pa, µ = .3).

The tension in each bolt is F = 5N. The models are studied for contact pressure and penetration within the busbars' electrical interface. Fig. 6 shows the contact pressure for the case with two vertical groups of three small holes. It is obvious that the pressure in the area surrounding the perforations is increased significantly.

Fig. 6. Contact pressure for the case with two vertical groups of 3 small holes

The contact penetration is shown in Fig. 7. When the four groups of small holes are introduced, the high penetration zone expands, covering the region between the perforations.

Fig. 7. Contact penetration for the case with 4 groups of 3 small holes

The aspect of model meshing is a key phase for a proper analysis of the problem. On the one hand, it is an established certainty that the quality of the physical space triangulation is closely related to a consistent mapping between parametric and physical space. On the other hand, a properly meshed model presents a fairly close-to-reality, detailed picture of the stress distributions, which is a hard task for an analytical solution and is usually only an averaged value. It is evident from Fig. 6 and Fig. 7, with their uneven allocation of pressure and penetration, that the perforated cases bring even more complexity. The meshed model incorporates the following elements: 10-Node Quadratic Tetrahedron, 20-Node Quadratic Hexahedron and 15-Node Quadratic Wedge. Contacts are meshed with Quadratic Quadrilateral (or Triangular) Contact and Target elements.

All thirteen cases have been evaluated by comparing the maximum values of pressure and penetration for each of them, as well as the percentage participation of the 8 zones according to the legends. With that end in view, all zones are set to have equal upper and lower limits. The zones of highest pressure or penetration are set to equal lower limits, while the maximum values define their upper limits. This comparison procedure is performed with the help of the Adobe Photoshop software, where each colored zone is identified by a certain number of pixels. The results obtained are summarized in Fig. 9 and Fig. 10.

IV. DISCUSSION AND CONCLUSIONS

When the busbars have groups of small holes around the bolt holes, a zone of significantly high contact pressure and contact penetration emerges around the perforations. This is confirmed by the models presented in Fig. 6 and Fig. 7. Contact pressure data for the thirteen cases are summarized in Fig. 9. Based on Fig. 9 and Fig. 11, it is obvious that in the extraordinary mixed case 4 the maximum pressure is 88.73MPa, a multiple of the value for the classical case 1. Additionally, the zone of pressure > 38.86MPa occupies .83% of the entire contact area, while in the classical case it is .75%.

Fig. 9. Maximum contact pressure and % occupation of the zone of P > 38.86MPa for all cases

Fig. 10. Maximum contact penetration and % occupation of the zone of μ > .673μm for all cases

Fig. 11. Percent occupation of the P > 38.86MPa zone for all cases

Fig. 12. Percent occupation of the μ > .673μm zone for all cases

The other excellent case shows a maximum contact pressure of 67.9MPa and a 3.43% occupation of the zone of pressure > 38.86MPa (3.4 times larger than that of the classical case 1). The results for the contact penetration are summarized in Fig. 10 and Fig. 12. Again, in the best mixed case 4 the contact penetration is .377μm. This value is approximately 5 times the penetration of the classical case (Fig. 10). The part of the contact area with penetration > .673μm occupies 55.4%, while for the classical case it occupies .69%. Another excellent case has a maximum contact penetration of .36μm and a 6.44% occupation of the zone of penetration > .673μm (36.35 times the value for the classical case 1).

REFERENCES

[1] R. S. Timsit, "The Technology of High Power Connections: A Review", International Conference on Electrical Contacts, Zurich, Switzerland, p. 56.
[2] R. Holm, Electric Contacts, Theory and Application, Berlin, Germany, Springer-Verlag, 1976.
[3] V. I. Boychenko, N. N. Dzektser, Busbar Connections (in Russian), Energia, 1978.
[4] M. Braunovic, "Effect of Connection Design on the Contact Resistance of High Power Overlapping Bolted Joints", IEEE Transactions on Components, Packaging and Manufacturing Technology, vol. 5, Issue 4, Dec.
[5] M. Braunovic, "Effect of Connection Design on the Performance of Service Entrance Power Connectors", IEEE Transactions on Components, Packaging and Manufacturing Technology, vol. 7, pp. 7-78, March 2004.
[6] R. Tzeneva, P.
Dineff and Y. Slavtchev, "Bolted Busbar Connections", XIV International Symposium on Electrical Apparatus and Technologies SIELA 2005, Proceedings of papers, vol. I, pp. 7-, Plovdiv, Bulgaria, June 2005.
[7] R. Tzeneva, Y. Slavtchev, N. Mastorakis and V. Mladenov, "Bolted Busbar Connections with Slotted Bolt Holes", WSEAS Transactions on Circuits and Systems, Issue 7, vol. 5, pp. -7, July 2006.
[8] R. Tzeneva, Y. Slavtchev and V. Mladenov, "Bolted Busbar Connections with Slotted Bolt Holes", Proceedings of the WSEAS Conference on Circuits, Vouliagmeni Beach, Athens, Greece, pp. 9-95, July 2006.
[9] R. Tzeneva, P. Dineff, "Bolted Busbar Connections with Particularly Slotted Bolt Holes", Proceedings of the XLI International Conference on Information, Communication and Energy Systems and Technologies ICEST 2006, Sofia, Bulgaria, 2006.

The Influence of the Supply Voltage Unbalance on the Squirrel Cage Induction Motor Operation Georgi I. Ganev and George T. Todorov

Abstract - The influence of three phase supply voltage unbalance on the operation of a squirrel cage induction motor is investigated. A method for predicting the motor's currents is presented. Experimental investigations are performed, and the results are analyzed and compared to the predicted values.

Keywords - squirrel cage induction motor operation, supply voltage unbalance, power quality.

I. INTRODUCTION

Due to their better characteristics and cost, induction motors are the most commonly used devices that convert alternating current energy into mechanical energy. More than one-half of the total electricity is consumed by motor-driving systems. A large part of them are induction motors with small rated power (less than kW). Since induction motors are the most popular motors in industry, it is very important to carry out studies of the effect of power quality on the efficiency and reliability of three-phase motors. On the other side, the optimization of induction motor operation will improve the operation of the whole power system.

The two-axis transformation [1] and the symmetrical components transformation [7] are usually applied for induction motor investigations. The D-Q transformation is used for the investigation of induction machine transient processes [8], the influence of non-sinusoidal supply voltages [5] and induction motors driven by speed-control systems [3]. The Fortescue transformation is commonly used for studying induction motor steady-state operation [,,3,4,6,9,,], including symmetrical and non-symmetrical behavior. In the case of a non-symmetrical supply voltage, the three phase voltage system is decomposed into three subsystems: with positive, with negative and with zero sequence.
This way, the induction motor operation with unbalanced supply voltage is treated as the simultaneous operation of two machines: the first one operates as a motor (producing a driving torque) and the second one as a brake (producing a torque in the opposite direction). The zero-sequence currents are neglected. Some disadvantages of this method should be mentioned, in spite of its popularity: the induction machine parameters for the positive and the negative sequence are assumed to be constants; the saturation of the induction machine magnetic core is neglected; the superposition principle is used, treating the induction machine as two simple machines with motor and brake operation, etc. Most of the papers do not discuss the influence of the voltage unbalance on the motor's currents and efficiency.

An investigation of the influence of the supply voltage unbalance on the operating performance of a three-phase squirrel cage induction motor of small power is presented in this paper. An approach for predicting the motor currents is proposed for the case of unbalanced supply with a deviation of one phase voltage.

Georgi I. Ganev is with the Technical University Sofia, branch Plovdiv, Electrical Engineering Department, 5 Tzanko Dustabanov St., Plovdiv, Bulgaria. George T. Todorov is with the Technical University Sofia, Electrical Engineering Department, 8 Kliment Ohridski St., Bulgaria.

II. UNBALANCED CURRENT PREDICTION METHOD

A steady state operation of a symmetrical three-phase squirrel cage induction motor is studied in the present paper. The motor is supplied by an unbalanced three-phase system with variation of the voltage of only one phase. It is assumed that the stator winding is Y connected, the motor's parameters are known in advance, and they remain constant (independent of the magnetic core saturation).
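The Fortescue (symmetrical components) decomposition referred to in the Introduction can be sketched as follows; the 230 V phasors are illustrative values, not the paper's test data:

```python
import cmath
import math

# Fortescue transformation: decompose phasors (Va, Vb, Vc) into zero-,
# positive- and negative-sequence components using a = exp(j*2*pi/3).
A_OP = cmath.exp(1j * 2 * math.pi / 3)

def symmetrical_components(va, vb, vc):
    v0 = (va + vb + vc) / 3                     # zero sequence
    v1 = (va + A_OP * vb + A_OP**2 * vc) / 3    # positive sequence
    v2 = (va + A_OP**2 * vb + A_OP * vc) / 3    # negative sequence
    return v0, v1, v2

# Balanced set: only the positive-sequence component survives.
va = 230 + 0j
vb = 230 * A_OP**2        # -120 degrees
vc = 230 * A_OP           # +120 degrees
v0, v1, v2 = symmetrical_components(va, vb, vc)
print(abs(v0), abs(v1), abs(v2))   # ~0, 230, ~0

# One sagging phase (the unbalance type studied in this paper) produces a
# negative-sequence component, i.e. the 'braking machine' term.
v0u, v1u, v2u = symmetrical_components(0.9 * va, vb, vc)
print(abs(v2u))
```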
When the squirrel cage induction motor is supplied by a three-phase symmetrical voltage system, the consumed three-phase currents also form a symmetrical system. Each phase current lags behind the corresponding phase voltage by an angle φ. Assume that the voltages applied to phases B and C remain constant and the voltage of phase A decreases. This causes a change in the current drawn by phase A, but also some variation in the currents of the other phases [4]. As shown in Fig. 1, the currents I_b and I_c are changed both in magnitude and in phase.

Fig. 1. Phasor diagram of the input currents

Δi_a + Δi_b + Δi_c = 0   (1)

As the voltages U_b and U_c are constant, we can assert that the variation of the currents I_b and I_c is caused only by the change of the current I_a, and

Δi_b = Δi_c   (2)

The current I_b shifts in the counter-clockwise direction and the current I_c clockwise.

As seen in Fig. 1,

cos(δ + π/3) = I_a/(2·I_b) = (1 + Δi_a)·I/(2(1 + Δi_b)·I)   (3)

where Δi_a and Δi_b are the relative changes of the phase A and B currents respectively, and δ is the shift of I_b and I_c in relation to the corresponding phasors at symmetrical power supply (the symmetrical currents are shifted by 120° from each other and by the angle φ with respect to the phase voltages). As can be seen, the current I_b leads, while I_c lags, the corresponding current at symmetrical conditions by the angle δ. Let us assume that the torque and the power output of the induction motor are unaltered at symmetrical and non-symmetrical supply. Hence

U_a·I_a·cos φ_a + U_b·I_b·cos φ_b + U_c·I_c·cos φ_c = 3·U·I·cos φ   (4)

The relative variations of the voltages, currents and their phase shifts are:

Δi_a = ΔI_a/I;  Δi_b = ΔI_b/I;  Δi_c = ΔI_c/I;  Δu = ΔU/U

so that (4) becomes

U(1 + Δu)·I(1 + Δi_a)·cos φ_a + U·I(1 + Δi_b)·cos(φ_b + Δφ_b) + U·I(1 + Δi_c)·cos(φ_c + Δφ_c) = 3·U·I·cos φ

Furthermore, Δφ_b = +δ and Δφ_c = -δ (see Fig. 1). Using (1) and (2), i.e. Δi_b = Δi_c = -Δi_a/2, equation (4) can be rearranged as follows:

(1 + Δu)(1 + Δi_a) + (2 - Δi_a)·cos δ = 3   (5)

The predicted changes of the currents Δi_a, Δi_b and the shift angle δ are:

Δi_a = [±6√3·√(Δu(Δu + 1)) - 4Δu²]/(2Δu + 3)²   (6)

Δi_b = -Δi_a/2   (7)

δ = arccos[(1 + Δi_a)/(2 - Δi_a)] - π/3   (8)

The predicted values of Δi_a, Δi_b and δ versus the voltage deviation Δu are presented in Fig. 2.

III. EXPERIMENTAL RESULTS

A series of experiments has been made with an induction motor type AO-7A with the following parameters: U_r = 38 V, n_r = 86 min⁻¹, I_r = .94 A, η_r = .7, cos φ = .83, I_st/I_r = 5., M_st/M_r = , M_max/M_r = .5. The motor under test was coupled to a DC generator and supplied by three autotransformers type АТЛ-9 with rated current I_r = 9A. A power quality analyzer CA833, serial number 6, produced by Chauvin Arnoux, has been used to measure the voltages, currents and power of the motor.
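The prediction method reduces to two relations in Δi_a and δ. The sketch below solves them both numerically and in closed form for one deviation value; it assumes the power balance (1+Δu)(1+Δi_a) + (2-Δi_a)·cosδ = 3 together with the phasor relation cos(δ+π/3) = (1+Δi_a)/(2-Δi_a) and Δi_b = Δi_c = -Δi_a/2 (forms assumed here for the sketch; the function names are illustrative):

```python
import math

# Assumed relations of the one-phase-deviation prediction method:
#   power balance:   (1+du)(1+dia) + (2-dia)*cos(delta) = 3
#   phasor closure:  cos(delta + pi/3) = (1+dia)/(2-dia)
# with dib = dic = -dia/2.  Names mirror the paper's Δu, Δi_a, δ.

def delta_from_dia(dia):
    """Shift angle delta implied by the phasor relation."""
    return math.acos((1 + dia) / (2 - dia)) - math.pi / 3

def power_residual(dia, du):
    """How far the power balance is from being satisfied."""
    return (1 + du) * (1 + dia) + (2 - dia) * math.cos(delta_from_dia(dia)) - 3

def solve_dia(du, lo=-0.49, hi=0.0):
    """Bisection for the root of the power balance on [lo, hi]."""
    flo = power_residual(lo, du)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        fmid = power_residual(mid, du)
        if flo * fmid <= 0:
            hi = mid
        else:
            lo, flo = mid, fmid
    return 0.5 * (lo + hi)

def dia_closed_form(du, sign=-1):
    """Closed form obtained by eliminating delta; sign selects the branch."""
    return (sign * 6 * math.sqrt(3) * math.sqrt(du * (du + 1)) - 4 * du**2) \
           / (3 + 2 * du) ** 2

du = 0.10                      # +10% deviation on phase A (illustrative)
dia = solve_dia(du)
print(f"dia = {dia:+.4f}, dib = {-dia/2:+.4f}, "
      f"delta = {math.degrees(delta_from_dia(dia)):+.2f} deg")
print(f"closed form: {dia_closed_form(du):+.4f}")
```

The closed-form branch and the numeric root agree, which serves as a consistency check on the two assumed relations.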
The motor's characteristics have been taken under balanced supply voltage and under two cases of unbalanced supply, changing the voltage of one phase (phase A). The same load torque has been applied to the motor shaft in all three cases. The measured values of the unbalance factor versus the one-phase voltage deviation (phase A) are shown in Fig. 3.

Fig. 3. The voltage unbalance versus the voltage deviation

Fig. 4 shows the three-phase voltage phasors and the three-phase current phasors for three cases. Rated load has been applied to the motor shaft in all three cases. Figs. 4(a) and 4(b) show the voltage and current phasors respectively in the case of symmetrical supply. Figs. 4(c) and 4(d) show the phasors when the supply voltage of phase A is bigger than the rated value, and Figs. 4(e) and 4(f) show the phasors at U_A = 0,88·U_r. All values of the measured currents are in mA.

Fig. 4. The voltage and current phasors

Fig. 2. The predicted values of the drawn currents change and the corresponding shift

Test results from measurements performed with variation of the voltage deviation of phase A from -5% to +5% are presented in Fig. 5. The measured quantities for currents, power and rotor speed are referred to the rated values.
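The unbalance factor plotted in Fig. 3 is conventionally the ratio of the negative- to the positive-sequence voltage component. A short sketch via the method of symmetrical co-ordinates (cf. Fortescue, ref. [7]); phasors are per-unit complex numbers and the function name is illustrative:

```python
import cmath
import math

def unbalance_factor(u_a, u_b, u_c):
    """Voltage unbalance factor |U2|/|U1| from the three phase-voltage
    phasors, via Fortescue's symmetrical components."""
    a = cmath.exp(2j * math.pi / 3)            # 120-degree rotation operator
    u1 = (u_a + a * u_b + a * a * u_c) / 3     # positive-sequence component
    u2 = (u_a + a * a * u_b + a * u_c) / 3     # negative-sequence component
    return abs(u2) / abs(u1)
```

A single phase lowered to 0,88 of rated magnitude gives a factor of about 4,2 %, showing how a one-phase deviation maps into a roughly three times smaller unbalance factor.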

Georgi I. Ganev and George T. Todorov

Fig. 5. Average input current
Fig. 6. Over-loaded phase input current
Fig. 7. Average input active power
Fig. 8. Average input reactive power
Fig. 9. Average input apparent power
Fig. 10. Induction motor speed
Fig. 11. Induction motor efficiency
Fig. 12. Induction motor power factor

The characteristics have been taken at four levels of the applied load, and four curves have been drawn in each figure: at rated load (1.0), at 64% of the rated load (0.64), at 80% of the rated load (0.8) and at a fourth, lower load level. All characteristics have been drawn versus the voltage deviation of phase A, while the voltages of phases B and C remain equal to the rated value. The operation of the motor under unbalanced voltage supply can be analyzed by the aid of these characteristics.

The currents of the overloaded phase of the motor are shown in Fig. 6. When the supply voltage of phase A is over the rated value, the current I_A increases too. If the voltage of phase A decreases below the rated value, the induction motor draws a bigger current from the two other phases. Hence, in the case of under-voltage, the trouble-free phases (phases B and C) are overloaded. The currents of all three phases change not only in magnitude, but in phase too. This means that the active and reactive current components change, which causes the active and reactive power of the induction motor to vary respectively (see Fig. 7, Fig. 8 and Fig. 9). Due to the unbalanced supply voltage, the magnetic field of the motor is non-symmetrical. As a result, the input current and power are bigger and the efficiency is lower (Fig. 11), compared to the values at balanced supply voltage. In the one-phase over-voltage case, the non-symmetrical magnetic field causes saturation of some sections of the magnetic core and the power factor is reduced (see Fig. 8 and Fig. 12). Finally, a comparison between the predicted and measured values of the phase currents and of the current phase shift is given in Fig. 13.

Fig. 13. Predicted and measured values of the phase currents and of the current phase shift

IV. CONCLUSION
1. An approach for the prediction of the motor currents, when the motor is supplied by an unbalanced three-phase voltage with a deviation of one phase voltage, is proposed. The predicted values are close to the measured ones. The differences could be caused by the small power of the motor under test.

2. It has been found out that, at the same voltage unbalance values, over-voltages and under-voltages cause different effects on the induction motor performance. This means that the assessment of the unbalance behavior has to take into account the direction of the voltage variation.

3. The influence of the supply voltage unbalance on the squirrel cage induction motor is as follows: if the voltage decreases, the average input currents increase, the average input active and apparent power increase too, and the rotor speed decreases; the efficiency and power factor slightly decrease. In the studied case, the current increase reaches a high value, 1,2·I_r, with only 3% unbalanced voltage. If the voltage increases over the rated value, the reactive power increases, which determines an increase in the apparent power; the rotor speed remains nearly constant and the power factor and efficiency decrease slightly.

4. The supply voltage unbalance makes the squirrel cage induction motor performance worse: the drawn current increases and the efficiency decreases, which means an increase in the exploitation expenses.

5. Further experimental investigations with induction motors of bigger rated power will be made for verification of the proposed method.

REFERENCES
[1] Ангелов А., Д. Димитров, Електрически машини, ч., Техника, София, 1976
[2] Динов В., Несиметрични режими и преходни процеси в електрическите машини, Техника, София, 1974
[3] Иванов-Смоленский, Электрические машины, Энергия, Москва, 1980.
[4] Сыромятников И.А., Режимы работы асинхронных и синхронных двигателей, Энергоатомиздат, Москва, 1984
[5] Boucherma M., M.Y. Kaikaa, A. Khezzar, Park Model of Squirrel Cage Induction Machine Including Space Harmonics Effects, Journal of EE, vol. 57, no. 4, 2006.
[6] Equiluz L.I., Lavandero P., Manana M., Performance Analysis of a Three-phase Induction Motor under Non-sinusoidal and Unbalanced Conditions.
[7] Fortescue C.L., Method of Symmetrical Co-ordinates Applied to the Solution of Polyphase Networks, 34th Annual Conv. of A.I.E.E., Atlantic City, N.J., June 1918.
[8] Lee R., P. Pillay, R. Harley, D, Q Reference Frames for the Simulation of Induction Motors, Electric Power Systems Research, 8 (1984/85), pp. 15-26.
[9] McPherson G., R.D. Laramore, An Introduction to Electrical Machines and Transformers, J. Wiley, 1990.
[10] Park R.H., Two-reaction Theory of Synchronous Machines - Generalized Method of Analysis, part I, Winter Convention of the AIEE, New York, NY, Jan. 28 - Feb. 1, 1929
[11] Pillay P., P. Hoffman, M. Manyage, Derating of Induction Motors Operating with a Combination of Unbalanced Voltages and Over- and Under-voltages, IEEE Trans. on Energy Conversion, vol. 17, no. 4, Dec. 2002.
[12] Quispe E., G. Gonzales, J. Aguado, Influence of Unbalanced and Waveform Voltage on the Performance Characteristics of the Three-phase Induction Motors.
[13] Tamimi J., H. Jaddu, Optimal Vector Control of Three-phase Induction Machine, Proc. of the 25th IASTED, 2006, pp. 9-96
[14] Todorov G., G. Ganev, Influence of the Non-symmetrical Three-phase Loads on the Transformer and Supply Grid, ICEST 2005, Niš, Serbia and Montenegro.

Monitoring of the Electric Energy Quality in the Electricity Supply

Tzancho B. Tzanev, Svetlana G. Tzvetkova and Valentin G. Kolev

Abstract - Ensuring and supporting the electric energy quality is a basic duty of the electricity supply companies. The consumers have every right to require and to get qualitative electric energy. However, they also have the obligation not to worsen, through their own consumers and operating regimes, the indexes of the electric energy quality in the electricity supply systems. Results from monitoring of the electric energy quality in a low voltage distribution installation on a kiosk switchgear supplying an administrative building are given in the paper.

Keywords - monitoring, electric energy, quality, low voltage, distribution installation

I. INTRODUCTION

In the last years a tendency has been observed of a growing number and capacity of the consumers that worsen the electric energy quality, but also of those that place great requirements on the quality. The combination of such characteristics of the supplying system, under which the consumers of electric energy can execute the functions expected of them, is defined by the general term electric energy quality [1]. Often the concept of electric energy quality is used to describe the specific characteristics of the supply voltage. The electric energy quality has two basic components - continuity and voltage level.

In 1999, the European organization for standardization in the electromagnetic field approved the European standard EN 50160 "Voltage characteristics of electricity supplied by public distribution systems", which reflects the theoretical level, the level of the measuring techniques and the exploitation practice in the electric energy quality field during the past years.
Since March 2006 this standard has been introduced as the Bulgarian standard BSS EN 50160 and has completely replaced the former Bulgarian standard. The standard BSS EN 50160 includes the following basic indexes of the electric energy quality [2]: frequency deviation; voltage deviation; fast voltage fluctuations; flicker; unbalance; harmonics; interharmonics; voltage dips; transient overvoltages; short-time and long-time interruptions.

Tzancho B. Tzanev is with the Faculty of Electrical Engineering, Technical University - Sofia, Kliment Ohridski 8, Sofia, Bulgaria. Svetlana G. Tzvetkova is with the Faculty of Electrical Engineering, Technical University - Sofia, Kliment Ohridski 8, Sofia, Bulgaria. Valentin G. Kolev is with the Faculty of Electrical Engineering, Technical University - Sofia, Kliment Ohridski 8, Sofia, Bulgaria.

The norms of the electric power quality indexes for low and medium voltage electric distribution networks are given in Table I according to BSS EN 50160 [2, 3]. The electric energy quality in the electricity supply system is formed in the joint operation of various electrical installations and equipment, which influence the electric energy indexes in different ways. This necessitates a detailed audit and estimation of the energy installations and the introduction of contemporary systems for monitoring and control of the electric energy quality during exploitation. In this way an energy efficient exploitation of the electrical installations will be ensured. Results from the monitoring of the electric energy quality in a low voltage distribution installation on a kiosk switchgear supplying an administrative building are given in the paper.

II. MONITORING OF THE ELECTRIC ENERGY QUALITY IN A LOW VOLTAGE DISTRIBUTION INSTALLATION

To make a complex assessment of the electric energy quality and of its influence on the operation of the electrical installations in a given industrial enterprise or administrative building, it is necessary to carry out investigations at different points of the electricity supply system.
The determination of the electric energy quality indexes and of their influence on the operation of the electrical installations may be done in the following ways:
- Theoretically - this is possible only if all data for the elements of the electrical installation are known. The method is very labour-consuming and inaccurate;
- By simulation on a computer model - this is also possible only if all data for the elements of the electrical installation are known. This method is used very often. It is more accurate in comparison with the theoretical method;
- By measurements with special instruments.

Monitoring of the electric energy quality indexes can be made by using contemporary fixed or portable measuring instruments. The basic requirements which the measuring instruments have to meet are the following [4]:
- To measure the electric energy quality indexes according to BSS EN 50160;
- To have high accuracy and a possibility for data registration in real time;
- To have a self-contained power supply;
- To allow data transfer by modem, optical port or computer;
- To have software that allows data processing according to BSS EN 50160;
- To allow the time for averaging of the measured values to be set by the operator;

TABLE I
NORMS OF THE ELECTRIC ENERGY QUALITY INDEXES FOR LOW AND MEDIUM VOLTAGE ELECTRICAL DISTRIBUTION NETWORKS ACCORDING TO BSS EN 50160

Frequency - Low and medium voltage networks: 49,5-50,5 Hz (for 99,5% of the year period) or 47-52 Hz (whole year).

Voltage deviation - Low voltage networks: U_N ±10% (for every one-week period, 95% of the average effective voltage values per 10 min); U_N +10%/-15% (for every one-week period, all average effective voltage values per 10 min). Medium voltage networks: U_N ±10% (for every one-week period, 95% of the average effective voltage values per 10 min).

Fast voltage fluctuations - Low voltage networks: less than 5% U_N; fluctuations up to 10% U_N with short duration may occur a few times per day in some conditions. Medium voltage networks: less than 4% U_N; fluctuations up to 6% U_N with short duration may occur a few times per day in some conditions. Flicker (both): P_lt ≤ 1 (for 95% of a one-week period).

Voltage unbalance - Low and medium voltage networks: 95% of the average effective values per 10 min of the negative-sequence voltage component must be within the limits from 0 to 2% of the positive-sequence component for every one-week period. In some power network areas values up to 3% U_N may occur.

Harmonics - Low and medium voltage networks: 95% of the average effective values per 10 min of each harmonic voltage for every one-week period must be: U_3 ≤ 5%, U_5 ≤ 6%, U_7 ≤ 5%, U_11 ≤ 3.5%, U_13 ≤ 3%; total harmonic distortion ≤ 8%.

Voltage dips - The expected number may be from a few scores to one thousand per one-year period.

Short-time interruptions - Values: from a few scores to several hundred per year.

Long-time interruptions - Values (interruptions over 3 min): annual frequency from 10 to 50, depending on the area.

- The software has to work under WINDOWS 9X, XP and to allow data collection, statistical data processing, and data representation by graphics and tables;
- To be convenient for transport and maintenance;
- To meet all safety requirements;
- To have an acceptable price.

The experience shows that the determination of the electric energy quality indexes by measurement with special instruments is the most accurate and fastest method. For this aim, an investigation of the electric energy quality in a low voltage distribution installation has been done. It is supplied from a kiosk switchgear kVA, /0,4 kV. The distribution installation supplies an administrative building in which the electric energy consumers are mainly electrical heaters for heating (about 90%) and computers (about 10%).

The measurements were done by a special instrument for analysis of the electric energy quality indexes - Power Quality Analyser MI9, produced by the company Metrel. The instrument allows measuring of all electrical quantities and indexes of the electric energy quality according to BSS EN 50160. They are the following [4]:
- Phase values of the voltage (U_rms);
- Values of the line voltage (U_xx);
- Frequency;
- Phase values of the current (I_rms);
- Value of the current in the neutral conductor (I_null);
- Total value of the current;
- Active power (P);
- Reactive power (Q);
- Apparent power (S);
- Power factor (Pf);
- Total values of the active, reactive and apparent power for Ph1...
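The norms in Table I lend themselves to an automated check of measured indexes. A hypothetical helper (the dictionary keys and the function name are illustrative; the limit values are the Table I figures for low voltage networks):

```python
# Hypothetical helper: checks a few measured indexes against the
# BSS EN 50160 norms for low voltage networks quoted in Table I.
EN50160_LV = {
    "voltage_deviation_pct": 10.0,   # +/-10 % of U_N (95 % of 10-min means)
    "unbalance_pct": 2.0,            # negative/positive sequence ratio
    "thd_pct": 8.0,                  # total harmonic distortion of voltage
    "plt": 1.0,                      # long-term flicker severity
}

def check_lv_quality(measured):
    """Return the names of indexes whose measured value exceeds the
    EN 50160 low-voltage limit; missing indexes are treated as zero."""
    return [name for name, limit in EN50160_LV.items()
            if abs(measured.get(name, 0.0)) > limit]
```

With the measurements reported below (+3,9 % voltage deviation, 0,8 % unbalance) the list of violations is empty, matching the paper's conclusion that the indexes are within the norms.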
Ph3;
- Total value of the power factor;
- Active energy (consumption/generation);
- Reactive energy (inductive/capacitive);
- Voltage and current harmonics (up to the 63rd harmonic);
- Voltage and current interharmonics;
- Voltage and current total harmonic distortion;
- Voltage deviation;
- Flicker;
- Determination of the voltage dips and peaks;
- Determination of the voltage interruptions;
- Determination of transient processes;
- Level of pulsations;
- Unbalance.
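The totals in the instrument's list follow from the per-phase measurements in the usual way; a minimal sketch (an illustrative helper, not the instrument's internal algorithm):

```python
import math

def three_phase_powers(p_phases, q_phases):
    """Total active, reactive and apparent power and the resulting power
    factor from per-phase active (P) and reactive (Q) measurements."""
    p = sum(p_phases)
    q = sum(q_phases)
    s = math.hypot(p, q)            # apparent power S = sqrt(P^2 + Q^2)
    return p, q, s, (p / s if s else 1.0)
```

For example, three phases of 100 W each with a total of 400 var reactive power give S = 500 VA and a power factor of 0.6.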

Fig. 1. Currents and voltages in the three phases and a table with data for the basic electrical quantities that characterize the load and the voltage in the low voltage distribution installation on the kiosk switchgear supplying the administrative building

Power Quality Analyzer MI9 can be used in 3-phase networks with or without a neutral conductor. It has voltage and current inputs. Current clamps with different sensitivity can be connected to the current inputs. The data are measured and recorded in a 48 KB SRAM memory. The measured and recorded data are accessible for reading through the RS232 communication port by software working under WINDOWS.

The graphics of the currents and voltages in the three phases and the table with data for the basic electrical quantities that characterize the load and the voltage in the low voltage distribution installation are given in Fig. 1. It can be seen that the load is mainly ohmic; the quite small capacitive component, due to the pulse power supply units of the computers and monitors, makes the load appear ohmic-capacitive. The maximum voltage deviation is +3,9%, which is less than the norm of ±10%. The frequency deviation is -, Hz. Therefore, the voltage deviation and the frequency deviation are within the admissible norms. The voltage unbalance is insignificant - around 0,8%, which is less than the norm of 2%. The load of the phases is very unequal: most loaded is phase 1, while phases 2 and 3 are rather lighter loaded. The current unbalance is 49%. Therefore, a redistribution of the loads among the phases must be done.

It is necessary to draw attention to the fact that the voltage fluctuations and the frequency fluctuations are within very narrow limits. For this reason we cannot speak of any fluctuations of these two indexes.
This is due to the fact that the investigated distribution installation supplies only two lifts and there are no other more powerful installations with a fast-changing operating mode.

From the graphics given in Fig. 1 it can be seen that the voltage sinusoid is better than the current sinusoid. The voltage and current harmonics for each phase are shown in Fig. 2. The odd harmonics predominate. For the voltage these are the 3rd, 5th and 11th harmonics; their values are much lower than the admissible values given in Table I. Among the currents, the 3rd, 5th, 7th and 9th harmonics predominate. The third current harmonic has the maximum value - 3,5% in phase 1, 4,5% in phase 2 and 7% in phase 3. The admissible magnitudes of the 3rd, 5th, 7th and 9th current harmonics according to IEC are respectively 3%, %, 9% and 5% [5]. Hence, the magnitudes of the measured current harmonics are lower than the admissible values.

The voltage total harmonic distortion is ,4% for phase 1, ,% for phase 2 and ,% for phase 3. These values are far below the admissible value of 8% for low voltage networks given in Table I. The current total harmonic distortion is 6,% for phase 1, 7,% for phase 2 and ,57% for phase 3. These values are less than the admissible value of 5% [5].
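The total harmonic distortion values quoted above combine the individual harmonic magnitudes by the standard definition THD = sqrt(Σ(U_h/U_1)²); a minimal sketch (the function name is illustrative):

```python
import math

def thd_percent(harmonics_pct):
    """Total harmonic distortion in percent, from individual harmonic
    magnitudes expressed in percent of the fundamental."""
    return math.sqrt(sum((h / 100.0) ** 2 for h in harmonics_pct)) * 100.0
```

For a quick 3-4-5 check, thd_percent([3.0, 4.0]) gives 5.0.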

Fig. 2. Voltage and current harmonics measured in the low voltage distribution installation on the kiosk switchgear supplying the administrative building

IV. CONCLUSION

Ensuring and supporting the electric energy quality is a basic duty of the electricity supply companies. On the other hand, the consumers have the right to require and receive qualitative electric energy. However, they also have the obligation not to worsen, through their own consumers and operating regimes, the indexes of the electric energy quality in the electricity supply systems.

The measurements that were done in the low voltage distribution installation on the kiosk switchgear supplying the administrative building show that the measured electric energy quality indexes are within the norms. The electric energy cannot influence negatively the normal operation of the consumers.

REFERENCES
[1] Schlabbach J., D. Blume and T. Stephanblome, Voltage Quality in Electrical Power Systems, UK, 2001.
[2] BSS EN 50160 Voltage characteristics of electricity supplied by public distribution systems, 2006.
[3] Indexes for Electric Supplying Quality, State Energy and Water Regulatory Commission, 2004.
[4] Tzanev T., S. Tzvetkova, V. Kolev, B. Tzaneva, V. Tzvetkova, Analysis of the possibilities of instruments for electric energy quality measuring and assessment, Energy Forum 2006, Varna, pp. 4-3, 2006.
[5] IEC, Limits for harmonic currents for equipment with input current > 16 A.

Algorithm for Efficiency Optimization of the Induction Motor Based on Loss Model and Torque Reserve Control

Branko Blanuša, Petar Matić, Željko Ivanović and Slobodan N. Vukosavić

Abstract - A new algorithm for efficiency optimization of the induction motor drive, based on a loss model and torque reserve control, is presented in this paper. As a result, the power and energy losses are reduced, especially when the load torque is significantly lower than its nominal value. The algorithm can be used in high performance drives and presents a good compromise between power loss reduction and good dynamic characteristics. Simulation and experimental tests have been performed.

Key words - Efficiency Optimization, Induction Motor Drive, Loss Model, Parameter Identification, Torque Reserve

I. INTRODUCTION

The induction motor is without doubt the most used electrical motor and a great energy consumer [1]. Three-phase induction motors consume 60% of industrial electricity, and it takes considerable efforts to improve their efficiency [2]. Most of the motors operate at constant speed, although the market for variable speed drives is expanding. Moreover, the induction motor drive (IMD) is often used in servo drive applications. Vector control (VC) and Direct Torque Control (DTC) are the most often used control techniques in high performance applications.

There are numerous published papers which treat the problem of efficiency optimization of the IMD. Although good results have been achieved, there is no generally accepted method. Three strategies are usually used in the efficiency optimization of the induction motor drive [2]:
- Simple State Control (SSC);
- Loss Model Control (LMC) and
- Search Control (SC).

The first strategy is based on the control of one variable in the drive. This variable must be measured or estimated, and its value is used in the feedback control to keep it at a predefined reference value.
This strategy is simple, but it gives good results only for a narrow set of working conditions. Also, it is sensitive to parameter changes caused by temperature and saturation.

In the second strategy, a model of the power losses is used for the optimal control of the drive. This is the fastest strategy, because the optimal control is calculated directly from the loss model. However, power loss modeling and the calculation of the optimal operating point can be very complex. Also, this strategy is sensitive to parameter variations.

In the search strategy, an on-line efficiency optimization control on the basis of search is implemented. The optimization variable, the stator or rotor flux, is decremented or incremented in steps until the measured input power settles down to its lowest value. This strategy has an important advantage compared to the others: it is completely insensitive to parameter changes. The control does not require knowledge of the motor parameters, and the algorithm is applicable universally to any motor. Besides all the good characteristics of the search strategy methods, there is an outstanding problem in their use: the flux never reaches its nominal value, but oscillates around it in small steps. Sometimes the convergence to the optimal value can be too slow.

A very interesting problem for any optimization algorithm is its operation with a low flux level at light load. When the load is low, the optimization algorithm settles the magnetization flux down to balance the iron and copper losses and reduce the total power losses. In this case the drive is very sensitive to load perturbations.

An LMC algorithm with on-line parameter identification in the loss model and torque reserve control, implemented for an indirect vector controlled IMD, is proposed in this paper.

Branko Blanuša is with the Faculty of Electrical Engineering, Patre 5, 78000 Banja Luka, Bosnia and Herzegovina. Slobodan N. Vukosavić is with the Faculty of Electrical Engineering, Bulevar Kralja Aleksandra 73, Beograd, Serbia.
Parameter identification is based on matrix calculation and Moore-Penrose pseudoinversion. The input power, the output power and the values of the variables in the loss model must be known. The torque reserve is determined from the reference flux calculated from the loss model and from the current and voltage constraints of the machine. The algorithm for efficiency optimization is included in the model of the IMD, and both simulation and experimental studies have been performed to validate the theoretical development.

The functional approximation of the power losses in the induction motor drive is given in the second Section. The procedure of parameter identification in the loss model and the calculation of the optimal magnetization current are described in the third Section. The experimental results are presented in the fourth Section.

II. FUNCTIONAL APPROXIMATION OF THE POWER LOSSES IN THE INDUCTION MOTOR DRIVE

The process of energy conversion within the motor drive leads to power losses in the motor windings and magnetic circuit, as well as to conduction and commutation losses in the inverter.

Converter losses: The main constituents of the converter losses are the rectifier, DC link and inverter conductive losses and the inverter commutation losses. The rectifier and DC link losses are proportional to the output power, so the overall flux-dependent losses are the inverter losses. These are usually given by:

P_INV = R_INV·i_s² = R_INV·(i_d² + i_q²) ,   (1)

where i_d, i_q are the components of the stator current i_s in the d,q rotational system and R_INV is the inverter loss coefficient.

Motor losses: These losses consist of hysteresis and eddy current losses in the magnetic circuit (core losses), losses in the stator and rotor conductors (copper losses) and stray losses. At the nominal operating point, the core losses are typically 2-3 times

smaller than the copper losses, but they represent the main loss component of a highly loaded induction motor drive [3]. The main core losses can be modeled by [4]:

P_Fe = c_h·ψ_m²·ω_e + c_e·ψ_m²·ω_e² ,   (2)

where ψ_m is the magnetizing flux, ω_e the supply frequency, c_h the hysteresis and c_e the eddy current core loss coefficient. The copper losses are due to the flow of the electric current through the stator and rotor windings, and they are given by:

P_Cu = R_s·i_s² + R_r·i_q² .   (3)

The stray flux losses depend on the form of the stator and rotor slots and are frequency and load dependent. The total secondary losses (stray flux, skin effect and shaft stray losses) usually do not exceed 5% of the overall losses [3]. The formal omission of the stray loss representation in the loss function has no impact on the accuracy of the algorithm for on-line optimization. Based on the previous considerations, the total flux-dependent power losses in the drive are given by the following equation:

P_γ = (R_INV + R_s)·i_d² + (R_INV + R_s + R_r)·i_q² + c_e·ω_e²·ψ_m² + c_h·ω_e·ψ_m² .   (4)

The efficiency algorithm works so that the flux in the machine is less than or equal to its nominal value:

ψ_D ≤ ψ_Dn ,   (5)

where ψ_Dn is the nominal value of the rotor flux. So the linear expression for the rotor flux can be accepted:

dψ_D/dt = (R_r/L_r)·L_m·i_d − (R_r/L_r)·ψ_D ,   (6)

where ψ_D = L_m·i_d in a steady state. The expression for the output power can be given as:

P_out = d·ω_r·ψ_D·i_q ,   (7)

where d is a positive constant, ω_r the angular speed, ψ_D the rotor flux and i_q the active component of the stator current. Based on the previous considerations, the assumption that the position of the rotor flux is correctly calculated (ψ_Q = 0) and the relation P_in = P_γ + P_out, the input power can be given by the following equation:

P_in = a·i_d² + b·i_q² + c_1·ω_e²·ψ_D² + c_2·ω_e·ψ_D² + d·ω_r·ψ_D·i_q ,   (8)

where a = R_s + R_INV, b = R_s + R_INV + R_r, c_1 = c_e and c_2 = c_h.
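Eq. (4) can be evaluated directly; a small sketch (the function name and the numeric values in the check below are illustrative, not the tested motor's parameters):

```python
def flux_dependent_losses(i_d, i_q, psi_m, w_e, R_inv, R_s, R_r, c_e, c_h):
    """Total flux-dependent drive losses per Eq. (4): inverter and
    copper losses plus eddy-current and hysteresis core losses."""
    copper_inverter = (R_inv + R_s) * i_d**2 + (R_inv + R_s + R_r) * i_q**2
    core = c_e * (w_e * psi_m)**2 + c_h * w_e * psi_m**2
    return copper_inverter + core
```

For example, with i_d = 1, i_q = 2, ψ_m = 0.5, ω_e = 100 and coefficients (0.1, 0.2, 0.3, 1e-4, 1e-3), the losses evaluate to about 2.975.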
The input power should be measured, and the exact P_out is needed in order to acquire the correct power losses and to avoid coupling between the load pulsations and the efficiency optimizer.

III. DETERMINATION OF THE PARAMETERS IN THE LOSS MODEL AND DERIVATION OF THE OPTIMAL MAGNETIZATION CURRENT

The procedure of parameter determination in the loss model is shown in Fig. 1. There is a modification of the procedure described in paper [3]: the iron losses are considered separately, as hysteresis losses and eddy current losses. The inputs to the algorithm are samples of i_d², i_q², (ω_e·ψ_D)², ω_e·ψ_D², i_q·ψ_D·ω_r and P_in, and they are acquired every sample time. As the high frequency components do not contribute to the identification of W = [a b c_1 c_2 d]^T, the input signals and P_in are averaged within intervals T = Q·T_S. The averaging is implemented as the sum of Q consecutive samples of each signal (Fig. 1). The column vectors P(:,1), P(:,2), P(:,3), P(:,4) and P(:,5) of the matrix P_{M×5} are created from M successive values of A_N, B_N, C_1N, C_2N and D_N, N = 1,...,M, and the vector Y is formed from the M averaged values of the input power:

∫[nT,(n+1)T] P_IN(t)·dt = a·∫[nT,(n+1)T] i_d²(t)·dt + b·∫[nT,(n+1)T] i_q²(t)·dt + c_1·∫[nT,(n+1)T] [ψ_D(t)·ω_e(t)]²·dt + c_2·∫[nT,(n+1)T] ψ_D²(t)·ω_e(t)·dt + d·∫[nT,(n+1)T] i_q(t)·ψ_D(t)·ω_r(t)·dt ,

Y_N = a·A_N + b·B_N + c_1·C_1N + c_2·C_2N + d·D_N .   (9)

The calculation of the vector W_g is based on the Moore-Penrose pseudoinverse of the rectangular matrix P_{M×5} [3]:

W_g = [a_g b_g c_1g c_2g d_g]^T = (P^T·P)⁻¹·P^T·Y ,   (10)

and W_g is the approximate solution of the matrix equation P·W = Y such that the value of ||P·W − Y|| is minimal. A new vector W_g is usually calculated every 0.5-2 s. The choice of Q is essential for the correct parameter identification. The credibility of W_g relies on the excitation energy contained in the input signals. Hence, in the absence of any disturbances, the matrix P^T·P becomes nearly or exactly singular, and the values obtained from P should be discarded.
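Eq. (10) is an ordinary least-squares solution. A self-contained pure-Python sketch via the normal equations (a real implementation would also guard against the near-singular P^T·P case discussed above; the names are illustrative):

```python
def lstsq_params(P, Y):
    """Solve W = (P^T P)^{-1} P^T Y (Eq. (10)) for the loss-model
    parameter vector W = [a, b, c1, c2, d]^T via the normal equations
    and Gaussian elimination with partial pivoting."""
    n = len(P[0])
    # A = P^T P  (n x n),  g = P^T Y  (length n)
    A = [[sum(row[i] * row[j] for row in P) for j in range(n)]
         for i in range(n)]
    g = [sum(row[i] * y for row, y in zip(P, Y)) for i in range(n)]
    # forward elimination
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        g[col], g[piv] = g[piv], g[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            g[r] -= f * g[col]
    # back substitution
    W = [0.0] * n
    for r in range(n - 1, -1, -1):
        W[r] = (g[r] - sum(A[r][c] * W[c] for c in range(r + 1, n))) / A[r][r]
    return W
```

Given synthetic averaged rows and a known parameter vector, the routine recovers the parameters to within numerical precision.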
In that case the values of the parameters are not changed and the parameter determination is continued.

For known operational conditions of the induction motor (ω_r and T_em) and known parameters in the loss model, it is possible to calculate the current i_d which gives the minimum of the power losses [4]. Based on expression (4), the power losses can be expressed in terms of i_d, T_em and ω_e as follows:

P_γ(i_d, T_em, ω_e) = (a + c_1·L_m²·ω_e² + c_2·L_m²·ω_e)·i_d² + b·T_em²/(d·L_m·i_d)² .   (11)

Assuming the absence of saturation and specifying the slip frequency

ω_s = ω_e − ω_r = i_q/(T_r·i_d) ,   (12)

the power loss function can be expressed as a function of the current i_d and the operational conditions (ω_r, T_em):

P_γ(i_d, T_em, ω_r) = (a + c_1·L_m²·ω_r² + c_2·L_m²·ω_r)·i_d² + (2·c_1·ω_r + c_2)·L_m·T_em/(d·T_r) + [c_1·T_em²/(d·T_r)² + b·T_em²/(d·L_m)²]·(1/i_d²) .   (13)

Based on equation (13), it is obvious that the steady-state optimum is readily found from the loss function parameters and the operating conditions. Substituting

α = a + c_1·L_m²·ω_r² + c_2·L_m²·ω_r  and  γ = c_1·T_em²/(d·T_r)² + b·T_em²/(d·L_m)² ,

the value of the current i_d which gives minimal losses is:

i_dopt = (γ/α)^0.25 .   (14)
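Eq. (14) follows from minimizing Eq. (13) over i_d (only the α·i_d² and γ/i_d² terms depend on i_d). A sketch with both the optimum and the loss function, so the minimum can be checked against neighbouring points (parameter values in the check are arbitrary, not the tested motor's):

```python
def optimal_i_d(a, b, c1, c2, d, L_m, T_r, w_r, T_em):
    """Optimal magnetizing current from Eq. (14): (gamma/alpha)^0.25."""
    alpha = a + c1 * (L_m * w_r)**2 + c2 * L_m**2 * w_r
    gamma = c1 * (T_em / (d * T_r))**2 + b * (T_em / (d * L_m))**2
    return (gamma / alpha) ** 0.25

def losses_13(i_d, a, b, c1, c2, d, L_m, T_r, w_r, T_em):
    """Power losses as a function of i_d, Eq. (13)."""
    alpha = a + c1 * (L_m * w_r)**2 + c2 * L_m**2 * w_r
    const = (2 * c1 * w_r + c2) * L_m * T_em / (d * T_r)
    gamma = c1 * (T_em / (d * T_r))**2 + b * (T_em / (d * L_m))**2
    return alpha * i_d**2 + const + gamma / i_d**2
```

Any deviation of i_d from the value returned by optimal_i_d increases losses_13, which is a quick sanity check of the closed-form optimum.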

Fig. 1. Determination of the parameters in the loss model from the input signals

The presented method is loss model based, so it is fast [5]. The optimal value of the magnetizing current is directly calculated from the model. An on-line procedure of parameter identification is applied, so the method is robust to parameter variations. One of the greatest problems of LMC methods is their sensitivity to load perturbations, especially for light loads when the flux level is low. This is pronounced for a step increase of the load torque, when two significant problems appear:

1. The flux is far from its required value during the transient process, so the transient losses are big.
2. The insufficiency of the electromagnetic torque makes the output speed converge slowly to its reference value, with significant speed drops. Oscillations in the speed response also appear.

These are common problems of efficiency optimization methods based on adjusting the flux to the load torque. The speed response to a step change of the load torque (from 0.5 p.u. to 1.0 p.u.), for nominal flux and when the LMC method is applied, is presented in Fig. 2. The speed drops and the slow speed convergence to the reference value are more exposed for the LMC method. These are the reasons why torque reserve control in the LMC method for efficiency optimization is necessary.

Fig. 2. Speed response to the step load increase for nominal flux and when LMC is applied

The model of the efficiency optimization controller with torque reserve control is presented in Fig. 3. The optimal value of the magnetization current is calculated from the loss model for the given operational conditions, Eq. (14). The increment of the magnetizing current (Δi_d) is generated from the fuzzy rules through fuzzy inference and defuzzification, on the basis of the previously determined torque reserve (ΔT_em). A fuzzy logic controller is used in the determination of Δi_d. The controller is very simple: there is one input, one output and 3 rules. Only 3 membership functions are enough to describe the influence of the torque reserve in the generation of i_dopt. If the torque reserve is sufficient, then Δi_d = 0 and this block has no effect on the determination of i_dopt. Otherwise, the current i_d (magnetization flux) is increased to obtain a sufficient reserve of electromagnetic torque.

Fig. 3. Block for efficiency optimization with torque reserve control
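The torque-reserve correction block of Fig. 3 can be caricatured by a piecewise-linear stand-in: zero correction while the reserve is sufficient, a proportional increase of i_d otherwise. The paper uses a 3-rule fuzzy controller; the threshold and gain below are invented for illustration only:

```python
def delta_i_d(torque_reserve, reserve_min=0.1, gain=0.5):
    """Piecewise-linear stand-in for the fuzzy torque-reserve block:
    returns the magnetizing-current increment.  If the available torque
    reserve exceeds reserve_min the correction is zero; otherwise i_d is
    increased in proportion to the deficit.  reserve_min and gain are
    illustrative values, not taken from the paper."""
    deficit = reserve_min - torque_reserve
    return gain * deficit if deficit > 0 else 0.0
```

With a comfortable reserve the block is inactive, so the loss-model optimum is used unchanged; as the reserve vanishes, the block raises the flux at the expense of a small efficiency penalty.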
The optimal value of the magnetizing current is calculated from the loss model for the given operational conditions (Eq. (4)). The increment of the magnetizing current Δi_d is generated from the fuzzy rules through fuzzy inference and defuzzification, on the basis of the previously determined torque reserve ΔT_em. A fuzzy logic controller is used in the determination of Δi_d. The controller is very simple: it has one input, one output and 3 rules, and only 3 membership functions are enough to describe the influence of the torque reserve in the generation of i_dopt. If the torque reserve is sufficient, Δi_d = 0 and this block has no effect on the determination of i_dopt. Otherwise, the current i_d (i.e. the magnetization flux) is increased to obtain a sufficient reserve of electromagnetic torque.

Fig. 2. Speed response on the step load increase for nominal flux and when LMC is applied.

Fig. 3. Block for efficiency optimization with torque reserve control (the torque reserve ΔT*_em(n) is determined from ω*_e(n), i*_q(n), i*_d(n), T*_e(n) and Ψ_D(n−1), subject to the stator current constraint i*_d² + i*_q² ≤ I_s² and the voltage constraint (ω*_e L_s i*_d)² + (ω*_e L_γs i*_q)² ≤ V_s²; the fuzzy controller with scaling factors a and b then produces Δi_d, which corrects the optimal current i_dopt calculated from the loss model).
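The one-input/one-output controller with three membership functions and three rules can be sketched as below. The triangular membership shapes, the singleton rule consequents and the values of the scaling factors a and b are illustrative assumptions, not the tuned values from the paper.

```python
def tri(x, left, peak, right):
    """Triangular membership function."""
    if x <= left or x >= right:
        return 0.0
    return (x - left) / (peak - left) if x <= peak else (right - x) / (right - peak)

def delta_id(torque_reserve, a=1.0, b=0.1):
    """Fuzzy increment of i_d from the normalized torque reserve.

    Three rules (singleton consequents, centroid defuzzification):
      reserve SMALL  -> large increment
      reserve MEDIUM -> small increment
      reserve LARGE  -> zero increment
    """
    x = a * torque_reserve                      # input scaling factor a
    mu = [tri(x, -0.5, 0.0, 0.5),               # SMALL
          tri(x, 0.0, 0.5, 1.0),                # MEDIUM
          tri(x, 0.5, 1.0, 1.5)]                # LARGE
    out = [1.0, 0.3, 0.0]                       # singleton consequents
    s = sum(mu)
    return b * sum(m * o for m, o in zip(mu, out)) / s if s else 0.0

# A sufficient reserve leaves i_dopt untouched; a small reserve raises it.
assert delta_id(1.0) == 0.0
assert delta_id(0.0) > delta_id(0.5) > 0.0
```

The output scaling factor b directly sets how strongly the torque reserve overrides the loss-optimal flux, which is the compromise between loss reduction and dynamic response discussed in the paper.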

Algorithm for Efficiency Optimization of the Induction Motor Based on Loss Model and Torque Reserve Control

Two scaling factors are used in the efficiency controller [6]. Factor a is used for normalization of the input variable, so the same controller can be used for machines of different power ranges. Factor b is the output scaling factor; it is used to adjust the influence of the torque reserve in the determination of i_dopt and to obtain the requested compromise between power loss reduction and good dynamic response.

IV. EXPERIMENTAL RESULTS

Experimental tests were performed on the Laboratory Station for Vector Control of Induction Motor Drives - Vectra. The basic parts of the Laboratory Station Vectra are:
- an induction motor (3 MOT, Δ 380 V / Y … V, 3.7/… A, cos φ = 0.7, … r/min, 50 Hz),
- an incremental encoder connected to the motor shaft,
- a three-phase drive converter (DC/AC converter and DC link),
- a PC and a dSPACE controller board with a TMS320C3x floating-point processor and peripherals,
- an interface between the controller board and the drive converter.
Control and acquisition functions, as well as signal processing, are executed on this board, while the PC provides a comfortable user interface. The algorithms observed in this paper are realized in software using Matlab Simulink, C and the real-time interface for the dSPACE hardware. Real-time applications are handled in ControlDesk.

Power losses and the speed response of the motor drive with and without the algorithm for torque reserve control are presented in Figs. 4 and 5, respectively. The load torque steps from 0.5 p.u. to 1.0 p.u. at t = 5 s and back again later, at a constant reference speed ω_ref = 1.0 p.u.

Fig. 5. Mechanical speed for a step change of load torque, with and without torque reserve control.

V. CONCLUSION

By implementation of the LMC method with torque reserve control, the following results are reached:
1. Less sensitivity to load perturbation compared to standard LMC methods without torque reserve control.
2. Better control characteristics.
3. Lower transient losses.
The algorithm with torque reserve control gives negligibly higher losses in steady state than standard LMC methods.

Fig. 4. Power losses for a step change of load torque, for LMC with and without torque reserve control.

REFERENCES

[1] S. N. Vukosavić, "Controlled Electrical Drives - Status of Technology", Proceedings of XLII ETRAN Conf., 1998.
[2] F. Abrahamsen, F. Blaabjerg, J. K. Pedersen, P. B. Thogersen, "Efficiency Optimized Control of Medium-Size Induction Motor Drives", Industry Applications Conference, Vol. 3, 2000.
[3] S. N. Vukosavić, E. Levi, "Robust DSP-based Efficiency Optimization of a Variable Speed Induction Motor Drive", IEEE Transactions on Industrial Electronics, Vol. 50, No. 3, 2003.
[4] W. A. Roshen, "Magnetic Losses for Non-Sinusoidal Waveforms Found in AC Motors", IEEE Trans. on Power Electronics, Vol. 21, No. 4, 2006.
[5] C. Chakraborty, "Fast Efficiency Optimization Techniques for the Indirect Vector-Controlled Induction Motor Drives", IEEE Trans. on Industry Applications, Vol. 39, No. 4, pp. 1070-1076, 2003.
[6] E. Cerruto, A. Consoli, A. Testa, "Fuzzy Adaptive Vector Control of Induction Motor Drives", IEEE Transactions on Power Electronics, No. 6, November 1999.
[7] C. A. Hernandez Aramburo, T. C. Green, S. Smith, "Assessment of Power Losses of an Inverter-Driven Induction Machine with Its Experimental Validation", IEEE Trans. on Industry Applications, Vol. 39, No. 4, 2003.

Computation of Electromagnetic Forces and Torques on Overline Magnetic Separator

Mirka I. Popnikolova Radevska and Blagoja S. Arapinoski

Abstract: This paper presents an approach to improved nonlinear magnetic field analysis of the Overline Magnetic Separator (OMS), on the basis of FEM as a representative of the numerical methods. Using an iterative finite element procedure, the nonlinear distribution of the magnetic field under rated excitation of the OMS is calculated. The electromagnetic field is determined on the basis of the fluxes and flux densities in the particular domains of the OMS, and the electromagnetic forces and torques are calculated.

Keywords: Overline Magnetic Separator, Finite Element Method, Electromagnetic force, Electromagnetic torque.

I. INTRODUCTION

The rated data of the OMS analyzed in this paper are the rated current I_n = 3.4 A and the rated voltage U_n. A three-dimensional Cartesian coordinate system is used for the analysis. The maximum clearance distance is d = 0.4 m. A nonlinear iterative procedure is applied; the calculations are carried out quasi-statically, at a given position of the transportation line. The real Overline Magnetic Separator, a product of Steinert, is presented in Fig. 1, and the OMS model is presented in Fig. 2.

Fig. 2. Model of the Overline Magnetic Separator OMS.

II. OMS MODEL IN FEMM 4.0

FEM can solve for the magnetic vector potential, and consequently for the magnetic flux density, by solving the relevant set of Maxwell equations for the magnetostatic as well as for the time-harmonic case. In the magnetostatic case the field intensity H and the flux density B must obey

∇ × H = J    (1)

∇ · B = 0    (2)

subject to a constitutive relation between B and H for each material:

B = μH    (3)

where for a nonlinear material the permeability μ is actually a function of B. FEM goes about finding a field that satisfies Eqs. (1)-(3) via a magnetic vector potential. The flux density is written in terms of the vector potential A as:
Fig. 1. Overline Magnetic Separator OMS.

Mirka I. Popnikolova Radevska is with the Faculty of Technical Sciences, I. L. Ribar bb, 7000 Bitola, Macedonia. Blagoja S. Arapinoski is with the Faculty of Technical Sciences, I. L. Ribar bb, 7000 Bitola, Macedonia.

B = ∇ × A    (4)

This definition of B always satisfies Eq. (2). Eq. (1) can then be rewritten as:

∇ × ( (1/μ(B)) ∇ × A ) = J    (5)
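In the 2-D planar case Eq. (5) reduces to a Poisson-type equation −∇·((1/μ)∇A_z) = J_z for the single component A_z, which is the equation FEM solvers such as FEMM discretize. Purely to illustrate the vector-potential formulation with Dirichlet boundaries (A = 0, as used for the separator), the sketch below solves the 1-D analogue with finite differences rather than finite elements, for a uniform permeability and current density, and checks it against the analytic solution; all numbers are illustrative assumptions.

```python
# 1-D analogue of Eq. (5): -d/dx( (1/mu) dA/dx ) = J, A(0) = A(1) = 0 (Dirichlet).
import math

MU = 4e-7 * math.pi      # uniform permeability (air), illustrative choice
J  = 1e6                 # uniform current density [A/m^2], illustrative choice
N  = 200                 # number of interior grid points
h  = 1.0 / (N + 1)

# Tridiagonal system (1/mu) * (-A[i-1] + 2 A[i] - A[i+1]) / h^2 = J,
# solved with the Thomas algorithm.
a = [-1.0 / (MU * h * h)] * N          # sub-diagonal
b = [ 2.0 / (MU * h * h)] * N          # diagonal
c = [-1.0 / (MU * h * h)] * N          # super-diagonal
d = [J] * N
for i in range(1, N):
    w = a[i] / b[i - 1]
    b[i] -= w * c[i - 1]
    d[i] -= w * d[i - 1]
A = [0.0] * N
A[-1] = d[-1] / b[-1]
for i in range(N - 2, -1, -1):
    A[i] = (d[i] - c[i] * A[i + 1]) / b[i]

# Analytic solution of the same problem: A(x) = mu * J * x * (1 - x) / 2.
x_mid = (N // 2 + 1) * h
exact = MU * J * x_mid * (1.0 - x_mid) / 2.0
assert abs(A[N // 2] - exact) < 1e-3 * exact
```

A real FEM code differs in that it assembles the same kind of sparse system from element shape functions on an unstructured mesh and iterates when μ depends on B, but the solved object, the vector potential with Dirichlet boundary values, is the same.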

As a first step, in the pre-processing part of the program, the OMS geometry is input and the material properties for all separator domains are defined. This includes the current density and conductivity in both OMS windings, as well as the magnetic properties, including the magnetization curve for nonlinear calculations. In order to solve the problem with FEM, boundary conditions on the outer electromagnet geometry must be defined; for the analyzed separator, Dirichlet boundary conditions are used. In Fig. 3 the mesh of finite elements is presented; it is derived fully automatically and consists of 4975 nodes and 4985 finite elements.

Fig. 4. Magnetic field density on OMS1.

Fig. 5. Magnetic field density on OMS2.

Fig. 3. Finite element mesh in the cross section of the OMS.

When a more accurate calculation of the magnetic vector potential is needed, the mesh density should be increased, especially at the interfaces between two different materials; in that case the contour of integration passes at least two elements away from any interface or boundary. Greater mesh density increases the computation time, so a good way to find a mesh that is dense enough to achieve the necessary accuracy while keeping the computation time reasonably small is to compare results from different mesh densities and pick the smallest mesh which converges to the desired digit of accuracy.

In the OMS post-processing part, we compare the electromagnetic characteristics in three cases: OMS1 with a supermalloy main core, which separates pure iron; OMS2 with a steel main core, which separates steel; and OMS3 with an M-45 steel main core, which separates steel of another grade. OMS1 and OMS2 separate materials of one form, OMS3 of another. The differences in magnetic field density can be seen in Fig. 4, Fig. 5 and Fig. 6, respectively.

Fig. 6.
Magnetic field density on OMS3.

III. ELECTROMECHANICAL CHARACTERISTICS

Knowledge of the electromagnetic force and torque characteristics is very important for the analysis of the OMS. In this paper, numerical calculations of the electromechanical forces and torques, based on the Maxwell stress tensor and the weighted stress tensor, are applied to the OMS. The Maxwell stress tensor prescribes a force per unit area produced by the magnetic field on a surface. The net force on an object is obtained by creating a surface totally enclosing the object of interest and integrating the magnetic stress over that surface.

The differential force produced is

dF = ½ [ H (B·n) + B (H·n) − (H·B) n ]    (6)

where n denotes the direction normal to the surface at the point of interest. The Weighted Stress Tensor integral greatly simplifies the computation of forces and torques, as compared to evaluating the stress-tensor line integral or differentiating the co-energy: one merely selects the blocks upon which the force or torque is to be computed and evaluates the integral. No particular art is required to get good force or torque results (as opposed to the stress-tensor line integral), although the results tend to be more accurate with finer meshing around the region upon which the force or torque is computed. One limitation of the Weighted Stress Tensor integral is that the region upon which the force is computed must be entirely surrounded by air and/or abut a boundary. In cases where the desired region abuts a non-air region, the force may be deduced by differentiation of the co-energy.

The force characteristics along the x and y axes, versus different clearance distances at rated current, for OMS1, OMS2 and OMS3 are presented in Fig. 7 and Fig. 8.

Fig. 9. Torque characteristics versus different clearance distances for OMS1, OMS2 and OMS3.

The force characteristics along the x and y axes versus different currents, at a constant clearance distance d = 0.37 mm, for OMS1, OMS2 and OMS3 are presented in Fig. 10 and Fig. 11.

Fig. 7. Forces along the x-axis versus different clearance distances for OMS1, OMS2 and OMS3.

Fig. 10. Forces along the x-axis versus different currents for OMS1, OMS2 and OMS3.

Fig. 8. Forces along the y-axis versus different clearance distances for OMS1, OMS2 and OMS3.
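Equation (6) can be checked numerically: for a field known on a closed surface, integrating the Maxwell stress gives the net force. The sketch below evaluates the traction of Eq. (6) in air for the textbook case of a uniform field crossing a flat surface (force per unit area B²/2μ₀); it illustrates only the stress-tensor formula, not the OMS geometry.

```python
import math

MU0 = 4e-7 * math.pi  # permeability of free space [H/m]

def stress_force(B, n):
    """Traction from Eq. (6): dF = 1/2 (H(B.n) + B(H.n) - (H.B) n), in air (B = mu0 H)."""
    H = [b / MU0 for b in B]
    Bn = sum(b * c for b, c in zip(B, n))
    Hn = sum(h * c for h, c in zip(H, n))
    HB = sum(h * b for h, b in zip(H, B))
    return [0.5 * (H[i] * Bn + B[i] * Hn - HB * n[i]) for i in range(3)]

# Uniform 1 T field normal to the surface: traction should be B^2/(2*mu0),
# about 398 kPa, directed along the normal (magnetic pull).
f = stress_force([0.0, 0.0, 1.0], [0.0, 0.0, 1.0])
expected = 1.0 / (2.0 * MU0)
assert abs(f[2] - expected) < 1e-6 and abs(f[0]) < 1e-12 and abs(f[1]) < 1e-12

# Field tangential to the surface: traction of the same magnitude along -n.
ft = stress_force([1.0, 0.0, 0.0], [0.0, 0.0, 1.0])
assert abs(ft[2] + expected) < 1e-6
```

The sign flip between the normal-field and tangential-field cases is the familiar property of the Maxwell stress: field lines pull along their direction and press sideways.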
The torque characteristics versus different clearance distances, at rated current, for OMS1, OMS2 and OMS3 are presented in Fig. 9.

Fig. 11. Forces along the y-axis versus different currents for OMS1, OMS2 and OMS3.

The torque characteristics M_t = f(I), d = const., computed with the weighted stress tensor integral for OMS1, OMS2 and OMS3, are presented in Fig. 12.

Fig. 12. Torque characteristics versus different currents at constant clearance distance for OMS1, OMS2 and OMS3.

The magnetic field co-energy is defined as

W_C = ∫_V ( ∫₀^H B(H′) dH′ ) dV    (7)

On the basis of Eq. (7), the magnetic co-energies are computed; their characteristics versus different currents at constant clearance distance for OMS1, OMS2 and OMS3 are presented in Fig. 13.

Fig. 13. Magnetic co-energy characteristics versus different currents and constant clearance distance for OMS1, OMS2 and OMS3.

IV. CONCLUSION

In this paper the nonlinear magnetic field analysis and the computation of the electromagnetic and electromechanical characteristics are presented. For this purpose the Finite Element Method is applied as the most suitable one. The electromagnetic forces and torque are calculated for rated load current and different clearance distances, and for constant clearance distance and different currents. The forces and torques are computed via the Maxwell stress tensor and the Weighted Stress Tensor. The magnetic field co-energy is also computed and presented in this paper.

REFERENCES

[1] M. Popnikolova Radevska, "Calculation of Electromechanical Characteristics on Overband Magnetic Separator with Finite Elements", ICEST 2006, Sofia, Bulgaria, 2006.
[2] D. Meeker, Finite Element Method Magnetics, Version 4.0, User's Manual, 2006.
[3] STEINERT, Betriebsanweisungen für Überbandmagnetscheider und Aushebemagnete, Technische Daten TD UME P.
[4] Electromagnetic Devices, John Wiley & Sons, New York-London-Sydney.

SESSION EQ: Education Quality II


Studying on Frequency Modulation in MATLAB Environment

Veska M. Georgieva

Abstract - Frequency modulation (FM) is widely used in communication systems. FM is used at VHF radio frequencies for high-fidelity broadcasts of music and speech, and a narrowband form is used for voice communications in radio settings. This paper attempts to present the possibilities of computer simulation in the MATLAB environment for deeper study and analysis of the effect of various factors, such as the modulation index and the form and amplitude of the modulating signal, on the spectrum and bandwidth of the modulated signal. The paper can be used in engineering education when studying this process.

Keywords - Frequency modulation, communication systems, modulation index, narrowband FM, wideband FM, computer simulation.

I. INTRODUCTION

At the university level, the material studied becomes more abstract and more mathematical. This paper describes a laboratory exercise for the course on Signals and Systems for students from the faculty of communications and the faculty of computer systems. FM is a form of modulation which represents information as variations in the instantaneous frequency of a carrier wave. On the basis of the theory of the process, and with the help of computer simulation, the students get deeper insight into the effect of various factors such as the modulation index and the form and amplitude of the modulating signal. They can investigate their influence on the power spectral density and bandwidth of the modulated signal. The computer simulation can be realized in the program environment of MATLAB, using the system for visual modelling SIMULINK, in which a model can be created that generates the analysed signals and functions. There are two methods to create a model in SIMULINK: first, mathematical formulae can be used for creating building blocks [1], and second, blocks for the investigated process can be used directly [4,5,6,7].
For the generation of analogue signals, functions or blocks from SIMULINK can be used [2]. Digital data can be represented by shifting the carrier frequency among a set of discrete values, a technique known as frequency-shift keying. For digital signals a program can be made, so that the signals can be chosen by the students.

Veska M. Georgieva is with the Faculty of Communication, TU-Sofia, Kl. Ohridsky str. 8, Sofia, Bulgaria.

II. PROBLEM FORMULATION

The problem of FM signal analysis can be presented with the following features:
- There are 3 kinds of signals in the FM process: the modulating, the carrier and the FM-modulated signal.
- The analogue modulating signals can have different forms, such as sinusoidal, rectangular, triangular, saw-tooth or Gaussian. Their parameters can be determined by the students. Digital signals and their parameters can be determined entirely by the students.
- A sinusoidal signal is used as the carrier. The mathematical description of the modulated signal is given by Eq. (1):

a_FM(t) = A·cos ψ_FM(t) = A·cos(ω₀t + m_ω·sin Ωt)    (1)

where m_ω is the modulation index. It indicates by how much the modulated variable varies around its unmodulated level; in the case of FM, it relates to the variations in the frequency of the carrier signal. The modulation index m_ω depends on the frequency Ω of the modulating signal (Eq. (2)):

m_ω = Δω_m / Ω    (2)

If m_ω << 1, the modulation is called narrowband FM, and its bandwidth is approximately 2Ω. If m_ω >> 1, the modulation is called wideband FM, and its bandwidth is approximately 2Δω_m.
- The frequency spectrum of an actual FM signal has components extending out to infinite frequency, although they become negligibly small beyond a point. For the simplified case of a sine-wave carrier modulated by another sine wave, the harmonic distribution can be represented with Bessel functions; this provides a basis for a mathematical understanding of frequency modulation in the frequency domain.
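The single-tone FM signal a_FM(t) = A·cos(ω₀t + m_ω·sin Ωt) and its Bessel-function sideband amplitudes can be reproduced in a few lines. The paper's exercises use MATLAB/Simulink, so this Python sketch (with assumed signal parameters) is only an equivalent illustration of the narrowband case.

```python
import cmath, math

def fm(t, A, w0, m, Om):
    """Single-tone FM signal: a(t) = A cos(w0 t + m sin(Om t))."""
    return A * math.cos(w0 * t + m * math.sin(Om * t))

def sideband(m, n, samples=4096):
    """n-th sideband amplitude J_n(m) via the Jacobi-Anger integral
    J_n(m) = (1/2pi) * integral of exp(j(m sin th - n th)) over one period."""
    acc = 0.0
    for k in range(samples):
        th = 2.0 * math.pi * k / samples
        acc += cmath.exp(1j * (m * math.sin(th) - n * th)).real
    return acc / samples

m = 0.2                       # narrowband case: m << 1
# The FM signal amplitude never exceeds the carrier amplitude A:
assert all(abs(fm(0.001 * k, 1.0, 2e3 * math.pi, m, 200 * math.pi)) <= 1.0
           for k in range(50))
# For narrowband FM only the carrier (n=0) and the first sideband pair matter:
assert abs(sideband(m, 0)) > abs(sideband(m, 1)) > abs(sideband(m, 2))
assert abs(sideband(m, 1) - m / 2) < 1e-3   # J_1(m) ~ m/2 for small m
```

The rapid decay of J_n(m) with n for small m is exactly why the narrowband bandwidth collapses to roughly 2Ω, while for large m many sidebands carry power and the bandwidth grows toward 2Δω_m.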
On the basis of computer simulation we can thus formulate the following problems:
1. To create a model of the FM modulation process for the case m_ω << 1 and for the case m_ω >> 1.
2. To analyze the influence of the form of the modulating signal on the power spectral density.
3. To investigate the influence of the modulation index on the spectrum and bandwidth of the modulated signal.

In the simulation we need to observe the running processes; the characteristics can be given in graphical form.

III. EXPERIMENTAL PART

The formulated problems are solved by computer simulation in MATLAB, version 6.5, using the SIMULINK toolbox [3]. For model creation, mathematical formulae are used to create the building blocks. To generate analogue signals of different forms (sinusoidal, rectangular, triangular, saw-tooth and Gaussian), the students can use the following blocks: Signal Generator, Ramp, Step, Repeating Sequence, MATLAB Function and Product. The blocks in the model can be connected both informatively and by control; the type of connection depends on the block and on the logic of its operation. The running time can be set with a constant or variable step. The students can observe the running process via the Scope block, which is also included in the model, and can see the spectrum of the FM signal via the Spectrum Scope block. An example of a simulated model is given in Fig. 1.

Fig. 1. Model of the FM process.

Fig. 2 presents the FM signal in graphical form. A saw-tooth modulating signal is generated for FM, with the following parameters: A_M = … V; A₀ = … V; F_M = … Hz; f₀ = … Hz; K = 5.

Fig. 3. Power spectral density of the FM signal for m_ω >> 1.

IV. CONCLUSION

Frequency modulation is one of the very important topics in the theory of signals. This paper attempts to present the possibilities of computer simulation in the MATLAB environment to get deeper insight into this process and into the effect of various factors such as the modulation index and the form and amplitude of the modulating signal. It is helpful for the students to create models which generate different signals and to change their parameters.
With the help of the simulation they can thus investigate the influence of the signal parameters on the FM frequency band and the spectrum type, and also observe the running processes in the time domain.

Fig. 2. Modulating and modulated signals for FM.

The power spectral density of this FM signal is given in Fig. 3. It is shown that the bandwidth of the FM signal is independent of the form of the modulating signal.

REFERENCES

[1] E. Simeonov, V. Georgieva, D. Dimitrov, "Practical Analysis of Signals with Amplitude Modulation", Proceedings ICEST 2003, Sofia, Bulgaria, 2003.
[2] V. K. Ingle, J. G. Proakis, Digital Signal Processing Using MATLAB, Brooks/Cole Publishing Company, 2000.
[3] MATLAB 6.5, User's Guide.
[4] V. Georgieva, S. Lishkov, D. Dimitrov, "Spectrum Signal Analysis in MATLAB Environment", Proceedings ICEST 2005, Niš, Serbia and Montenegro, 2005.
[5] V. Georgieva, D. Dimitrov, S. Lishkov, "Correlation Signal Analysis in MATLAB Environment", Proceedings ICEST 2005, Niš, Serbia and Montenegro, 2005.
[6] V. Georgieva, S. Lishkov, D. Dimitrov, V. Ivanova, "Studying on Digital Filters in MATLAB Environment", Proceedings ICEST 2005, Niš, Serbia and Montenegro, 2005.
[7] V. Georgieva, D. Dimitrov, M. Neykova, "Pulse Modulation Analysis in MATLAB Environment", Proceedings of CSICE 2005, Sofia, Bulgaria, 2005.

An Approach of Application Development for the Virtual Laboratory Access

Jelena Djordjević-Kozarov, Milan Jović and Dragan Janković

Abstract - The continuous development and implementation of information and telecommunication technologies, based on the Internet as a global network and on multimedia, brings new results in the field of measurement as a scientific discipline. A model used in the realization of a measurement laboratory for remote experiments is described in this paper. The manner of realizing user access to the measurement system through the Internet is explained in detail.

Keywords - Virtual laboratory, Internet application, remote access.

I. INTRODUCTION

Process control and management are permanent needs in all technical and technological processes. Measurements are usually not located at the same place, and it is necessary to acquire them from different locations, put them in the same database, process and analyze them, or use them in the next measurement process. In that case, we can define the measurement system as a set of measuring features connected into one functional entity, used for data acquisition. Besides the indispensable software, modern measuring systems need to receive the measuring data in digital form. The low price of microprocessor components and systems has made possible the realization of systems with distributed data processing; those systems usually have autocalibration and electrical isolation. The development of information technologies has opened new possibilities in the realization of measurement data acquisition systems. The described measurement systems represent the current conception in the field of remote measurement laboratories. The laboratory heart usually consists of a group of measuring instruments, connected to the Internet through the appropriate network equipment.
The possibility of data acquisition and of storing the data into databases enables easy communication between laboratories at geographically distant places. The distributed architecture of the remote measurement laboratory is described in [1]. The system is realized as a

Jelena Djordjević is with the Faculty of Electronic Engineering, University of Niš, Aleksandra Medvedeva 14, 18000 Niš, Serbia. Milan Jović is with the Faculty of Informatics, University of Lugano, Switzerland. Dragan Janković is with the Faculty of Electronic Engineering, University of Niš, Aleksandra Medvedeva 14, 18000 Niš, Serbia.

hierarchical structure on a few levels. Clients access it through the application level. This realization can be classified as a client/server architecture, where the users' computers are the clients and the computers in the measuring laboratories [2] are the servers. The basic components of this architecture are: the main server, the connections with the measurement systems (laboratories), and the measurement instruments which form the laboratory.

II. INTERNET APPLICATION AS A PART OF THE LABORATORY SOFTWARE SOLUTION

The laboratory is realized with the programming language LabVIEW [3]; i.e., a LabVIEW application is realized through which clients can access the measuring instruments and acquire data. Input parameters, which set the start values of the measuring instruments, are needed for each measuring instrument in the virtual laboratory in order to carry out the desired measurement by the virtual instruments. When the measurement equipment is connected to the computer and all parameters are set, the LabVIEW application carries out the data acquisition and generates the output file, into which the digitalized values of the acquired data are placed. The software solution which connects the user and the appropriate LabVIEW application is realized with standard procedural languages.
The Internet part of the software solution is written using the ASP (Active Server Pages) technology [4], in the JavaScript language, and using CGI (Common Gateway Interface) scripts [5]. The pages for accessing the measuring laboratory are realized with the same technology. In order to access the laboratory, the user has to be logged in; when the user's identification data have been checked by the application, the user can access the laboratory resources. The ASP technology allows working with SQL, the language for database queries. The SQL server runs on the main computer in the measurement laboratory. All information about the users who may access the laboratory, and all information about the measurements which have been made (including the measurement parameters and results), is placed in the database (Fig. 1). Depending on the chosen measuring experiment, an ASP page with the appropriate fields for the start-value inputs appears on the screen. When the user inputs the parameters for the desired measurement, the appropriate CGI script starts (Fig. 2). All scripts are written in the C++ programming language. This technology enables data exchange between the web pages and the classic applications, which are written in standard

programming languages. In this way, the functionality of the web applications is extended up to the real possibilities of the classic applications.

Fig. 1. Virtual laboratory database model (user records with ID, password, index and last name; measurement records with parameters Param 1…N and results Result 1…N).

Fig. 2. Virtual laboratory application structure (ASP page → CGI script generates the input file → LabVIEW application, supervised by a Delphi application → CGI generates the output HTML page).

Through the Internet pages, the data needed to make the measurements are gathered by these scripts. All the data are collected into an input text file, in the appropriate format demanded by the LabVIEW standard. An important functionality of all standard programming languages is the possibility to make an external call of another application on the computer; this functionality is used here to call the appropriate LabVIEW application. When the input file is created, the CGI script goes into a waiting state while the appropriate LabVIEW application is running and the output file has not yet been created. Then an HTML file is created and transferred to the user.

The next phase of the application begins at the moment when a measurement is done and the data should be collected and shown to the user. This moment cannot be detected by the CGI script; on the other hand, the CGI script generates the HTML page which will be presented to the user as the result. The input file for the LabVIEW application is generated by the CGI script. An application which controls the execution of the appropriate LabVIEW application, as well as its shutdown when a measurement is over and the output file is created, is developed in the programming language Delphi [6]. In other words, this application acts as the trigger of the real physical measurement process for the current equipment configuration.
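The CGI/Delphi hand-off described above is essentially file-based signalling: one process writes an input file and blocks until an output file appears, while a watcher process launches the measurement when the input file shows up. A minimal sketch of that pattern follows; the file names and the fake "measurement" are invented stand-ins for the LabVIEW application.

```python
import os, tempfile, threading, time

def watcher(in_path, out_path):
    """Plays the role of the Delphi trigger application: waits until the input
    file appears, runs the (fake) measurement, then writes the output file."""
    while not os.path.exists(in_path):
        time.sleep(0.01)
    with open(in_path) as f:
        params = f.read()                      # start values from the CGI script
    tmp = out_path + ".tmp"
    with open(tmp, "w") as f:
        f.write("result for " + params)
    os.replace(tmp, out_path)                  # atomic: file appears complete

def cgi_request(in_path, out_path, params, timeout=5.0):
    """Plays the role of the CGI script: writes the input file, then blocks
    in a polling loop until the output file exists."""
    tmp = in_path + ".tmp"
    with open(tmp, "w") as f:
        f.write(params)
    os.replace(tmp, in_path)
    deadline = time.time() + timeout
    while not os.path.exists(out_path):        # the 'while loop' of the paper
        if time.time() > deadline:
            raise TimeoutError("measurement did not finish")
        time.sleep(0.01)
    with open(out_path) as f:
        return f.read()

d = tempfile.mkdtemp()
inp, outp = os.path.join(d, "input.txt"), os.path.join(d, "output.txt")
t = threading.Thread(target=watcher, args=(inp, outp))
t.start()
result = cgi_request(inp, outp, "U=5V;f=50Hz")
t.join()
assert result == "result for U=5V;f=50Hz"
```

Writing to a temporary name and renaming makes the file visible only when complete, which avoids the race where the watcher reads a half-written input file.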
Once a CGI script creates the input file for the appropriate LabVIEW application, the Delphi application detects that input file and starts up the LabVIEW application. When the measurement is done and the output file is created, the Delphi application shuts down the LabVIEW application in order to fulfil the demands of the next measurement. In the meanwhile, the CGI script is blocked in a while loop, which is aborted when the output file has been created and there are no other constraints; then the HTML file can be created. In a system like this, concurrent measuring requests on the same measuring system appear at the same time. The first solution of this problem was trivial: access was not allowed to anyone while the laboratory was occupied by the user who was already logged in. The new solution is to put all logged-in clients into a queue; the CGI script does not create the input file for a user until that user is the next one to get into the laboratory. In this way, an increasing number of users increases the waiting time in the queue (with linear dependency), but the waiting is equal for everyone in the login order.

III. CONCLUSION

Nowadays, the computer is an essential tool in a number of areas. Computers have become irreplaceable in process monitoring, product quality checking, automation, process control and management, the realization of measurement systems, etc. The development of information technologies has opened new possibilities in the realization of measurement data acquisition systems. A remote measurement system is often accessed through the global Internet network. The laboratory heart consists of a group of specialized and/or general instruments, connected to the Internet through a PC. Within a remote measurement laboratory, clients can cooperate with each other and use all laboratory resources, even when they are at geographically distant places.
The suggested system is based on a client-server architecture; it is easy to expand and it makes it possible for distant clients to access the instruments. CGI scripts, written in the C++ programming language, generate the measurement input file based on the data obtained from the current user's Internet page. The laboratory, which is realized in LabVIEW, is started by an application written in the programming language Delphi immediately after the input file is created. The Delphi application terminates the LabVIEW application when the measurement experiment is done and an output file with the measured data has been created. Then a CGI script generates an HTML file with the results for the user. The expansion of the system can be shown by increasing the number of laboratories connected to the system; every laboratory needs a different LabVIEW virtual instrument. In the case of an increasing number of connected laboratories, the application provides a good basis for further development, related to easy handling and to the possibility of easy upgrading and interface changing.

REFERENCES

[1] J. Đorđević, M. Jović, "Development of a Measurement Laboratory for Remote Experiments", ETRAN 2006, Conference Proceedings, Vol. IV, Belgrade, Serbia, 2006.
[2] J. Đorđević, M. Pešić, M. Arsić, "An Approach for Distributed Measurement Systems Development", Metrological Congress 2003, Conference Proceedings, Belgrade, Serbia and Montenegro, 2003.
[3] Distance-Learning Remote Laboratories using LabVIEW, User's Manual, National Instruments Corporation, USA.
[4] A. K. Weissinger, ASP in a Nutshell - A Desktop Quick Reference, Second Edition, July 2000.
[5] T. Boutell, Dynamically Generated Web Pages with CGI Programming.
[6] I. Hladni, Inside Delphi 6, Wordware Publishing.

The Use of Virtual Reality Environments for Training Purposes in Care Settings

Edith Maier, Miglena Dontschewa and Guido Kempter

Abstract - In our paper we describe the outline of a project that aims to apply virtual reality (VR) technologies in healthcare education, in particular for learning how to cope with aggression in care settings. We consider a Virtual Learning Environment (VLE) populated by intelligent virtual agents a safe and effective medium. After briefly discussing current VR applications in healthcare and therapy, we explore the use of VR for behavioural training and examine the critical factors for implementing effective VLEs. Finally, we discuss how to measure the impact of VLEs.

Keywords - virtual reality, virtual learning environment, aggression, training, emotions in HCI

I. INTRODUCTION

Nurses and caregivers are the most likely to suffer from aggression and violence in healthcare systems. Incidents range from verbal abuse and physical attacks to sexual harassment. Difficulties in coping with such incidents of aggression can lead to increased stress, low job satisfaction and absenteeism, which in turn poses a major socio-economic problem. This is why a 5-day aggression management training programme is offered by the Institute of Nursing Sciences in St. Gallen; it was originally developed in the Netherlands and has been widely used in the UK and Switzerland in nursing homes, mental health care and disability care settings. It consists of a mixture of theoretical elements, exchange of experience and hands-on training. The participants are encouraged to perceive aggression from an interactional and situative context and to develop their interventions accordingly. The programme also emphasises the need for a clear institutional policy. Studies demonstrate that the attitudes of nurses influence their behaviour regarding aggression and that training programmes can positively change their attitudes.
However, it has proven very difficult to furnish evidence for the effectiveness of such training programmes.

Author affiliations: User Centered Technologies Research Institute at the University of Applied Sciences Vorarlberg, Hochschulstrasse, A-6850 Dornbirn, Austria, and University of Applied Sciences St. Gallen, Tellstrasse, St. Gallen, Switzerland.

Although participants report a higher degree of confidence in their ability to cope with aggression, a recent study by Hahn et al. [1] did not observe any significant attitude changes. It concluded that this might be due to the pedagogical quality of the training courses, a lack of organisational support and/or inadequate measuring instruments. So even though the recent scientific literature suggests more preventive measures, communication and negotiation skills, and de-escalation techniques in the management of aggression, the effectiveness of the different strategies has not yet been evaluated systematically. Most authors hope that future studies will help dispel some of the uncertainties that exist at present, while conceding that it is very challenging to design appropriate experiments to investigate the comparative success rates of different strategies, both for ethical and cost reasons. This is why we propose to use virtual reality (VR) to enhance aggression management training on the one hand and to evaluate the impact of different coping strategies on the other.

II. VIRTUAL REALITY ENVIRONMENTS FOR BEHAVIOURAL TRAINING

A. Definitions and concepts related to VR

Virtual reality (VR) is a technology which allows a user to interact with a computer-simulated environment or, according to Thalmann, a technology which is capable of shifting a subject into a different environment without physically moving him/her [2].
According to this definition, virtual reality environments (VREs) aim at inducing the immersion of one or more individuals in a virtual environment by creating the illusion that they are in a place, time or situation different from their actual real-world location and/or time. The technology was first used by Sutherland [3] and advanced rapidly, e.g. with the invention of the head-mounted display or the data glove. Currently, VREs are primarily visual experiences, displayed on a computer screen or through special stereoscopic displays that exploit human binocular vision. More advanced VREs, however, include additional sensory information such as auditory and tactile stimuli, and attempts are currently being made to simulate smell. Users can interact either through standard input devices (i.e. keyboard and mouse) or through multimodal devices such as the above-mentioned data glove. The simulated environment can be similar to the real world,

e.g. simulations for pilot training, or it can differ significantly from reality, as in VR games.

B. Current healthcare and therapeutic uses of VR

VREs have been used in a wide range of application domains: engineering, physics, medicine, education, marketing, real estate and many others. Gradually, VR is also finding its way into the training of healthcare professionals. VR technology has been used for anatomy instruction or surgery simulation. In the case of laparoscopy, for example, it is important to realistically visualize body parts. (Laparoscopy is a type of surgical procedure in which a small incision is made, usually in the navel, through which a viewing tube (laparoscope) is inserted.)

More relevant to our project is the use of VR for treating various phobias. Virtual reality exposure therapy is an evolving technique that has been attracting increasing attention and research interest in a range of disciplines such as human-computer interaction, graphics design, psychiatry, clinical psychology and psychotherapy [4]. Quite basic VR simulations with simple sight and sound models, for example, have proven very effective in treating fear of flying, spider phobia (arachnophobia) or various zoophobias [5]. Another promising development is the application of VR for treating post-traumatic stress disorder (PTSD), e.g. in veterans. The U.S. Office of Naval Research is evaluating VR tools which integrate the sights and sounds of combat as well as smell and other sensory factors to treat PTSD [6].

C. Characteristics of virtual environments

Several psychological factors have come to be regarded as essential in VREs, namely a sense of presence, immersion, involvement, interaction, person view (first or third person) and emotions. The challenge consists in achieving the best balance between these factors. In behavioural training, the realistic and believable modelling of people's behaviour and their emotional expression is important to evoke real-life reactions in trainees and to create a sense of presence and involvement in the (virtual) situation. There is a large body of research on this topic, but there is little information about emotional expression and visualization in this field. Which emotions can be depicted believably? How realistic does an emotional visualization have to be? What possibilities do we have to represent and evoke emotions? We now briefly discuss the various characteristics:

Immersion - An immersive digital environment is an artificial, interactive, computer-created scene or "world" within which users can immerse themselves. The degree of immersion is influenced by factors like the amount of detail in the 3D scenes, the degree of isolation from the physical environment, the perception of self-inclusion in the VRE, natural modes of interaction and control, and the perception of self-movement or interactive user input [7]. A high degree of immersion is important, as it is seen as a prerequisite for a sense of presence (see below).

Involvement - Involvement is a psychological state experienced as a result of focusing one's energy and attention on a coherent set of stimuli or meaningfully related activities and events. Involvement largely depends on the meaning that the individual attaches to the stimuli, activities or events. For many people, high levels of involvement can be obtained with media other than VREs, such as movies, books or video games.
Though the factors underlying involvement and immersion may differ, the levels of immersion and involvement experienced in a VRE are interdependent: increased levels of involvement may lead users to experience more immersion in an immersive environment, and vice versa [8].

Sense of Presence (SoP) - Both involvement and immersion are necessary for experiencing presence. Presence can generally be defined as the subjective experience of being in one place or environment even when one is physically situated in another. However, SoP is not a characteristic of a medium but the user's feeling of "being there", and thus has to be clearly distinguished from immersion.

First person view - In real life one is able to see one's own limbs; it might therefore be disturbing not to be able to see one's own body in a VRE. This perspective is called first person view, as opposed to the third person view, in which the on-screen character is seen at a distance from a number of different possible angles. A third person perspective provides more awareness of one's surroundings and one's position within them, as well as of the distances to objects and other characters. In scenarios that are especially arousing or disturbing, as is the case when confronted with aggressive behaviour, the ability to process the scene from a third person view might be very helpful.

Interaction - As already mentioned, users can interact with a VR system through a range of devices and channels, e.g. haptic devices (data gloves), magnetic position/orientation sensors, a 3D mouse, voice, gesture or face recognition. Interaction implies that users can (more or less) control the pace, order and (sometimes) occurrence of the events in the VRE. Interactivity enhances the sense of immersion, as humans are used to manipulating objects and engaging in interaction with animals or other humans. Being a passive observer can be a choice in behavioural training (e.g. to learn from fixed situations), but active involvement will be more effective.
Emotions - A factor largely ignored up to now is the expression of affect by virtual characters and the emotions this elicits in the user. Many studies have shown the ability of movies and imaging techniques to elicit emotions. Nevertheless, it is

less clear how to manipulate the content of interactive media to induce specific emotional responses. Besides, we do not yet know which emotions, or more generally, which affective states can be conveyed through the setting or environment itself (as opposed to the virtual characters). It has been shown that humans have an inherent tendency to interact with different kinds of media in a natural and social way, mirroring interactions between humans in social situations [9]. But can a virtual character have and show complex, mixed emotions and affective states? Can, for example, aggression be conveyed convincingly? Is an emotional message transmitted through appearance or behaviour? Even less is known about the effect of VREs on the user's affective state. Riva [10] found that interaction with a fear-inspiring VRE produced anxiety, whereas interaction with a pleasant one produced relaxation. Besides, it turned out that a circular interaction existed between presence and emotions, i.e. the feeling of presence was greater in the "emotional" environments and, in turn, the emotional state was influenced by the level of presence. The link between presence and emotion enables us to measure the sense of presence by standard psychological methods of emotion measurement.

III. METHODOLOGY AND IMPLEMENTATION

For the design and development of the virtual learning environment (VLE) and the virtual agents we shall apply a user-centred approach. This implies that potential users are involved right from the start of the project and are consulted at regular intervals on all aspects of the VLE's design. Their comments and suggestions for improvement are then fed back into the design of the VLE. This participatory and iterative approach relies mostly on qualitative and informal methods and will allow the developers to gain a more detailed understanding of the users' attitudes and requirements.
The fundamental idea is to allow caregivers to try out various coping strategies without being directly involved themselves, i.e. the usefulness of a coping strategy can be learned safely and vicariously through the victim character's experiences. Apart from characters representing the aggressor(s) and the victim(s), the VLE will also include bystanders and/or assistants, as well as locations (nursing home, psychiatric ward) and scenarios that are typical of real-life incidents. To elicit real-life reactions in trainees when faced with particular scenarios, it is important to model people's behaviour and their emotional expressions in a realistic and believable way, i.e. the VLE has to be ecologically valid. VR will first of all help to design scenarios that are based on incidents of aggression that have occurred in real life and have been gathered in the course of many in-depth interviews with caregivers as well as experts at the Institute of Nursing Sciences. The interviews also provide the storylines, settings and special characteristics of the intelligent virtual agents to be developed. Since for ethical and privacy reasons it will be impossible to install video cameras in relevant locations such as nursing homes, we shall have to resort to film clips and actors experienced in this type of work. We shall then investigate to what extent VR can help induce behavioural changes by simulating the effects certain types of behaviour or communication strategies have on the patient or elderly person receiving care. VR will also allow us to combine settings, personality types and coping strategies, because we are aware that the success of a particular strategy is highly dependent on context and on the psychological make-up of the people involved. Thus, VR also enables us to adapt the training to people's personal preferences or needs.
From previous studies [11] relevant to the aims of our project, the following recommendations can be derived for the development of virtual environments in general: Firstly, agent and environment believability can be improved by ensuring cultural similarity with target users; in our case we shall pay particular attention to the professional milieu, i.e. the special characteristics of care settings. Secondly, and closely related to the first, the terminology and phraseology used has to reflect the language use prevalent in a particular environment. Besides, a cohesive storyline also contributes to deeper immersion in a virtual environment. In addition, the possibility to interact with the characters has been shown to be a major factor in successful immersion. However, since our project is primarily about letting people try out various coping strategies without being directly involved themselves, the inclusion of personal avatars is not foreseen in the design of the VLE. We might nevertheless consider allowing at least superficial interaction, such as selecting the physical characteristics of an otherwise unplayable agent, to make users identify more with a given character. Recent findings [11] have also shown that the graphical design of the characters seems to have limited impact on users' ratings of their believability or on the elicitation of empathy. The results suggest that excellent graphical design is not necessary to create an engaging experience as long as the characters act in a believable manner.

IV. MEASURING THE IMPACT OF VIRTUAL LEARNING ENVIRONMENTS (VLE)

Due to technical limitations such as processing power or image resolution, it is still difficult and time-consuming to create high-fidelity VR experiences. Even though these limitations are likely to be overcome soon, it remains important to evaluate the effectiveness and learning impact of the VLE to be constructed.
It has been shown that learning with attributes such as enjoyment, engagement and increased attention does occur in virtual environments. This is also borne out by the empirical results from non-educational VR research, e.g. simulations for task training in the military/defence industry, or the visualisation of large data sets used for exploration, pattern discovery and investigation.

A common measure of the quality or effectiveness of a virtual environment (VE) is the amount of presence it evokes in users. Presence is often defined as the sense of "being there" in a VRE. There has been much debate about the best way to measure presence, and researchers need, and have sought, a measure that is reliable, valid, sensitive and objective. We hypothesize that, to the degree that a VRE seems real, it will evoke physiological and psychological responses similar to those evoked by real situations or environments. Rather than measure attitudinal changes as done previously [1], we want to focus on psycho-physiological factors to measure the impact of a VLE. We therefore intend to examine the cognitive judgment of learners by using bipolar rating scales. The ends of each scale comprise opposite adjectives that will be determined in a preceding survey to identify the descriptive features of the synthetic agents. As pointed out before, the link between presence and emotion allows us to use standard methods of emotion measurement. In our project, we intend to monitor the emotional reactions to the aggression scenarios by taking psycho-physiological measurements. Skin conductance, for example, can be measured on the inner surface of the hand, heart-rate activity by means of electrocardiography, and breathing movements with the help of a stretch belt around the chest.

V. CONCLUSIONS AND EXPECTED RESULTS

The main benefits of using a VLE for aggression management training can be summarised as follows:
- Users can practice skills safely, without experiencing potentially dangerous real-world consequences.
- The stimuli the user receives can be controlled.
- VLEs empower users with disabilities by giving them a sense of control over their environment.
- VLEs allow learners to actively participate and focus on their personal abilities.
Besides, the project is expected to contribute to advancing the following research issues:
- Identify heuristics and guidelines for user interface design to assess the impact of VR applications
- Measure the impact of learning in virtual environments (VEs); possible indicators are presence, engagement and immersion
- Identify factors that contribute to or distract from the act of learning in a VE (e.g. social, hardware, network, content and curriculum quality issues)
- Examine how scaffolding can be built into the software to guide users, especially those with cognitive impairments
- Examine the impact of emotional models on the learning experience

The project will be implemented in close collaboration between several research organisations including the Institute of Nursing Science at the University of Applied Sciences in St. Gallen, Switzerland, the User-Centred Technologies Centre and Virtual Reality Lab at the University of Applied Sciences in Vorarlberg, Austria, as well as the Institute of Medical Education at the Inselspital in Berne, Switzerland.

REFERENCES

[1] S. Hahn, I. Needham, C. Abderhalden, J.A.D. Duxbury, R.J.G. Halfens, "The effect of a training course on mental health nurses' attitudes on the reasons of patient aggression and its management", Journal of Psychiatric and Mental Health Nursing, vol. 13, pp. 197-204, 2006.
[2] D. Thalmann, Introduction to Virtual Environments, Teaching Materials, Virtual Reality Lab, Swiss Federal Institute of Technology, Lausanne, 1998.
[3] I.E. Sutherland, "The Ultimate Display", Proceedings of IFIP 65, vol. 2, pp. 506-508, 1965.
[4] C. van der Mast, "Technological challenges and the Delft virtual reality exposure system", Proceedings 6th Intl. Conf. Disability, Virtual Reality & Assoc. Tech., Denmark, pp. 83-9, 2006.
[5] P. Emmelkamp, "Technological innovations in clinical assessment and psychotherapy", Psychotherapy & Psychosomatics, vol. 74, 2005.
[6] J. Huergo, "Evaluating Virtual Reality Therapy for Treating Acute Post Traumatic Stress Disorder", Press Release, Office of Naval Research, 2005.
[7] W. Sherman and A. Craig, Understanding Virtual Reality - Interface, Application, and Design, Elsevier Science, 2003.
[8] B.G. Witmer, M.J. Singer, "Measuring Presence in Virtual Environments: A Presence Questionnaire", Presence: Teleoperators & Virtual Environments, vol. 7, pp. 225-240, 1998.
[9] B. Reeves and C. Nass, The Media Equation: How People Treat Computers, Television, and New Media Like Real People and Places, New York, Cambridge Univ. Press, 1996.
[10] G. Riva, F. Mantovani, C. Capideville, A. Preziosa, F. Morganti, D. Villani, G. Gaggioli, C. Rotella, "Affective Interactions Using Virtual Reality: The Link between Presence and Emotions", CyberPsychology & Behavior, vol. 10, no. 1, 2007.
[11] L. Hall, S. Woods, R. Aylett, L. Newall and A. Paiva, "Achieving empathic engagement through affective interaction with synthetic characters", Proceedings of ACII 2005: Affective Computing and Intelligent Interaction, 2005.
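As an illustration of the psycho-physiological screening described in Section IV, the sketch below shows one simple way a skin conductance trace could be scanned for response events. It is purely our own illustrative sketch: the sampling rate, the 0.05 microsiemens rise criterion and the response window are assumptions, not values specified by the project.

```python
# Illustrative skin-conductance-response (SCR) counter. The sampling
# rate, rise threshold and window length are assumed example values,
# not parameters prescribed by the project described above.

def count_scrs(samples, fs=10, min_rise=0.05, window_s=3.0):
    """Return onset times (seconds) where the conductance rises by at
    least `min_rise` microsiemens within `window_s` seconds."""
    win = int(window_s * fs)
    events = []
    i = 0
    while i < len(samples) - 1:
        # look for a sufficiently large rise within the window
        end = min(i + win, len(samples))
        peak = max(samples[i:end])
        if peak - samples[i] >= min_rise:
            events.append(i / fs)  # record the onset time
            i = end                # skip past this response
        else:
            i += 1
    return events
```

A flat trace yields no events, while a clear rise above the threshold is counted once; richer analyses (amplitude, recovery time) would follow the same pattern.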

How to Give a Good Scientific Presentation

Stevica S. Cvetković and Saša V. Nikolić

Abstract - The major goal of this paper is to serve as a guideline for the organization of research presentations. A systematic description of the complete process is given, comprising five steps: Constraints Considering, Structure Planning, Design, Practice and Delivery of the presentation. This procedure can be used for thesis presentations, as well as for conference talks or technical reports to research sponsors, both by graduate students and professional engineers.

Keywords - Scientific presentation, Engineers education.

I. INTRODUCTION

The outline of almost every good talk is the same []: Tell them what you're going to tell them; then tell them; then tell them what you told them.

As a researcher you will have many opportunities to present the results of your research. Your presentation may be given in a research laboratory, a university course or at a conference. The goal of this paper is to provide you with some tools to help you design and deliver your presentation. The major principles for giving a scientific presentation can be found in the literature [1]-[6]. Our inspiration for analyzing this topic was the lack of a systematic description of the complete process, which includes planning, creating and delivering the presentation.

The one thing emphasized in everything written about presentations is: Content is key! Many speakers forget that the content is the most important issue, not how nice the presentation looks. Quite often flashy presentations hide the fact that there is no content. Remember that the discussion after the presentation is when the speaker demonstrates who he really is; this is where many good presentations get blown away.

While the content of the presentation is of primary importance, the presentation style also affects the overall impression of the audience and can enhance or detract from the actual scientific impact of the content presented. A number of quality scientific papers have not been adequately evaluated because of their poor presentation. A balance between the quality of the content and the ability to clearly convey scientific information in an oral presentation is critical to both teaching and research. The best presentations are built on a clear message, supported with well-organized facts and enhanced with illustrations, charts and graphs. It is always a great pleasure to attend a presentation where the results are well presented, the flow of the lecture is easy to follow, and the illustrating materials are clear. An effective presentation depends on the five important steps presented in Fig. 1. The rest of the paper gives a detailed explanation of each of these steps.

Stevica S. Cvetković is a PhD student at the Faculty of Electronic Engineering, University of Niš, Aleksandra Medvedeva 14, 18000 Niš, Serbia. Saša V. Nikolić is with the Faculty of Electronic Engineering, University of Niš, Aleksandra Medvedeva 14, 18000 Niš, Serbia.

Figure 1. Five steps for an effective presentation: Constraints Considering, Structure Planning, Design, Practice, Delivery.

II. CONSTRAINTS CONSIDERING

The structure of the presentation is strongly influenced by the following constraints:

A. The Audience

Your talk needs to convey information to the audience. It is therefore imperative that you know who your audience is. Knowledge of your audience is an important prerequisite for making decisions about the content, format, language and style of your presentation. You have to answer the following questions: Who are the members of your audience? How familiar are they with your topic and content? What do they already know about your topic? What do they want or need to know?

Consider your presentation from the audience's point of view. What's in it for them? If you can show them early on that they will benefit in some way from listening to your presentation, you will have a much better chance of achieving your outcome. It makes no sense to present a complex topic at your own level of understanding; you may be the only one who understands it. For example, if you write the formula f(x) = (1/(σ√(2π))) exp(-((x-μ)/σ)²/2) to a group of arts students, they will get nothing from it. Therefore, try to gear the presentation to the audience's level of understanding.

B. The Time Limit

It is very important to plan your presentation time carefully. In the research environment, running over time is considered one of the worst sins. The content of the presentation has to be decided according to the provided time. Short talks (15 min) have a different strategy from long talks (45-60 min). In any case, assume 1 to 2 minutes per slide, depending on complexity. For a short talk, there is no time to explain analytical analysis, implementation details or formulas. Focus on the take-home message and the data to support it. If you give a long talk, lots of time can be used to discuss the methods in detail if they differ from standard protocols.

III. STRUCTURE PLANNING

Structure planning refers to defining the presentation skeleton, which includes first-level titles and subtitles, as will be described later. Whenever possible, you should first define the skeleton and only after that, during the design process, develop the contents of all paragraphs. Depending on the type of your talk, two types of structures are described [2], [3].

A.
Research Presentation

Every research presentation should contain at least one slide for each of the following titles:

Introduction, to include the basic facts needed to tune the reader to the presentation;

Problem statement, to define precisely the problem being attacked by the research under consideration, and why that problem is important;

Existing solutions and their criticism, to survey briefly the major existing solutions from the open literature and to underline their deficiencies from the point of view of interest for this research;

Proposed solution and why it is expected to be better, to give the essence of the proposed solution (i.e., the essence of the idea which is to be introduced), followed by a logical discussion of the expected benefits stemming from the idea;

Conditions and assumptions of the research to follow, to summarize the environment of interest. The term "conditions" refers to the specifics of the real environment, and the term "assumptions" refers to the simplifications which simplify the analysis without any negative impact on the validity and representativeness of the final results;

Analytical analysis, to show one or more of the following: proof of validity of the major idea of the presentation; calculation of initial values for the simulation analysis to follow; rough estimation of the performance and complexity. Analytical analysis will not give the final answers; however, it will help in understanding the concept. It will be helpful both to the researcher and the reader;

Simulational / Implementational results, to show performance and complexity. For some types of research, this could be the major and longest part of the paper;

Conclusion, with the following three major elements: revisiting the major contribution from the performance/complexity point of view; stating who will benefit from the presented results; and what the newly opened problems and research avenues are.

B.
Review Presentation

An important prerequisite for a good research paper is that a good review paper is prepared first, to demonstrate that the major solutions for the problem of interest are known. In the case of a survey paper, the major requirement is to have two main parts:

Concepts part, to define the major issues. The concepts part should be preceded by a classification of concepts.

Systems part, to define various algorithms, implementations, etc. The systems part should be preceded by a classification of systems. Each system in the systems part should be described using the same template (e.g., origin, environment, essence, advantages, drawbacks, relevant details, performance considerations, complexity considerations, conclusion, trends, etc.). The choice of elements for the template is flexible. What is not flexible is that the same elements must be used in each template.

IV. DESIGN PROCESS

It has been said that we remember 20% of what we hear, 30% of what we see, but between 50% and 75% of what we see and hear []. Always bear in mind that your audience has a limited attention span, i.e. they phase in and out of the presentation. Attention span is actually quite short, a matter of seconds. So how do you keep them with you? You have to make two key decisions when designing your presentation: What information? What format? Because these questions overlap, it is impossible to answer them separately. A review of the basic principles of presentation design follows.
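Simple numeric rules of thumb like the ones given in this section (e.g. at most 8 lines per slide and no more than 8 words per line) can even be checked mechanically. The snippet below is our own illustrative sketch of such a checker, not a tool described in the paper; the limits are configurable parameters.

```python
# Illustrative checker for the 8-lines / 8-words slide guideline.
# This sketch is not part of the paper; the limits are parameters.

def check_slide(text, max_lines=8, max_words=8):
    """Return a list of human-readable rule violations for one slide."""
    lines = [ln for ln in text.splitlines() if ln.strip()]
    problems = []
    if len(lines) > max_lines:
        problems.append(f"{len(lines)} lines (max {max_lines})")
    for n, ln in enumerate(lines, 1):
        words = len(ln.split())
        if words > max_words:
            problems.append(f"line {n}: {words} words (max {max_words})")
    return problems
```

A slide that respects both limits returns an empty list, while an overlong bullet or an overcrowded slide is flagged with a short diagnostic.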

A. Content

The basic principle of slide content design is: Minimize the content and keep it simple! Create simple and clear slides which can be read easily, so that the audience can quickly get back in touch with the presentation. The slide message should be clean and easy to absorb. Not all content should be put on the slides, because the audience will focus on reading the slides' content and the speaker will be ignored. Only the most important points should be on the slide: a maximum of 8 lines per slide; no more than 8 words per line; 1-2 minutes per slide.

B. Headlines

Each slide should be clearly titled, indicating the focus of the slide. Use strong headlines that concisely state the idea of the slide. Headlines should: orient the audience; help define the presentation's structure; help keep the speaker on track.

C. Semantic Splitting

The rule of semantic splitting [] could be defined as: if a sentence must be spread over more than one line, each line should represent a separate thought. As an illustration, two examples are shown next.

Text without semantic splitting:
  Writes get satisfied on distance or locally, depending
  on what brings better performance
  Good if reads and writes are interleaved with similar
  probabilities of occurrence

The previous text with semantic splitting:
  Writes get satisfied on distance or locally,
  depending on what brings better performance
  Good if reads and writes are interleaved,
  with similar probabilities of occurrence

D. Visual Aids

The general principle for visual aids is to keep them simple and clear. A clear explanation of the variables used, and of how they were measured, is also essential.

Equations - Use them only where necessary, in case you give a long presentation (45-60 min). If you have enough time to use them, explain every variable clearly. Equations quite often turn off a non-mathematical audience; instead of using them, a conceptual model may be much more useful.

Tables - Complicated tables are not visual aids.
They have been described as instruments of torture for the audience. Tables of data suitable for written publication are highly unsuitable for a scientific presentation. Try to summarize the findings without using tables. If you must use them, state why the result is important to your hypothesis.

Graphs - Graphs should replace tables wherever possible in a visual presentation. They are better than tables at showing relationships. Always follow these principles: state the significance of the relationship shown and why it is so important to the issue you are examining; always describe the variables; always show regression statistics; use colors in graphs with multiple relationships; limit relationships to 3 per graph, since it is better to show two graphs than to confuse the listener.

E. Text

Text size - You will find some variation in the recommendations for the size of body text. However, use the "24 karat rule" for golden presentations: don't use fonts smaller than 24 points. The size of the room is the key; large rooms require an even larger typeface. Text height should be one centimeter for every meter of distance from your audience.

Font - Use no more than 2 font styles; too many fonts can be distracting. Most references recommend using only one font; two is the maximum. Try to use clear fonts like Arial or Helvetica. Avoid fancy fonts, scripts, fonts with shadow effects and italics, because they are difficult to read when projected.

Colors - Although it is possible to change the slide's color scheme, it is best to use the base palette of the template, developed by design experts. However, if you want to experiment, high-contrast colors are the only wise solution. Light text (yellow, gray or white) on a dark background (blue, olive or purple) is always a good solution.

Background Colors - Dark colored backgrounds are easiest to read. Blue, black or purple are suggested. For education purposes, use deep forest green, olive or teal.
Never use a clear white background; at least apply a light color. Remember that color evokes psychological responses. Red is stimulating: it increases excitement, heightens emotion and can cause problems. Brown is also a color to avoid.

Foreground Colors - Yellow is easiest to read on a blue background. It is a stimulating color, excellent in combination with blue and with red text. Gray is neutral; it eliminates bias. Light violet is an expansive and open-minded color.

V. PRACTISE
Nothing improves a presentation more than one practice talk! This is perhaps one of the most important principles [6]. If you practice your presentation just once, your talk will be infinitely smoother.

A. Actually Practise
This does not mean running through the slides and going "yeah, then that stuff, then the next slide, then the experiment

part, a couple of diagrams, data, conclusions". Actually stand up and give the talk. Practice improves the flow of the talk. There will be fewer "um"s in the talk if you practice. People have a natural tendency, when speaking in public, to pause and say "um" when they forget for just an instant what they were going to say. By running through the talk you will develop a natural flow. You will come up with phrasings and ways to describe things that you will use when you give your presentation. Most importantly, you will discover things that you don't actually understand. Explaining something to someone else is the best way to determine if you really understand it. Don't fool yourself into thinking you can explain it - try it. If you don't understand, you have time to figure it out before the talk. Even things you know well might be difficult to explain. Practicing helps you find the words. Also, giving a presentation can be a nervous business; practice can help alleviate that fear.

B. Memorize the First Few Lines
Starting out is the hardest part of the talk. Once you get going and into a flow, things are easier. But that first little bit is nerve-wracking. One thing you can do is memorize the first few lines you are going to say. Don't memorize the entire talk, just the first few lines: "Hello, I'm Stevica Cvetković. The title of my presentation is How to Give a Good Scientific Presentation. The goal of this paper is to provide you with some tools to help you design and deliver your presentation..." Wing it from there. Keep it interesting. If you have practical examples, interesting tidbits or humorous asides, people will be less likely to drift off to sleep.

REFERENCES
[1] Strunk, W. and E. B. White, The Elements of Style, 3rd ed., New York: Macmillan, 1979.
[2] Milutinović, V., "The Best Method for Presentation of Research Results", IEEE TCCA Newsletter, pp. 1-6, September 1997.
[3] Milutinović, V., "A Good Method to Prepare and Use Transparencies for Research Presentations", IEEE TCCA Newsletter, pp. 1-6, March 1997.
[4] E. Bulska, "Good oral presentation of scientific work", Analytical and Bioanalytical Chemistry, 385, 403-405, 2006.
[5] Barbara Grimes, Tips for a Great Presentation, November.
[6] Tips For Giving a Scientific Presentation, 6/TipsforGivingaScientificPresentation.pdf
[7] Michael St. John, How to Give a Good Scientific Seminar: Dos, Don'ts and Strategy, presentation.pdf
[8] Samuel B. Silverstein, The Art of Scientific Presentation
[9] Smith R., "How not to give a presentation", British Medical Journal, 321:1570-1571, 2000.
[10] Jason Harrison, Planning a Scientific Presentation, graduate seminar, October.
[11] Sorgi M., Hawkins C., Research: How to Plan, Speak and Write About It, Berlin: Springer-Verlag, pp. 35, 1985.

VI. DELIVERY
The challenge to the speaker is to hold the attention of the audience. An important part of delivery is your interaction with the audience through: 1) voice, 2) movements, 3) stage presence. In order to give an effective presentation, you must accept the following principles.

Prepare strong wording to emphasize strong points or transitions. Examples for beginnings, middles and endings are shown:
Beginnings: "My name is... and I will be talking about..."
Middles: "That concludes what I have to say about cross sections. I will now discuss..."
Endings: "To summarize, I would like to show you..."

Talk to the audience, not to the overhead or the computer. Avoid reading your slides. The audience's attention should be on the speaker. A paper can never serve as a speech, or vice versa. You are your own best visual aid. Use your body language, facial expression and gestures to add impact to your verbal message. Deliver dramatically; if you mumble to yourself no one will pay attention. Speak with conviction. Change your voice level as much as possible; a monotone puts the audience to sleep. Ask the audience questions whenever possible.
Or at least challenge them to think about the issue. Example: "If this worked this way then we would expect this result, but we got this! Why?" Then explain.

264 Reevaluation and Replacement of Terms in the Sampling Theory

Petre Tzv. Petrov

Abstract - Terms should be short, unique, unambiguous and self-explanatory. Transferring terms from one field of science to another should be done with caution. Basic terms of the sampling theory are discussed in the paper. It is shown that they are incorrect, misleading or inexact and should be replaced. New terms with definitions and explanations are given to replace the incorrect terms.

Keywords - terms, terminology, sampling theory

I. INTRODUCTION
Each engineering term should be as short as possible, unique and self-explanatory. The term should be correct and lead to an exact mathematical presentation if possible. The definition of the term should be a logical expansion of the term and not an unexpected explanation. The term should be an abbreviation of the definition or an abridged definition. The signal sampling theory (SST) as explained and applied in [-] contains a lot of terms which should be reevaluated, rejected and replaced with new terms explaining better the nature of the signal sampling and reconstruction process. The paper deals with that subject. It is intended to help students, researchers and engineers clarify the SST.

II. OLD, INCORRECT AND MISLEADING TERMS
The terms below are considered incorrect and misleading from an engineering (physical) point of view. Some of them are still acceptable from a mathematical or another point of view.

Aliasing - a misleading term, meaning in most cases that the analog signal (AS) is not adequately sampled and filtered. The term is also used to show that a sinusoid (or cosinusoid) is changing from one frequency to another. In the second case it is better to use the coefficient of the change of the frequency K_ch = F_it/F_rs, where F_it is the initial frequency and F_rs is the resulting frequency. (The word "alias" has a criminal meaning in [11, 12].)
Classical sampling theorem (CST) - an oversimplified sampling theorem generally stating that two samples per period are enough to reconstruct exactly band limited signals (BLS) with maximal frequency F_smax. According to the CST the sampling rate F_s should be selected according to the equation F_s = 2·F_smax or F_s >= 2·F_smax. The CST is based on Fourier series. It is proved [3] that with the CST: 1/ the simplest band limited signal (SBLS) cannot always be reconstructed and 2/ the amplitude errors cannot be evaluated. The theorem does not take into account the errors due to non-synchronized signal sampling, yet pretends to exact reconstruction. In general the CST is applicable for synchronized sampling with SSF N >= 2, without error evaluation and with low pass filtering.

(Petre Tzv. Petrov is with Microengineering, Sofia, Bulgaria.)

Co-sine wave (co-sinusoidal signal (CS)) - a simplified but still real (not oversimplified) version of the simplest band limited signal (SBLS). A co-sine wave is an SBLS consisting of a direct current (DC) component with zero amplitude and a cosine component with non-zero amplitude and usually with zero phase. The cosine wave has four parameters to reconstruct.

Decimation - an incorrect term (nothing to do with the number 10), meaning reducing the sampling rate by calculations or omissions, especially during signal reconstruction. (The decimation in the ancient Roman army gives the term a misleading, non-technical and cruel historical background.)

Gibbs phenomenon - a non-existing phenomenon in the engineering world. It is a mathematical phenomenon arising when a function with infinite slew rate (rate of change, or first derivative) and/or ideal angles is approximated with a Fourier series. The phenomenon is often illustrated with an ideal (physically impossible) rectangular pulse and wrongly associated with the process of ringing due to inappropriate impedance loading.
The real signal (RS) is always a smooth function, and even if it is truncated it has: 1/ a finite rate of change, 2/ rounded (not ideal, not broken) angles, 3/ a finite number of spectral lines and 4/ finite energy in every moment. These basic properties of real signals make the Gibbs phenomenon non-existing and misleading in the engineering world. Nevertheless it is implemented in software packages used by engineers, such as Matlab.

Nyquist rate, Nyquist frequency - The term means several different things: 1. The highest frequency in the signal spectrum F_smax. 2. Twice the maximal frequency in the signal spectrum. 3. The sampling rate which is twice the maximal frequency in the signal spectrum, etc. This multiplicity of definitions, caused by misleading interpretation, is one of the proofs that the classical SST is not accurate.

Delta function (or unit pulse) - an unreal function used to construct the Dirac comb. It has no practical value in the sampling theory, because it leads to the model "take and forget". In a real system the digital samples are always stored and used. The applicable model is always "take and memorize".

Comb function or Dirac comb - a non-real function representing the sampling model "take and forget", meaning that the sample is not memorized until the next sample comes.

(Related to the delta function, the staircase function and the trapezoidal function.)

Ideal rectangular pulse, ideal triangular pulse and ideal saw tooth pulse - oversimplified models which do not respect the basic properties of the real signals. If they are applied they lead to the Gibbs phenomenon and infinite Fourier series.

Noise - a misleading term used mainly to represent: 1. Errors during the conversion process. 2. An unwanted signal added to the useful one. 3. A signal which cannot be used but is added to the useful signal. 4. Errors due to calculations.

Dirichlet conditions - conditions wrongly associated with real signals and Fourier series. It should be noted that every real signal satisfies stronger conditions than these, and consequently the conditions are not applicable to the signal sampling theory. The Dirichlet conditions are a way to say that the Fourier transform and Fourier series are applicable only to real signals represented by a mathematical function, and not to all mathematical functions. Every physical signal: 1. Satisfies the Dirichlet conditions (a finite number of maximums, minimums and discontinuities in a given finite interval). 2. Is integrable for the time of its existence. 3. Can be represented by a sum of SBLS.

Dither - a method stating that adding noise to a signal could be a good thing. In fact adding noise is always a bad thing and the method gives non-reproducible results.

First order hold and zero order hold - Is it possible to deduce the definition of these terms? The terms "take and memorize" (until the next sample comes), "sample and hold" or "take and forget" are much clearer and self-explanatory.

Fourier series - an oversimplified presentation of the real signals as a sum of sine and cosine waves with harmonic frequencies and with zero phases and zero DC components. Mathematically the sum could be infinite.
In fact every real signal could be represented as a finite sum of SBLS with not necessarily harmonic sine and cosine components and with not necessarily zero phases and zero DC components.

Over sampling - sampling with a frequency higher than the Nyquist frequency, implying a certain redundancy, which is incorrect. It should be replaced with the term signal sampling factor (SSF) N = F_s/F_d, stating its value.

Over sampling and averaging for additional bits of resolution - a misleading method for adding bits to an ADC (beyond its accuracy) by collecting a lot of samples and averaging them. In fact most of the added bits have random or not easily reproducible values. The method requires a lot of memory and computational power, and is neither efficient nor reliable. In fact the method is a kind of low pass filtering.

Resolution - most often the total number of bits used to represent the signal or a function. Not all of these bits are necessarily reproducible. In most cases the resolution is higher than the accuracy, and it is a more or less commercial (not technical and exact) parameter.

Reconstruction of the signal from zero samples - a misleading conception. Obviously there is no way to reconstruct a signal parameter if the information for that parameter is not carried by the samples or by another carrier.

Reconstruction filter - in most cases wrongly associated only with a low pass filter. In fact it could also be a band pass filter, which in some cases gives the possibility to reconstruct a sine signal even with SSF N < 2. It could also be any filter reconstructing the initial signal with the given accuracy in each parameter.

Sine wave (sinusoidal signal (SS)) - a simplified but still real (not oversimplified) version of the SBLS. It has four parameters to reconstruct. Accepting that the phase and the direct current component are zeros does not remove them from the reconstruction.
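The standard claim behind the "over sampling and averaging" method discussed above can be reproduced under idealized assumptions, which is exactly where the author's objection lies. The sketch below (illustrative, not from the paper) shows averaging 4**k samples cutting the RMS noise by 2**k, but only because the simulated noise is perfectly zero-mean, white and Gaussian:

```python
import numpy as np

# Averaging 4**k readings of a constant level buried in ideal white
# noise reduces the RMS noise by 2**k (nominally k extra bits).
# Real quantization noise is not this well behaved, which is the
# author's argument against the method.
rng = np.random.default_rng(0)
true_level, noise_rms, k = 0.5, 0.05, 2
readings = true_level + rng.normal(0.0, noise_rms, size=(10000, 4 ** k))
averaged = readings.mean(axis=1)
print(readings.std(), averaged.std())   # second is about 4x smaller
```

With correlated or non-zero-mean noise the improvement degrades, so the extra bits are not reproducible in the sense the new terms below require.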
Sin(x)/x - an artificial function wrongly associated with the sampling and reconstruction process of analog signals. The function is one of the proofs that something artificial is used in the sampling and reconstruction of the signals.

Step function - a too idealized transition. In fact it should be replaced with a transition with a specified rate of change (slew rate) and defined rounded angles.

Staircase function - an idealized model of the process "take and memorize" with idealized angles.

Window - a limitation of a parameter. In some cases it is much clearer to use the terms "limit"/"limited", stating the corresponding parameter.

White noise - noise with parameters impossible to generate. The amplitude, spectrum, power and number of spectral lines of real noise are always limited and do not correspond to the definition of white noise.

III. NEW TERMS
New terms are introduced, defined and listed in alphabetical order. The given definitions are self-explanatory.

Absolute accuracy (of conversion) - the basic technical term describing a conversion process giving reproducible results. It should be compared with resolution and precision, which are commercial terms and do not always give repetitive results.

Angle of the first sample θ - the angle between the beginning of the coordinate system (x = y = 0 or t = 0) of the signal and the moment of the first sample. It is measured in degrees and is defined especially for an SS, CS and SBLS.

Angle of the maximal deviation from the maximal value of the SS, or the angle of the maximal amplitude error when an SS is sampled (θ_Emax).
θ_Emax for N >= 2 is given by the equation below:

θ_Emax = 360°/(2N) = 180°/N (1)

(Law of the) Average amplitude error during the conversion of an SS (DC and phase components are zeros) - given by the equation below:

E_avg = 1 - cos(90°/(2N)) (2)

(Law of the) Average amplitude error during the conversion of a CS (DC and phase components are zeros) - given by the equation below:

E_avg = 1 - sin(90°/(2N)) (3)

Basic parameters of the sampling process - The three basic parameters of the idealized but still representative sampling process are: 1/ the signal sampling factor (SSF) N, 2/ the number of bits of the converter n (the accuracy of the converter, not its resolution) and 3/ the angle of the first sample

φ. The sampling process is fully defined by these three parameters.

Coefficient (factor) of the change of the frequency K_ch - defined as follows:

K_ch = F_it/F_rs (4)

where F_it is the initial frequency (of a sine or co-sine wave) and F_rs is the resulting frequency. K_ch is used to show that the frequency of a sine or co-sine wave has changed (usually due to some non-linear process and filtering).

Factor of the sample and hold circuit F_s/h - a term describing the effectiveness of adding a sample and hold circuit. It should be greater than 1 to show increased performance. If F_s/h <= 1 there is no increase in performance and in general the S/H should not be added.

F_s/h = T_ap(s/h)/T_ap(adc) (5)

where T_ap(s/h) is the aperture time of the sample and hold circuit and T_ap(adc) is the aperture time of the converter.

F_s3dB, or the 3 dB sampling frequency - the sampling frequency guaranteeing a maximal error E_max less than or equal to 3 dB. The corresponding equation is F_s3dB = 4·F_max. It is also called the frequency of 3 dB modulation.

F_s100 - the main (first) frequency of 100% modulation. The term is intended to replace the "Nyquist frequency" or the "frequency of exact reconstruction". The corresponding equation is F_s100 = 2·F_max. F_s100 is defined with SSF N = 2 and with a maximal amplitude error between 0 and 100% included.

(Law of the) Maximal amplitude error E_ssmax during the conversion of an SS (DC and phase components are zeros) - given by the equation below:

E_ssmax = 1 - sin(90° - 180°/N) = 1 - cos(180°/N) (6)

(Law of the) Maximal amplitude error E_csmax during the conversion of a CS (DC and phase components are zeros) - given by the equation below:

E_csmax = 1 - cos(90° - 180°/N) = 1 - sin(180°/N) (7)

(Law of the) Minimal errors during the conversion of an SS or CS (DC and phase components are zeros) - states that an SSF

N = 4k (8)

(k = 1, 2, 3, ...) gives the opportunity to obtain zero amplitude, phase and frequency errors during regular sampling; the DC component error is always zero.
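The maximal-error law for a sampled sine wave and the sampling theorem that inverts it can be cross-checked numerically. The brute-force sketch below is not from the paper; it assumes the reconstructed forms E_max = 1 - cos(180°/N) and N = 180/(90 - arcsin(1 - E_max)):

```python
import numpy as np

# Brute-force check: for N samples per period, sweep the angle of the
# first sample and record the worst gap between the true peak (1.0)
# and the best available sample of sin().
def measured_max_error(N, offsets=4001):
    worst = 0.0
    for theta0 in np.linspace(0.0, 360.0 / N, offsets):
        angles = theta0 + np.arange(N) * 360.0 / N   # one full period
        worst = max(worst, 1.0 - np.sin(np.deg2rad(angles)).max())
    return worst

for N in (2, 4, 8, 16):
    predicted = 1.0 - np.cos(np.deg2rad(180.0 / N))
    assert abs(predicted - measured_max_error(N)) < 1e-3
    # the sampling theorem inverts the law and recovers N exactly
    N_back = 180.0 / (90.0 - np.degrees(np.arcsin(1.0 - predicted)))
    assert abs(N_back - N) < 1e-9
print("max-error law and its inversion agree for N = 2, 4, 8, 16")
```

For N = 4 this gives E_max = 1 - cos 45° ≈ 29.3%, consistent with the 3 dB sampling frequency F_s3dB = 4·F_max.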
Non-reproducible bits (N_nrb) - bits which cannot be reproduced in a repetitive way, e.g. the bits determining the resolution (e.g. of a converter). N_nrb is the difference between the bits (e.g. of the converter) determining the resolution N_res (or the whole number of bits N_all) and the bits determining the accuracy, i.e. the reproducible bits (N_ab). N_nrb cannot be reproduced or predicted in all of the test cases. The equations below clarify the definition:

N_nrb = N_all - N_ab (9)

N_nrb = N_res - N_ab (10)

One dimensional sampling of an SBLS - the sampling of the signal given by the equation below:

X(t) = X_m·sin(ω_x·t + φ_x) + X_0 (11)

During the one-dimensional sampling at least four samples are needed in order to calculate the four parameters of the signal.

Phase modulation during the sampling process - the change of the amplitude of the samples when the angle of the first sample is changed. (The sampling and sampled frequencies and the amplitude of the sampled SBLS are constants.)

Postulate about the basic properties of the real signals - the postulate stating that every real signal has the following basic properties: 1. Finite amplitude and peak to peak amplitude. 2. Finite power in every moment and during its existence. 3. Finite spectrum. 4. Finite number of spectral lines. 5. Finite slew rate, first derivative and every other derivative. 6. Could be represented as a finite sum of SBLS. 7. Is a smooth (uninterrupted) function.

Principle "one sample per parameter to reconstruct" - states that signals need at least one sample per parameter for parameter calculation and reconstruction.

Real signal - a signal which could be presented as a finite sum of the simplest band limited signals (SBLS) and with the following properties: 1. Finite slew rate (finite first and every other derivative). 2. Finite number of maximums and minimums. 3. Representing a continuous mathematical function. 4. Finite number of spectral lines. 5.
Finite energy (power) in each moment and during its finite existence.

Reproducible bits - bits which could be reproduced easily and repeatedly. These determine the accuracy (e.g. of a converter).

Sampling angle θ_s - defined for an SS or CS signal by the equation below:

θ_s = 360°/N (12)

It is measured in degrees.

Angle of the maximal deviation from the maximum of the sine or cosine signal - the angle of the maximal amplitude error during the sampling:

θ_smax = 180°/N (13)

It is measured in degrees.

Sampling theorem for SS - states that for an SS (with DC and phase components zero and SF N >= 2) the sampling factor N is given by the equation below when the maximal amplitude error E_max is given:

N = 180/(90 - arcsin(1 - E_max)) (14)

Sampling theorem for CS - states that for a CS (with DC and phase components zero and SF N >= 2) N is given by the equation below when the maximal amplitude error E_max is given:

N = 180/(90 - arccos(1 - E_max)) (15)

Sampling factor (SF) N, or signal sampling factor (SSF) - given by the equation below:

N = F_d/F_s = F_d/F_max (16)

where F_d is the sampling frequency, F_s is the frequency of the sampled sinusoidal or co-sinusoidal signal and F_max is the maximal frequency of the sampled band limited signal (BLS).

SBLS (the simplest band limited signal) - the simplest signal with two lines in its spectrum: one is a direct current (DC) component and the other is a sine or cosine wave. The following two equations are applicable:

A = A_m·sin(2πft + θ) + B (17)

A = A_m·cos(2πft + θ) + B (18)

The SBLS is the simplest test signal with two lines in its spectrum.

Trapezoidal staircase function with no rounded (ideal) angles - the function used to approximate the reconstructed

sampled signal. The finite slew rate is an advantage of that model compared to the rectangular staircase function, but the idealized angles make it too idealized, and it cannot be approximated with a finite sum of SBLS. The trapezoidal function with rounded angles is the better solution.

Trapezoidal function with rounded angles (non-interrupted or continuous trapezoidal function) - the only possible presentation of the reconstructed signal before filtering.

The principle of the limited values (finiteness) of the signal parameters - states that every real signal has finite values of its parameters, e.g. finite slew rate (SR), finite energy, finite spectrum, finite number of spectral lines, etc.

Reproducible bits (N_rb) - the bits (e.g. of the converter) which determine the accuracy and which could be reproduced under the testing conditions.

N_rb = N_all - N_nrb (19)

Errors of the direct reconstruction - the errors between the corresponding parameter of the input analog signal and the parameter of the reconstructed signal. The method of direct reconstruction with an ADC and a DAC with the same number of reproducible bits is used as a reference.

The sampling process is defined as the process of conversion of an analog signal into a staircase function with rounded angles. The sampling rate F_s is the frequency of taking and memorizing the samples. It is related to the number of parameters to reconstruct (k) and to the number of spectral lines to reconstruct (p). The following two rules are respected: 1. At least one sample per parameter to reconstruct. 2. At least four samples per alternating current (SS/CS) spectral line. Applying the two rules simultaneously guarantees exact and predictable signal reconstruction.
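The rule of at least four samples per SS/CS spectral line matches the four parameters of the SBLS. A minimal sketch (illustrative values, not from the paper; the frequency is assumed known, since recovering it too would make the fit nonlinear) recovers amplitude, phase and DC component from four samples:

```python
import numpy as np

# SBLS: x(t) = Xm*sin(w*t + phi) + X0 -- four parameters, four samples.
w = 2 * np.pi * 50.0                       # known angular frequency
Xm, phi, X0 = 3.0, 0.7, 1.5                # ground-truth parameters
t = np.array([0.0, 0.004, 0.009, 0.013])   # four sampling instants
x = Xm * np.sin(w * t + phi) + X0          # the four samples

# x(t) = a*sin(wt) + b*cos(wt) + c is linear in (a, b, c), since
# Xm*sin(wt + phi) = Xm*cos(phi)*sin(wt) + Xm*sin(phi)*cos(wt).
M = np.column_stack([np.sin(w * t), np.cos(w * t), np.ones_like(t)])
a, b, c = np.linalg.lstsq(M, x, rcond=None)[0]

Xm_rec, phi_rec, X0_rec = np.hypot(a, b), np.arctan2(b, a), c
print(Xm_rec, phi_rec, X0_rec)             # recovers 3.0, 0.7, 1.5
```

With fewer samples than parameters the linear system is underdetermined, which is the numerical face of the "one sample per parameter" principle.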
If we keep the band of the signal constant and increase the number of spectral lines, we also increase the number of samples required to reconstruct the parameters of the spectral lines in the complex signal.

Coefficient of changing of the sampling rate - a term intended to replace the term "decimation"; an appropriate coefficient of changing should be defined:

K_chs = F_is/F_os (20)

where F_is is the initial sampling frequency and F_os is the resulting (output) sampling frequency.

The following terms are much more self-explanatory and representative than the terms "total harmonic distortion (THD)" and "aliasing":

In band added frequencies - the sum of (the energies of) the added frequency components in the signal band due to the nonlinear sampling (and reconstruction) process. The sum could be divided by the energy (amplitude, power) of the initial signal.

Out band added frequencies - the sum of (the energies of) the added frequency components outside the initial signal band due to the nonlinear sampling (and reconstruction) process. The sum could be divided by the energy of the initial signal.

Total added frequencies - the sum of (the energies of) all frequency components added to the signal due to the nonlinear sampling (and reconstruction) process. It could be divided by the energy of the initial signal.

IV. CONCLUSIONS
Basic terms in the signal sampling theory are misleading and need replacement. Some of the basic concepts and models should be reevaluated and replaced with models which are closer to the real signals. Repeating wrong terms does not make them useful; they should be replaced with more accurate, self-explanatory terms. New terms are proposed and defined.

REFERENCES
[1] Steven W. Smith, The Scientist and Engineer's Guide to Digital Signal Processing, Second Edition, California Technical Publishing, USA.
[2] Cygnal Integrated Products, Inc., "Improving ADC resolution by oversampling and averaging", An8, May, USA.
[3] James Potzick,
"Noise averaging and measurement resolution (or A little noise is a good thing)", Review of Scientific Instruments, Vol. 70, No. 4, p. 38, April 1999.
[4] National Semiconductor Corp., Data Conversion/Acquisition Databook, 1984, USA, Chapters 5 and 7.
[5] National Semiconductor Corp., National Analog and Interface Products Databook, USA, Chapter 6.
[6] V. A. Kotelnikov, "On the transmission capacity of the 'ether' and of cables in electrical communications", Proceedings of the First All-Union Conference on the Technological Reconstruction of the Communications Sector and the Development of Low-Current Engineering, Moscow, 1933.
[7] C. E. Shannon, "Communication in the presence of noise", Proceedings of the IRE, Vol. 37, pp. 10-21, January 1949.
[8] National Instruments, AN9, "Data Acquisition Specifications - a Glossary", USA.
[9] A. Oppenheim, A. Willsky, I. Young, Signals and Systems, Prentice Hall, Inc., 1983.
[10] Roger Lockhart, "Enhancing Data Acquisition with Intelligent Oversampling", DATAQ Instruments; originally published in SENSORS, June 1997.
[11] Harper Collins Publishers, Collins COBUILD English Dictionary, 1995, Great Britain.
[12] Penguin Group, Dictionary of Physics, edited by Valerie Illingworth, second edition, printed in England.
[13] Petre Tzv. Petrov, Sampling Analog Signals (abstract with formulas, tables and figures), Sofia, 2004, Bulgaria, ISBN

268 SWOT Analysis of Method for Automatic Vectorization of Digital Photos Into 3D Model

Zoran G. Kotevski and Igor I. Nedelkovski

Abstract - Raster and vector are the two basic data structures for storing and manipulating images and graphics data on a computer. The conversion from vector to raster is easy and done automatically in seconds, but it is rather complicated to perform an automatic conversion from raster to vector. There have been extensive research efforts focused on this issue during the past decades. This paper analyzes methods for creating a 3D model out of digital photos, with emphasis on the method of automatic vectorization, and gives a SWOT analysis of this method.

Keywords - Raster, rasterization, vector, vectorization.

I. INTRODUCTION
The two basic data structures for storing and manipulating images and graphics data on a computer are raster and vector. All of the major computer graphics software packages available today are primarily based on one of these two structures: they are either raster based or vector based, although they may have some extended functions to support other data structures.

Raster images are presented in the form of a bitmap (matrix) made from individual elements, where each individual element is defined by its position and color. These elements, which form the picture, are called pixels. In this case the file size is determined by the number of pixels forming the picture (resolution) and the number of bits used to define the color of each pixel. Raster data structures usually produce large file sizes. Acquisition of raster images is easy and is done using a scanner or a digital camera.

On the other hand, the data structure of vector images comes in the form of points and lines that are mathematically and geometrically associated. Points are stored using their coordinates in a two- or three-dimensional space, and lines or curves are stored as a series of points that indicate the line or curve.
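The file-size rule stated above (number of pixels times bits per pixel) is easy to make concrete; the numbers below are illustrative, not taken from the paper:

```python
# Uncompressed raster size = width * height * bits_per_pixel / 8.
width, height, bits_per_pixel = 1920, 1080, 24
raster_bytes = width * height * bits_per_pixel // 8
print(raster_bytes)   # 6220800 bytes, i.e. roughly 6 MB
# A vector description of the same scene (a few hundred control
# points of curves) would typically occupy only kilobytes.
```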
In general vector data structures produce smaller file sizes than their raster equivalents. Also, topology among graphical objects is much easier to represent using the vector form instead of the raster form data structure. Acquisition of images in vector data form is much more difficult to perform than raster image acquisition, because of its abstract data structure, topology between objects and associated attributes. Acquisition of vector data structures of images today is done by the process of vectorization.

(Zoran G. Kotevski and Igor I. Nedelkovski are with the Faculty of Technical Sciences, I.L.Ribar bb, 7000 Bitola, Macedonia.)

II. WHAT IS VECTORIZATION AND WHAT IS ITS USE
Vectorization basically means acquisition of vector data. Raster image acquisition, as mentioned before, is easy to perform using a scanner or a digital camera, but vector acquisition is much harder. Vectorization today has a wide area of implementation, including: 1) archaeology, 2) architecture and preservation, 3) accident reconstruction, 4) creating digital terrain maps, 5) film, video and animation, 6) plant and mechanical engineering, etc. Because of its wide area of use there have been extensive research efforts focused on this issue during the past decades, especially on automation of the process and on 3D vectorization.

III. METHODS FOR VECTORIZATION
A. Manual vectorization
Manual vectorization is the method mostly used in cases where there are limitations in the quality of the taken pictures, as well as in the possibilities of taking pictures from different angles. The method uses the raster image as a backdrop directly in the software, and the line tracing is done manually by the operator. During the manual tracing there is great flexibility in the process, and different kinds of adjustments can be implemented and realized.
The workflow of this method includes: 1) photograph acquisition, 2) line tracing, 3) creating the 3D model, 4) texturing the surfaces. This method produces a good quality vector model with good precision and accuracy, but it requires a well skilled operator and most often can be very time consuming.

B. Semi-automatic vectorization
This is a more powerful vectorization method, which requires pictures of higher quality. The whole idea of the process is to replace the rather boring and very time consuming manual line tracing. In this case the actual tracing is done automatically,

by the software, which uses complex algorithms for line tracing, shape recognition, topology creation and attribute assignment. After automated line tracing, the workflow is similar to that of manual vectorization. The 3D model must be generated manually on the basis of traced images from different sides of the object, by extruding, revolving, sweeping, etc. the traced 2D vector images. This semi-automatic method produces a 3D model with greater precision and is less time consuming than manual vectorization, but requires images of greater quality.

C. Automatic vectorization
Automatic vectorization is an efficient and effective method for modeling real world objects. 3DSOM's patented technology for model generation and sophisticated texturing allows a 3D model to be created quickly and inexpensively, while requiring a low level of technical skill and no expensive hardware platforms. The workflow of this method is rather different from the previous methods. It requires a larger number of pictures (about ) to be taken from different angles. The number of photographs needed to create the 3D model is around 5 taken from a lower angle and 3-7 taken from a higher angle relative to the horizontal centerline of the object, as well as some pictures from top and bottom perspective if needed. The quality of the photographs is the key element of this method, so the photo session needs a great deal of attention.

Fig. 3. Some of the total of 9 pictures taken for vectorization of this model (toy cat)

After the pictures are imported into the software, the next step is automatic masking of the images, which means cutting out the background. A mask is created to identify the outline or silhouette of the object, as seen from the camera's point of view.

Fig. 1. Laboratory for the photo session

Fig. 4. Process of masking the imported pictures

Fig.
Position of the model for photo session

The Generate Wireframe process that constructs the 3D model can be represented as a sculptor taking a block of clay and projecting each image mask onto it from the viewpoint of the camera. Wherever the projected mask appears on the clay, the clay is sliced away until the model's silhouette matches

the masked silhouette obtained from the object's photo taken from that perspective.

Zoran G. Kotevski and Igor I. Nedelkovski

Fig. 5. Look of the model after applying the Generate Wireframe and Mesh Optimization techniques

Mesh optimization and texturing are also done automatically, in a matter of seconds, which puts this method ahead of the previous two.

Fig. 6. Texturing the surfaces of the model; this is usually performed automatically by the software, but some surfaces sometimes also require manual texturing

Fig. 7. Final look of the 3D vector model after applying the technique of texturing its surfaces

The limitations of this method are the following: it can be used only for small size objects, and it is complicated to vectorize transparent objects or objects with large holes, like a coffee cup.

IV. SWOT ANALYSIS

Table I. SWOT analysis of manual vectorization

Strengths:
- Good quality of the created 3D model
- Line tracing flexibility
- Greater control of the operator

Weaknesses:
- Very time consuming
- Medium level of accuracy and precision

Opportunities:
- 3D model construction from low quality pictures
- Unavailability of multiple photos from different angles

Threats:
- Automation of the process of vectorization
- Complexity of the object being vectorized
- Operator's skills

Table II. SWOT analysis of semi-automatic vectorization

Strengths:
- Greater precision of the created model
- Time consumption

Weaknesses:
- Requires photos of greater quality

Opportunities:
- Small amount of photos but

Threats:
- Automation of the process
- Unavailability of photos from different angles
- Manual optimization and texturing

Table III. SWOT analysis of automatic vectorization

Strengths:
- Great precision of the 3D model created
- Exceptionally low time consumption
- Low requirements regarding operator skills

Weaknesses:
- Requires a greater number of high quality pictures
- Large objects
- Transparent objects and objects with larger holes (e.g. a coffee cup)

Opportunities:
- Creating a model of complex real world objects

Threats:
- Unavailability of photos from different angles
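The Generate Wireframe step described above (carving a block of "clay" with each silhouette mask) can be sketched in a few lines. This is an illustrative toy with two orthographic cameras and hand-made masks, not 3DSOM's actual algorithm:

```python
# Toy sketch of silhouette ("visual hull") carving: start from a solid
# block of voxels and keep only those whose projection falls inside every
# camera's mask. The two orthographic views and the masks are assumptions.
def project(view, x, y, z):
    # Orthographic projections: "front" drops z, "side" drops x.
    return (x, y) if view == "front" else (z, y)

def carve(voxels, masks):
    """Keep a voxel only if every view's mask covers its projection."""
    return {(x, y, z) for (x, y, z) in voxels
            if all(project(view, x, y, z) in mask for view, mask in masks)}

block = {(x, y, z) for x in range(4) for y in range(4) for z in range(4)}
masks = [("front", {(x, y) for x in range(2) for y in range(4)}),
         ("side", {(z, y) for z in range(3) for y in range(4)})]
model = carve(block, masks)  # voxels with x in {0, 1} and z in {0, 1, 2}
```

A real pipeline then refines the carved volume into a polygon mesh and textures it from the photographs.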

V. CONCLUSION

From this analysis it can be concluded that for vectorization of larger objects, and of objects that cannot easily be photographed, the first two methods are more convenient, while for smaller objects the more appropriate choice is automatic vectorization, which can be used above all in the field of preservation of historical and cultural heritage, as well as for 3D representation of real objects in the animation industry.

REFERENCES
[1] I. Nedelkovski, Z. Kotevski, Internet and Multimedia, Faculty of Technical Sciences - Bitola, 5.

SESSION ECST II: Electronic Components, Systems and Technologies II


Monitoring System of Pulsation Processes in a Milking Machine

Anatolii T. Aleksandrov and Nikola D. Draganov

Abstract - The paper treats alternative solutions for monitoring and control of the processes taking place in the milking bowl of a milking machine. Observations have been conducted and experimental recordings have been made of the processes in the milking bowl using a video camera placed in the milk chamber of the milking machine. Experimental results have also been obtained using an optic and a galvanomagnetic sensor. On the basis of the results obtained, a comparative analysis has been made and conclusions have been drawn about the operation of the milking machine.

Keywords - milking bowl, milk chamber, video camera, optic sensor, galvanomagnetic sensor

Anatolii T. Aleksandrov is with the Technical University, 4 H. Dimitar str., 53 Gabrovo, Bulgaria; Nikola D. Draganov is with the Technical University, 4 H. Dimitar str., 53 Gabrovo, Bulgaria.

I. INTRODUCTION

Modern stockbreeding requires constant development and improvement of its automation tools. Milking machines are widely applied as tools for labour automation in stockbreeding, and to a great extent they determine the efficiency, as well as the cost and quality, of the milk [, 3, 4, 5]. It is interesting to study the processes taking place in the teat cup cluster of milking machines and to establish the relationship between the operation of the pulsator and the position of the rubber teat cup. The aim of the present paper is the monitoring, study and control of the processes in the teat cup cluster of the milking machine. Its implementation creates conditions for improving the operating characteristics of milking machines [-6].

II. TEST SETUP

To improve the operation and efficiency of milking machines, a good knowledge and description of the processes taking place in the teat cup cluster is necessary. To achieve this, alternative solutions for monitoring and control of these processes have to be developed. Experiments have been conducted for observing the processes taking place in the milking chamber, using a video camera, model GV5, enabling transmission and processing of information on a PC [7].

The circuit of the test setup is shown in Fig. 1.

Fig. 1. Circuit of the test setup

The lens of the video camera (CCD matrix) 3 has been mounted in the artificial teat. LEDs 4 have been mounted around the video camera to provide its background light. The control block of the camera provides the connection to the personal computer 7 by means of a video cable 6. The video block created in this way is placed in the teat cup 8 of the teat cup cluster 9. Block 5 is composed of a LED, and its function is to light the inside of the teat cup. The power supply of the test setup is provided by two stabilized rectifiers. The video camera, model GV5, enables transmission and processing of information on a PC using the GeoCenter software. It has a resolution of 4 lines per frame. The video controller GV-6V, operating on an NTSC video system, can have 4 video cameras connected to it. There is no strobing input, but frame synchronization is possible by means of the

camera illuminance. The minimum configuration that the PC needs to have for the video controller and the camera to be installed is: Windows 98 operating system, Pentium II CPU, 8 MB RAM, 44 MHz bus speed, 6 MB video controller, 5 GB HDD [7]. A film of any duration, with extension *.avi, can be recorded. The test setup enables recording and monitoring of the processes in the teat cup cluster at different illuminance, with or without background light. The film can be broken into frames, the resolution being 5 frames per second, using the VirtualDubMod software product. Therefore, in addition to monitoring of the processes taking place in the teat cup cluster, photographs can be taken at specific moments, and a qualitative and quantitative assessment of the shape of the milking chamber can be made. Fig. 2 and Fig. 3 show frames of the milking chamber from the first phase of the milking process (squeeze) and the second one (massage + stimulation).

Fig. 2. Frame of the milking chamber from the first phase of the milking process - squeeze phase

Fig. 3. Frame of the milking chamber from the second phase of the milking process - massage + stimulation

The experiments that have been conducted give a good qualitative picture of the processes taking place in the teat cup cluster. By processing the resultant image using the MathCAD software package (Fig. 4 and Fig. 5), a quantitative assessment of the processes in the milking chamber can be obtained. The results are subject to further processing and analysis.

Fig. 4. Processing the resultant image using MathCAD

Fig. 5. Processing the resultant image using MathCAD

The processes in the milking chamber of the teat cup cluster have also been studied experimentally using optical and galvanomagnetic sensors [, 6]. Fig. 6 presents the flow chart, and Fig. 7a and 7b the conceptual electric circuits, of the test setups.
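As an illustration of the kind of quantitative frame assessment described above, the following sketch computes the lit cross-section area of the milking chamber from one binarized frame. The frame data and the area measure are assumptions for illustration; the authors used MathCAD for the actual processing:

```python
# Illustrative sketch (not the authors' MathCAD procedure): a quantitative
# measure of the milking-chamber shape from one binarized video frame,
# here simply the lit cross-section area as a fraction of the frame.
def chamber_area_fraction(frame):
    """frame: 2D list of 0/1 pixels (1 = lit chamber interior)."""
    lit = sum(sum(row) for row in frame)
    total = len(frame) * len(frame[0])
    return lit / total

# Toy 4x4 frame: the squeeze phase leaves only a narrow lit band
frame = [[0, 1, 1, 0],
         [0, 1, 1, 0],
         [0, 1, 1, 0],
         [0, 1, 1, 0]]
fraction = chamber_area_fraction(frame)  # 8 lit pixels of 16 -> 0.5
```

Tracking this fraction frame by frame gives one simple numeric signature of the squeeze and massage phases.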
Both types of sensors provide information about the state of the rubber teat cup by means of double energy conversion. In the circuit with the optical sensor, the photosensitive element (phototransistor 3) has been mounted in the artificial teat 5 in place of the video camera. The milking chamber 8 is lit by means of LEDs 4. Under the action of the vacuum the teat cup 7 contracts and the luminous flux to the phototransistor is reduced, i.e. the position of the teat cup is converted into an equivalent light signal. The light signal is converted into an equivalent electrical signal in the phototransistor [, 6]. The galvanomagnetic sensor also gives indirect information about the contraction of the teat cup. This is achieved by sticking a magneto-sensitive element (a Hall element) on the outer surface of the teat cup. A permanent magnet is fixed against it. When the teat cup is loose, the distance between the magneto-sensitive element and the magnet is equal to the diameter of the teat cup in this area. In this case the Hall voltage measured by the voltmeter is the lowest. When the teat cup contracts as a result of the vacuum, the distance between the magnet and the Hall element shortens, and the generated Hall voltage rises as the contraction of the teat cup increases []. The conversion characteristics measured experimentally, U_CE = f(L) (U_CE - voltage drop across the phototransistor; L - internal diameter of the teat cup) at I_F = const (I_F - current

across the LED) with the optical sensor are shown in Fig. 8, and the conversion characteristics U_H = f(L) show the change in the diameter of the teat cup as a function of the Hall voltage (U_H - Hall voltage; L - internal diameter of the teat cup).

Fig. 6. Circuit of the test setup for optical and galvanomagnetic measurements

Fig. 7. Conceptual electric circuits of the test setups

Fig. 8. Conversion characteristics measured experimentally with the optical sensor

Both signals are information signals and can be used successfully for monitoring and control of the processes in the milking chamber. The conversion characteristics measured experimentally enable an indirect quantitative assessment of the degree of contraction of the teat cup through the value of the measured voltages (U_CE and U_H), depending on the sensor used. Although the conversion characteristics are not linear, they can be used for control of the milking machine, since the two final states of the teat cup are the important ones: squeezed and loose. The conversion characteristics that exhibit the greatest conversion transconductance are the ones obtained for a current across the LED I_F = ,5 mA and a current across the Hall element Ie = 9 mA, which also determines the higher sensitivity of the respective sensors in these operating modes [, 6].

III. CONCLUSION

Fig. 9. Conversion characteristics measured experimentally with the galvanomagnetic sensor

Experimental studies have been conducted of the processes taking place in the teat cup cluster of the milking machine, using a video camera, model GV5, enabling

transmission and processing of information on a PC by means of the GeoCenter software. Circuits with an optical and a galvanomagnetic sensor have been proposed for studying the processes in the milking chamber. Experimental conversion characteristics have been obtained, and the measured voltage values U_CE and U_H are used as a quantitative criterion for assessing the degree of contraction of the teat cup. The video camera and the test setup for studies by means of an optical sensor can only be used in simulation of the milking process, while the test setup with the galvanomagnetic sensor can find application under actual working conditions.

REFERENCES
[1] Alexandrov, A., Discrete Semiconductor Elements, Gabrovo, Vasil Aprilov Publishing House (in Bulgarian).
[2] Banev B., V. Valashev, K. Peichev, Al. Blagoev, "Study of pulsation camera of milking machine working with vibration pulsator Biopuls", periodical Stock-science, 987.
[3] Banev B., K. Peichev, "Study of stress of the rubber teat cup over a false teat", SU Stara Zagora, 3.
[4] Banev B., B. Bonchev, "Study of possibilities to use the pulse machine POLANES in milking machines for sheep", periodical Agricultural Technics, 4, 5.
[5] Valashev V., K. Peichev, B. Banev, "Analysis of mechanical performance of rubber teat cups with different levels of tightening", periodical Agricultural Technics, 3, 3.
[6] Valkov S. A., Electronics and Semiconductor Elements and Integrated Circuits, Sofia, Technics, 99.
[7] From September 7.5.

A Method for Improving the Stability of CMOS Voltage Controlled Ring Oscillators

Goran Jovanović and Mile Stojčev

Abstract - A CMOS voltage controlled ring oscillator based on an N-stage single-ended chain of different inverter types is described in this paper. The proposal is characterized by increased frequency stability (Δf/f < %) in terms of power supply voltage variations with respect to standard solutions (Δf/f > 4%). The presented results are obtained using HSpice simulation and a CMOS library model, level 49, for .μm technology.

Keywords - voltage controlled oscillator, ring oscillator, CMOS, frequency stability

I. INTRODUCTION

A voltage controlled oscillator (VCO) is one of the most important basic building blocks in analog and digital circuits []-[6]. There are many different implementations of VCOs. One of them is the ring oscillator based VCO, which is commonly used in the clock generation subsystem. The popularity of ring oscillators is a direct consequence of their easy integration. Due to their integrated nature, ring oscillators have become an essential building block in many digital and communication systems. They are used as voltage-controlled oscillators (VCOs) in applications such as clock recovery circuits for serial data communications [], [], disk-drive read channels [3], on-chip clock distribution [4], and integrated frequency synthesizers [5], [6]. The design of a ring oscillator involves many tradeoffs in terms of speed, power, area, and application domain. The problem of designing a ring oscillator is the focus of our interest in this paper, which proposes a suitable method for increasing the frequency stability of a CMOS ring VCO. The rest of the text is organized as follows. In Section 2, we give a brief review of voltage controlled ring oscillators and define some crucial operating parameters. A hardware description of the proposed ring oscillator is presented in Section 3.
In addition, we present the simulation results which relate to frequency stability in terms of temperature and supply voltage variation. In Section 4, we define the terms jitter and phase noise in ring oscillators, and present the appropriate simulation results. Finally, conclusions are given in Section 5.

Goran S. Jovanović and Mile K. Stojčev are with the Faculty of Electronic Engineering, University of Niš, Aleksandra Medvedeva 4, 8 Niš, Serbia.

II. CMOS RING VCO - A REVIEW

A ring oscillator is comprised of a number of delay stages, with the output of the last stage fed back to the input of the first. To achieve oscillation, the ring must provide a phase shift of 2π and have unity voltage gain at the oscillation frequency. Each delay stage must provide a phase shift of π/N, where N is the number of delay stages. The remaining π phase shift is provided by a dc inversion [7]. This means that for an oscillator with single-ended delay stages, an odd number of stages is necessary for the dc inversion. If differential delay stages are used, the ring can have an even number of stages if the feedback lines are swapped. Examples of these two circuits are shown in Fig. 1.

Fig. 1. Ring oscillator types: (a) single-ended and (b) differential

In order to determine the frequency of the ring oscillator we use its linear model, given in Fig. 2.

Fig. 2. Linear model of a ring oscillator

We assume that all inverting stages are identical and that each can be modeled as a transconductance loaded by a parallel connection of a resistor R and a capacitor C. The gain of an inverting stage is defined as

A_1(jω) = A_2(jω) = ... = A_N(jω) = -g_m·R / (1 + jωRC).    (1)

According to the Barkhausen criteria, the ring oscillator is operative when the following conditions are satisfied:

|A_1(jω)·A_2(jω)·...·A_N(jω)| ≥ 1, and

arg A(jω) = θ = arctan(ωRC) = kπ/N.    (2)

The frequency of oscillation is given by

ω₀ = tan θ / (RC),    (3)

and the minimal single stage gain is

g_m·R ≥ 1 / cos θ.    (4)

Alternatively, we can derive an equation for the frequency of oscillation if we assume that each stage provides a delay of t_d. The signal goes through each of the N delay stages once to provide the first π phase shift in a time of N·t_d. Then, the signal must go through each stage a second time to obtain the remaining π phase shift, resulting in a total period of 2N·t_d. Therefore, the frequency of oscillation is

f = 1 / (2N·t_d).    (5)

III. RING OSCILLATOR INVERTING STAGE

As we have already mentioned, the ring oscillator is realized with N inverter stages. There are numerous types of inverter stages with which a ring oscillator can be realized [8], [9]. Some of the standard solutions are pictured in Fig. 3. The designs given in Fig. 3 b), c), d) are of the current starved type, for which the charging and discharging output capacitor current is limited by a bias circuit. More details related to the realization of this type of inverter stage can be found in References [8], [9]. Relative frequency deviations in terms of temperature variations for 3-stage ring oscillators based on the inverter stage types presented in Fig. 3 are given in Fig. 4. In general all frequency deviations have similar behavior, but the basic type (Fig. 3a)) and current starved with symmetrical load (Fig. 3d)) inverters have the highest sensitivity, while the current starved with output-switching (Fig. 3b)) inverter has the lowest. The ratio of relative frequency deviations between the basic type (Fig. 3a)) and current starved with output-switching (Fig. 3b)) inverters is 5:1. The difficulty in obtaining a value for the frequency arises when trying to determine t_d, mainly due to the nonlinearities and parasitics of the circuit.
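Equations (2)-(5) can be checked numerically. The sketch below, with illustrative R, C and t_d values not taken from the paper, computes the oscillation frequency from the linear model and from the per-stage delay model:

```python
import math

# Numerical check of the ring-oscillator formulas; R, C and t_d below are
# illustrative values, not taken from the paper's simulations.
def osc_freq_linear(N, R, C):
    """f0 from the linear model: theta = pi/N per stage, w0 = tan(theta)/(R*C)."""
    w0 = math.tan(math.pi / N) / (R * C)
    return w0 / (2 * math.pi)

def min_stage_gain(N):
    """Minimal per-stage gain g_m*R = 1/cos(pi/N) needed for oscillation."""
    return 1.0 / math.cos(math.pi / N)

def osc_freq_delay(N, t_d):
    """f = 1/(2*N*t_d) from the per-stage delay model, Eq. (5)."""
    return 1.0 / (2 * N * t_d)

f_lin = osc_freq_linear(3, 10e3, 50e-15)   # 3 stages, R = 10 kOhm, C = 50 fF
g_min = min_stage_gain(3)                  # = 2 for a 3-stage ring
f_del = osc_freq_delay(3, 100e-12)         # t_d = 100 ps per stage
```

Note that for N = 3 the minimal per-stage gain is exactly 2, since cos(π/3) = 0.5.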
As referred to in [7], the delay per stage is defined as the change in output voltage at the midpoint of the transition, V_SW, divided by the slew rate, I_ss/C, resulting in a delay per stage of C·V_SW/I_ss. Using definition (5), the oscillation frequency is given by

f = I_ss / (2N·V_SW·C).    (6)

Fig. 3. Inverter: (a) basic type; (b) current starved with output-switching; (c) current starved with power-switching; (d) current starved with symmetrical load

Fig. 4. Relative frequency deviation in terms of temperature variation

Relative frequency deviations in terms of power supply voltage variations for 3-stage ring oscillators based on the inverter stage types presented in Fig. 3 are given in Fig. 5. As can be seen from Fig. 5, the basic type (Fig. 3(a)) and current starved with symmetrical load (Fig. 3(d)) inverters have characteristics with negative slope, while the current starved with output-switching (Fig. 3(b)) and current starved with power-switching (Fig. 3(c)) inverters have characteristics with positive slope. The absolute value of the inverters' sensitivity as a function of power supply voltage variation is within a range of %, excluding the current starved with power-switching (Fig. 3c)) inverter, which has a sensitivity of 5%. Taking into consideration the opposite slopes of the relative frequency deviation characteristics in terms of power supply voltage variations of the mentioned inverters (Fig. 5), we can conclude that it is reasonable to design a ring oscillator composed of a cascade chain of both inverter types. For example, odd numbered inverters can have positive, while even numbered have negative slope. In this way, the relative

frequency deviation in terms of power supply voltage can be drastically reduced (by more than %). The relative frequency deviations in terms of power supply voltage for all three types of ring oscillators pictured in Fig. 6 are given in Fig. 7. By analyzing the results presented in Fig. 7 we can conclude the following: the relative sensitivity of the ring oscillator from Fig. 6 a) is less than %, while for those given in Fig. 6 b) and c) it is less than %.

Fig. 5. Relative frequency deviation in terms of power supply voltage variation

Several typical design solutions of 3-, 5- and 7-stage ring oscillators with reduced sensitivity are given in Fig. 6 a), b) and c), respectively. We call them combined ring oscillators. Note that in the combined ring oscillators the odd numbered inverter stages are implemented as the basic type, while the even numbered ones are current starved with output-switching inverters.

Fig. 6. Combined ring VCOs

Fig. 7. Relative frequency deviation in terms of power supply voltage variation for the proposed ring VCOs

IV. JITTER AND PHASE NOISE IN RING OSCILLATORS

In general, CMOS circuits are sensitive both to power supply and temperature variations, as well as to noise generated in the IC's building blocks (noise is injected through the power supply and the substrate). Due to these effects, the propagation delay t_d is variable [], [], []. As a consequence there are variations of t_d with respect to its nominal value. This deviation is manifested as variation of the rising and falling pulse edges, and is referred to as jitter (see Fig. 8).

Fig. 8. Jitter effect

As can be seen from Fig. 8, the jitter for the rising edge is defined as an rms time error value, Δt_d. The normalized jitter value is defined as the ratio between the effective time error and its nominal delay value, i.e. Δt_d,rms/
t_d. Consider now a VCO with nominal period T, and with a timing error accompanying each period that is Gaussian, with

zero mean and variance (Δt_VCO,rms)². If this timing error is expressed in terms of phase, Δφ = 2πΔt/T, then the variance of the phase error per cycle of oscillation is given by []

σ²_Δφ = (2π·Δt_VCO,rms / T)².    (7)

The phase noise power spectral density expressed in terms of frequency is given by []

S_φ(f) = (f₀³ / f²)·(Δt_VCO,rms)².    (8)

a) For frequency stability in terms of temperature variations the best performance (Δf/f < %) is obtained with current starved inverters with output-switching;
b) for frequency stability in terms of power supply voltage variations the best performance (Δf/f < 4%) is obtained with current starved inverters with power-switching;
c) by realizing combined types of ring oscillator the relative frequency deviation in terms of power supply voltage variations can be significantly decreased (Δf/f < %) with respect to the best standard solutions (Δf/f > 4%);
d) with respect to phase noise, ring oscillators based on current starved inverters with output-switching have the best performance (phase noise approx. .6 rad).

Fig. 9. Phase noise [rad] for the oscillator types considered: current starved with power-switching, basic type, current starved with output-switching, current starved with symmetrical load, and the combined rings (x current starved + x basic type, x current starved + 3x basic type, 3x current starved + 4x basic type)

The amount of phase noise for all types of ring oscillators discussed in this paper is sketched in Fig. 9. By analyzing Fig. 9 we can conclude that the best performance (phase noise approx. .6 rad) is obtained with ring oscillators based on current starved inverters with output-switching, while the worst (phase noise approx. .3 rad) corresponds to ring oscillators realized with the basic type or current starved with power-switching inverters.
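Equations (7) and (8) can be exercised numerically; in this sketch the oscillator period and rms jitter are assumed values for illustration only:

```python
import math

# Jitter-to-phase-noise conversion per Eqs. (7) and (8); the period and
# rms jitter values are assumed for illustration.
def phase_error_variance(dt_rms, T):
    """Per-cycle phase-error variance: (2*pi*dt_rms/T)**2."""
    return (2 * math.pi * dt_rms / T) ** 2

def phase_noise_psd(dt_rms, f0, f):
    """Phase-noise PSD at offset f from carrier f0: (f0**3 / f**2) * dt_rms**2."""
    return (f0 ** 3 / f ** 2) * dt_rms ** 2

T = 1e-9                                   # 1 GHz oscillator
var = phase_error_variance(1e-12, T)       # 1 ps rms period jitter
psd = phase_noise_psd(1e-12, 1 / T, 1e6)   # evaluated at 1 MHz offset
```

The 1/f² roll-off of the PSD reflects the random-walk accumulation of the per-cycle phase error.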
Combined ring oscillators, composed of basic and current starved with output-switching inverters, have a phase noise approximately within the range .6-. rad.

V. CONCLUSION

Ring oscillators are basic building blocks of complex integrated circuits. They are mainly used as clock generating circuits. Many different types of ring oscillators are presented in the literature []-[4]. They differ with respect to architecture, realization of the inverter stages, number of inverter stages, etc. In this paper we have considered the realization of ring oscillators based on four different types of single-ended inverters. The simulation was performed using HSpice Version 3.6 and a library model for .μm CMOS technology. According to the obtained simulation results we can draw conclusions a)-d) listed above.

REFERENCES
[1] C. H. Park, O. Kim, B. Kim, "A 1.8-GHz self-calibrated phase-locked loop with precise I/Q matching", IEEE J. Solid-State Circuits, vol. 36, May 2001.
[2] L. Sun and T. A. Kwasniewski, "A 1.25-GHz 0.35-μm monolithic CMOS PLL based on a multiphase ring oscillator", IEEE J. Solid-State Circuits, vol. 36, June 2001.
[3] J. Savoj and B. Razavi, "A 10-Gb/s CMOS clock and data recovery circuit with a half-rate linear phase detector", IEEE J. Solid-State Circuits, vol. 36, May 2001.
[4] C. K. K. Yang, R. Farjad-Rad, M. A. Horowitz, "A 0.5-μm CMOS 4.0-Gbit/s serial link transceiver with data recovery using oversampling", IEEE J. Solid-State Circuits, vol. 33, May 1998.
[5] M. Alioto, G. Palumbo, "Oscillation frequency in CML and ESCL ring oscillators", IEEE Trans. Circuits Syst. I, vol. 48, Feb. 2001.
[6] B. Razavi, "A 2-GHz 1.6-mW phase-locked loop", IEEE J. Solid-State Circuits, vol. 32, May 1997.
[7] S. Docking, M. Sachdev, "A Method to Derive an Equation for the Oscillation Frequency of a Ring Oscillator", IEEE Trans. on Circuits and Systems I: Fundamental Theory and Applications, vol. 50, February 2003.
[8] G. Jovanović, M.
Stojčev, "Current starved delay element with symmetric load", International Journal of Electronics, vol. 93, no. 3, March 2006.
[9] O.-C. Chen, R. Sheen, "A Power-Efficient Wide-Range Phase-Locked Loop", IEEE Journal of Solid-State Circuits, vol. 37, January 2002.
[10] Todd Charles Weigandt, "Low-Phase-Noise, Low-Timing-Jitter Design Techniques for Delay Cell Based VCOs and Frequency Synthesizers", PhD dissertation, University of California, Berkeley, 1998.
[11] S. Docking and M. Sachdev, "An Analytical Equation for the Oscillation Frequency of High-Frequency Ring Oscillators", IEEE Journal of Solid-State Circuits, vol. 39, no. 3, March 2004.
[12] A. Hajimiri, S. Limotyrakis, T. Lee, "Jitter and Phase Noise in Ring Oscillators", IEEE Journal of Solid-State Circuits, vol. 34, no. 6, June 1999.


Load Characteristics under Optimal Trajectory Control of Series Resonant DC/DC Converters Operating above the Resonant Frequency

Nikolay D. Bankov and Tsvetana Gr. Grigorova

Abstract - In this paper the optimal trajectory control method for resonant DC/DC converters is examined. This control method keeps the resonant tank energy fully controlled, with the tank energy, current and voltage all staying well within bounds under all circumstances, including a short circuit across the converter output. The emphasis in this paper is on obtaining the steady-state equations describing DC/DC resonant converter operation above the resonant frequency as a function of the diode trajectory radius, used as the control parameter. By solving these equations, a family of load characteristics of the inverter can be drawn. Experimental results from the investigation of the DC/DC resonant converter are shown.

Keywords - resonant DC/DC converters, optimal trajectory control.

I. INTRODUCTION

The tank processes in series resonant DC/DC converters have fast dynamics and exchange large amounts of pulsating energy with the source and with the load in each half-cycle of converter operation. The optimal trajectory control method [],[] keeps the resonant tank energy fully controlled, with the tank energy, current and voltage all staying well within bounds under all circumstances, including a short circuit across the converter output, and it has many advantages over the existing methods [3-6]. The emphasis in this paper is on obtaining the steady-state equations describing DC/DC resonant converter operation above the resonant frequency as a function of the diode trajectory radius, used as the control parameter, and the corresponding normalized load characteristics, which are useful for the design of such converters.

II. ANALYSIS AND LOAD CHARACTERISTICS

Fig.
1. Full-bridge DC/DC resonant converter

Nikolay Dimitrov Bankov is with the University of Food Technologies, 6 Maritza Blvd., 4 Plovdiv, Bulgaria; Tsvetana Grigorova Grigorova is with the Technical University of Sofia, Branch Plovdiv, 6 Sankt Petersburg Blvd., 4 Plovdiv, Bulgaria.

Fig. 1 shows the proposed resonant DC/DC converter. The analysis and design of such a converter are presented in [], [7]. The analysis is made under the following assumptions: the converter elements are ideal; the effect of the snubber capacitors and the ripples of the input and output voltages are neglected; the output capacitor is sufficiently large that the output voltage U remains constant through a switching cycle. The following common symbols are used:

- ω₀ = 1/√(LC) - angular resonant frequency; Z₀ = √(L/C) - characteristic impedance;
- ω - switching frequency;
- i'_L(0) = I'₀, u'_C(0) = U'_C0 - normalized initial values of the resonant link current and of the voltage across the series capacitor for each stage of the converter operation;
- θ_Q = ω₀t_Q - transistors' conduction angle; θ_D = ω₀t_D - diodes' conduction angle;
- U' = U/(g·U_d) - voltage ratio, where g is a topology constant (g = 0.5 for half-bridge topologies and g = 1 for full-bridge topologies).

For unifying purposes all quantities are presented in relative units: the voltages relative to the supply voltage U_d; the currents relative to the current I₀ = U_d/Z₀; the input power relative to the power P₀ = U_d²/Z₀. Under optimal trajectory control of series resonant DC/DC converters operating above the resonant frequency, the converter is analyzed with the use of the state plane. Fig. 2 shows one steady-state trajectory above the resonant frequency. In the (u_C/U_d, i_L·√(L/C)/U_d) state plane [], the trajectory described by the operating point is an arc of a circle with its center at the point of coordinates (u_LC, 0), drawn from the representative point of the initial conditions (u'_C(0), i'_L(0)).
The four centers are given by {Q1/Q3: (1 - U', 0)}, {D1/D3: (1 + U', 0)}, {Q2/Q4: (-1 + U', 0)} and {D2/D4: (-1 - U', 0)}. The optimal trajectory control above the resonant frequency utilizes the desired diode trajectory as the control law. In the first half period, the distance D (Fig. 2) of the state of the system from the center of the trajectory located at {(-1 - U'), 0} is monitored. When this distance is smaller than the radius value R_D, as set by the control system, transistors Q1/Q3 are turned on (segment M1M2). When distance D becomes equal to the control input R_D at M2, the transistors are turned off and diodes

D2/D4 are switched on (segment M2M3). At point M3 the diodes switch off, as the resonant current reverses. Then transistors Q2/Q4 are turned on (segment M3M4). Distance D is once again monitored, this time measured from the D1/D3 trajectory center {(1 + U'), 0}.

Fig. 2. SRC steady-state trajectory above the resonant frequency

From the triangle O1O2M2 (Fig. 2) the following equations for the diode trajectory radius R_D and the transistor trajectory radius R_Q are obtained:

R_D = 1 + U' + U'_Cm,    (1)

R_Q = U' + U'_Cm = R_D - 1.    (2)

For the radius R_D an expression is also evaluated as a function of U' and the conduction angles θ_Q and θ_D (3). On the basis of equations (1)-(3), the expressions for the base quantities characterizing converter operation take the form shown in Table 1. The purpose is to obtain the expressions as functions of the diode trajectory radius R_D, which is the base parameter for control under the method used.

TABLE 1. RESULTS FROM THE ANALYSIS: expressions (4)-(13) for the resonant link current, the capacitor voltages U'_C0 and U'_Cm, the conduction angles θ_VT and θ_VD, the output and input currents, and the transistor and diode average and peak currents, all as functions of U' and R_D.

As a proof of the analysis made, an optimal control law can be defined which allows transient processes with large amplitude while, at the same time, stable operation of the converter (transistors and resonant tank) is ensured. The variable D is defined (Fig. 2) as the distance between the representative point of the system and the commutating center of the reference trajectories:

D = sqrt( i'² + [u'_C + sign(i')·(1 + U')]² ).    (14)

Transistor turn-off is realized when D = R_D.
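The control law of Eq. (14) reduces to a distance test in the state plane. A minimal sketch (normalized quantities, hypothetical values) might look like:

```python
import math

# Sketch of the optimal trajectory control law (normalized quantities):
# monitor the distance D of the state-plane point (u_C', i') from the
# commutating center and turn the transistors off once D reaches the
# commanded diode-trajectory radius R_D. Values used below are hypothetical.
def distance_to_center(i_n, uc_n, U_ratio):
    """D = sqrt(i'**2 + (u_C' + sign(i')*(1 + U'))**2), Eq. (14)."""
    s = 1.0 if i_n >= 0 else -1.0
    return math.hypot(i_n, uc_n + s * (1.0 + U_ratio))

def transistors_on(i_n, uc_n, U_ratio, R_D):
    """Keep the conducting transistor pair on while D < R_D."""
    return distance_to_center(i_n, uc_n, U_ratio) < R_D
```

In a controller loop, the turn-off command would fire at the first sample where transistors_on(...) returns False.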
The transistors remain on while D < R_D and are turned off when D = R_D or when θ_Q = π. Practically, to ensure transistor soft-switching conditions (ZVS), it is necessary that θ_Q < π and I > I_min. By solving Eqs. (4)-(13), a family of inverter load characteristics can be drawn. These are the relationships of the converter main variables as functions of the voltage ratio U′ and the radius R_D. The following relations are of particular interest: the converter output current I′(U′, R_D), the input current I′_d(U′, R_D), the transistor average current I′_QAV(U′, R_D), the freewheeling diode average current I′_DAV(U′, R_D), the peak transistor current I′_Qm(U′, R_D) and the peak capacitor voltage U′_Cm(U′, R_D). Figures 3-8 show the corresponding graphs for several values of R_D in relative units.

Fig. 3. Output characteristics I′(U′) for several values of R_D
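The turn-off rule above (monitor the distance D and switch when it reaches the control input R_D) can be sketched in a few lines. A minimal illustration in normalized units, using the distance definition D = √(i² + [u_C + sign(i)(1 + U′)]²) from the analysis; the function names are illustrative, not the authors' implementation:

```python
import math

def trajectory_distance(i, u_c, u_ratio):
    """Distance D of the state point (u_C, i) from the commutation
    centre of the reference trajectory:
    D = sqrt(i^2 + (u_C + sign(i)*(1 + U'))^2)."""
    sign_i = 1.0 if i >= 0 else -1.0
    return math.sqrt(i**2 + (u_c + sign_i * (1.0 + u_ratio))**2)

def transistors_should_turn_off(i, u_c, u_ratio, r_d):
    """Optimal trajectory control law: keep the transistors on while
    D < R_D; turn them off (diodes take over) once D >= R_D."""
    return trajectory_distance(i, u_c, u_ratio) >= r_d

# State point inside the reference circle -> transistors stay on
print(transistors_should_turn_off(0.5, -0.2, 0.4, 2.0))  # prints False
```

The control input R_D thus directly sets the size of the diode trajectory, which is why the load characteristics are conveniently parameterized by R_D.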

Nikolay D. Bankov and Tsvetana Gr. Grigorova

Fig. 4. Normalized input current I′_d versus U′
Fig. 5. Normalized transistor average current I′_QAV versus U′
Fig. 6. Normalized diode average current I′_DAV versus U′
Fig. 7. Normalized transistor peak current I′_Qm versus U′
Fig. 8. Normalized peak capacitor voltage U′_Cm versus U′
Fig. 9. Normalized output current versus normalized control input R_D

The relationships I′(U′, R_D) (Fig. 3) represent the converter output characteristics. The converter can be considered a current source, stable in short-circuit mode. Its operation near the no-load running mode is limited by the transistor soft-switching conditions (ZVS) [9]. The relationships I′(R_D, U′) (Fig. 9) represent the dc control characteristics, which are useful in determining the control range for the required output variation. Another important concern is whether, by limiting the maximum value of the control input, a current-limited output can be obtained; this is important in determining the need for an additional overcurrent protection circuit. A study of the characteristics shown in Figs. 3-9 reveals that they can be used to calculate the circuit elements of the SRC; for example, the transistor average current I′_QAV can be read from Fig. 5 for selected values of U′ and R_D.

III. EXPERIMENTAL RESULTS

An experimental investigation of the resonant DC/DC converter designed according to the proposed analysis was carried out under the following conditions: power supply U=3V; switching frequency f=5kHz; output power P=3kW; and resonant link elements L=7,577µH and C=46,57nF. During the experiment the snubbers value is nF. Fig. shows the converter start-up optimal trajectory.

Fig. Start-up of the converter; x: u_C (V/div) and y: i_L (5 А/div).

IV. CONCLUSIONS

In this paper, steady-state equations for optimal trajectory control are obtained, describing DC/DC resonant converter operation above resonant frequency as a function of the diode trajectory radius, together with the corresponding normalized load characteristics. They allow evaluating the behaviour of the considered converter when the load changes strongly during operation. Experimental results for the converter are shown. Converters operated under this control technique are particularly suitable for electric arc welding devices, X-ray devices, laser power supplies, etc., where current-source characteristics are required of the supply source.

REFERENCES
[1] Oruganti R., F.C. Lee, "Resonant Power Processors: Part II - Methods of Control", Proc. IEEE-IAS'84 Ann. Meet., 1984.
[2] Popov E.I., Antchev M.Hr., "A unified analysis and characteristics of a DC-DC converter operating above or below the resonance", International Scientific Conference ICEST 2004, Proceedings, June 2004, Bitola, Macedonia.
[3] Sivakumar S., K. Natarajan, A.M. Sharaf, "Optimal Trajectory Control of Series Resonant Converter using Modified Capacitor Voltage Control Technique", PESC'91, 1991.
[4] Natarajan K., S. Sivakumar, "Optimal trajectory control of constant frequency series resonant converter", Proc. IEEE-PESC'93, 1993.
[5] Rossetto L., "A Simple Control Technique for Series Resonant Converters", IEEE Trans. on Power Electronics, vol. 11, no. 4, 1996.
[6] Kutkut N.H., C.Q. Lee, I. Batarseh, "A Generalized Program for Extracting the Control Characteristics of Resonant Converters via the State-Plane Diagram", IEEE Trans. on Power Electronics, vol. 13, no. 1, pp. 58-66, 1998.
[7] Bankov N., Tsv. Grigorova, "Investigation of a method for power control of a DC/DC transistor resonant converter", XXXIX ICEST 2004, Bitola, Macedonia, vol. 1, 2004.

Modeling of the Optimal Trajectory Control System of Resonant DC/DC Converters Operating Above Resonant Frequency

Nikolay D. Bankov and Tsvetana Gr. Grigorova

Abstract - The paper presents behavioral modeling of the control system of resonant DC/DC converters operated above resonant frequency under the optimal trajectory control technique. This method predicts the fastest response possible, with minimum energy surge in the resonant tank. In this paper the investigations are extended and a specific variant of the control system for a series DC/DC converter operated above resonant frequency is proposed. Simulation and experimental results from the investigation of the converter are shown.

Keywords - resonant DC/DC converters, behavioral modeling, optimal trajectory control.

Fig. 1. The simulated full-bridge resonant DC/DC converter

I. INTRODUCTION

Several control methods for series resonant DC/DC converters have been widely discussed and compared in recent years [1]-[7]. Due to the presence of the resonant circuit with its fast transient response, the control of resonant converters is considerably more complex than that of PWM converters [1]. The advantages of the optimal trajectory control method over the existing methods are reduced stress on the reactive and power semiconductor switching elements of the circuit and a faster response to large variations of the circuit operating conditions, without affecting the global stability of the system [1], [8]. This method predicts the fastest response possible, with minimum energy surge in the resonant tank. In this paper the investigations are extended and a specific variant of the control system for the series resonant DC/DC converter operated above resonant frequency is proposed. The control system is described using Analog Behavioral Modeling (ABM) [9].

II. BEHAVIORAL MODELING OF THE OPTIMAL TRAJECTORY CONTROL SYSTEM

The control system is described using Analog Behavioral Modeling (ABM). Fig. 1 shows the simulated resonant DC/DC converter.
The individual transistor control circuits are introduced [10]. The proposed control system (CS) is shown in Fig. 2, and Fig. 3 shows the waveforms that explain the system operation. From the instantaneous values of i, u_C, U and U_d, the control circuit computes the variable D at every instant, as given by BLOCK 1:

D = √( i² + [u_C + sign(i)·(1 + U′)]² ).   (1)

Fig. 2. Converter control system behavioral modeling

Fig. 3. Control system main waveforms (I(L), V(1), V(Ur), V(3) and V(4) versus time)

Nikolay Dimitrov Bankov is with the University of Food Technologies, 26 Maritza Blvd., 4002 Plovdiv, Bulgaria. Tsvetana Grigorova Grigorova is with the Technical University of Sofia, Branch Plovdiv, 6 Sankt Petersburg Blvd., 4 Plovdiv, Bulgaria.
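The circular arcs that this control system tracks in the state plane follow directly from the normalized resonant-tank equations. A small numerical sketch (normalized units ω0 = Z0 = 1; `simulate_arc` and the constant drive `e` are illustrative names, not part of the authors' PSpice model), showing that with a constant applied voltage the state point (u_C, i) moves on a circle centred at (e, 0):

```python
import math

def simulate_arc(u0, i0, e, dt=1e-4, steps=10000):
    """Integrate the normalized series resonant tank equations
    du_C/dt = i,  di/dt = e - u_C  (omega_0 = Z_0 = 1)
    with a semi-implicit Euler step. With constant drive e the
    state point traces a circular arc centred at (e, 0)."""
    u, i = u0, i0
    for _ in range(steps):
        i += (e - u) * dt   # update current first (keeps the arc radius stable)
        u += i * dt         # then the capacitor voltage
    return u, i

# Start on a circle of radius 1 around (e, 0): the radius is preserved.
e = 0.5
u, i = simulate_arc(e + 1.0, 0.0, e)
print(math.hypot(u - e, i))   # stays close to 1.0
```

Each switching event changes the effective drive e, moving the arc centre; the control system's job is to pick the switching instants so the state lands on the desired reference circle.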

The current-controlled voltage source (CCVS) H1 senses the current through the resonant tank. The signals for the voltages U and u_C are fed to dependent voltage sources of EVALUE type (E2 and E3). The output of E1 (EVALUE) is the logical signal whose state (+1 or −1) is determined by sign(i). The dependent voltage source E1 realizes the function

SGN(V(%IN+, %IN-))   (2)

in the EXPRESSION field of the EVALUE element. The obtained value of D (v(1) in Fig. 3) is compared with the control signal Ur. The voltage-controlled switches S6 and S7, the one-shot multivibrators and the logic elements U3, U4 and U5 ensure the soft-switching conditions θ_Q < π and I > I_min (BLOCK 2). The shaped pulses are fed to the flip-flop trigger U1. The pulse distributor U2 forms two channels of control pulses, dephased by 180°. The voltage-controlled voltage sources (VCVS) E4-E7 provide the required power, amplitudes and galvanic separation of the control signals.

As noted in [1], [9], the maximum rate at which the tank energy can change in half a cycle is limited. Thus the system takes more than a half-cycle to reach the target trajectory. Fig. 4 shows the PSpice simulation results of the system response to large changes in the control input. In Fig. 4a, when the control input decreases, the tank energy is reduced by a series of successive diode conductions.

Fig. 5-a. Transient at converter start-up, R_D = 2,5 (state plane I(L) vs. V(Uc))
Fig. 5-b. Transient under load short circuit, R_D = 2,5 (state plane I(L) vs. V(Uc))

Fig. 5a shows the PSpice simulation results of the system response at converter start-up. The performance of the method under short circuit is also remarkable, as shown in Fig. 5b.
Within a short time, the system abruptly reaches another equilibrium trajectory at an energy level only slightly greater than the earlier one. Thus optimal trajectory control fully exploits the potential of a converter to respond quickly to the demands of control and load.

Fig. 4-a. Response of optimal trajectory control for a control decrease, R_D = 4,3 → R_D = 2,5 (state plane I(L) vs. V(Uc))
Fig. 4-b. Response of optimal trajectory control for a control increase, R_D = 2,5 → R_D = 4,3 (state plane I(L) vs. V(Uc))

Likewise, in Fig. 4b, when the control input increases, the tank energy is built up by a series of successive transistor conductions. Thus, by utilizing the desired diode trajectory itself as the control law, the new steady state can be reached. In both cases, the system reaches the new equilibrium trajectory in the minimum possible time, limited only by the intrinsic properties of the resonant converter.

III. EXPERIMENTAL RESULTS

The computer simulation and experimental results for the resonant DC/DC converter are given under the following conditions: power supply U=3V; switching frequency f=5kHz; output power P=3kW; and resonant link elements L=7,577µH and C=46,57nF. During the experiment the snubbers value is nF. Fig. 6 shows the response of optimal trajectory control in the case of a control increase.

Fig. 6-a. Response of optimal trajectory control for a control increase. Resonant current i_L (5 А/div) and capacitor voltage u_C (V/div)

Fig. 6-b. Response of optimal trajectory control for a control increase. State plane: x: u_C (V/div) and y: i_L (5 А/div).

A very good agreement between the simulation (Fig. 4b) and the experimental results (Fig. 6) can be seen.

IV. CONCLUSIONS

The paper presents PSpice simulation results of the system response to large changes in the control input, covering different cases: when the control input decreases, the tank energy is reduced by a series of successive diode conductions; likewise, when the control input increases, the tank energy is built up by a series of successive transistor conductions. Thus, by utilizing the desired diode trajectory itself as the control law, the new steady state can be reached. In both cases the system reaches the new equilibrium trajectory in the minimum possible time, limited only by the intrinsic properties of the resonant converter. PSpice simulation results of the system response at converter start-up are shown as well. The performance of the method under short circuit is also remarkable: within a short time, the system abruptly reaches another equilibrium trajectory at an energy level only slightly greater than the earlier one. Thus optimal trajectory control fully exploits the potential of a converter to respond quickly to the demands of control and load. A very good agreement between simulation and experimental results can be seen.

REFERENCES
[1] Oruganti R., F.C. Lee, "Resonant Power Processors: Part II - Methods of Control", Proc. IEEE-IAS'84 Ann. Meet., 1984.
[2] Cheron Y., La commutation douce dans la conversion statique de l'energie electrique, Technique et Documentation - Lavoisier, 1989.
[3] Sivakumar S., K. Natarajan, A.M. Sharaf, "Optimal Trajectory Control of Series Resonant Converter using Modified Capacitor Voltage Control Technique", Proc. IEEE-PESC'91 Ann. Meet., Cambridge-Boston, 1991.
[4] Natarajan K., S. Sivakumar, "Optimal trajectory control of constant frequency series resonant converter", PESC'93, 1993.
[5] Rossetto L., "A Simple Control Technique for Series Resonant Converters", IEEE Trans. on Power Electronics, vol. 11, no. 4, 1996.
[6] Kutkut N.H., C.Q. Lee, I. Batarseh, "A Generalized Program for Extracting the Control Characteristics of Resonant Converters via the State-Plane Diagram", IEEE Trans. on Power Electronics, vol. 13, no. 1, pp. 58-66, 1998.
[7] Sendanyoye V., K. Al Haddad, V. Rajagopalan, "Optimal Trajectory Control Strategy for Improved Dynamic Response of Series Resonant Converter", Proc. IEEE-IAS'90 Ann. Meet., Seattle, WA, 1990.
[8] Al-Haddad K., Y. Cheron, H. Foch, V. Rajagopalan, "Static and Dynamic Analysis of a Series Resonant Converter Operating above its Resonant Frequency", SATECH'86 Proceedings, Boston, 1986.
[9] OrCAD PSpice A/D User's Guide, OrCAD Inc., USA, 1999.
[10] Bankov N., Tsv. Grigorova, "Modeling of a control system of a transistor resonant inverter", XXXIX ICEST 2004, Bitola, Macedonia, vol. 1, 2004.


Comparison of Temperature Dependent Noise Models of Microwave FETs

Zlatica D. Marinković, Vera V. Marković and Olivera R. Pronić

Abstract - In this paper, a comparison of the hybrid empirical-neural noise models of microwave FET transistors earlier proposed by the authors is presented. The models are compared from various aspects, such as model accuracy, model complexity, the amount of measured data needed for model development, etc. Moreover, the models are contrasted with models based on neural networks only.

Keywords - Neural networks, microwave FET, noise.

I. INTRODUCTION

During the last decade, neural networks have found many applications in modelling in the microwave area [1]-[7]. Since they have the ability to learn from the presented data, they are especially interesting for non-linear problems and for problems that are not fully described mathematically. Considered as a fitting tool, they fit non-linear dependencies better than polynomials. Once trained, neural networks are able to predict outputs not only for the input values presented during the training process (memorizing capability) but also for other input values (generalization capability). Neural networks have been applied in the modelling of both active devices and passive components, in microwave circuit analysis and design, etc. They have been applied in microwave FET transistor signal and noise performance modelling as well [3]-[7]. Accurate and reliable noise models of microwave transistors are required for the analysis and design of microwave active circuits that are parts of modern communication systems, where it is very important to keep the noise at a low level. Transistor signal and noise performances depend on temperature, but most of the existing transistor signal, and especially noise, models refer to a single temperature (usually the nominal temperature). Therefore, for further analyses involving various temperature conditions, it is necessary to develop models for each operating temperature point.
Model development is basically an optimisation process, usually a time-consuming one. Furthermore, measured signal and noise data are necessary for model development at each new operating point. Since these measurements require complex equipment and procedures, acquiring the measured data can take much effort and time. Applying neural networks to noise modeling can make the modeling procedures more efficient and more accurate. The authors of this paper have proposed several temperature-dependent noise models of microwave transistors based on neural networks. On the one hand, there are hybrid empirical models, where neural networks are used to include the temperature dependence in an existing empirical device noise model [5]-[7]. On the other hand, there are black-box models based on neural networks only [5]. In this paper a detailed comparison of the proposed hybrid empirical-neural noise models is carried out. The models are compared from various aspects: model accuracy, model complexity, the amount of measured data needed for model development, etc. Furthermore, the models are contrasted with the models based on neural networks only. Additionally, several recommendations regarding the applicability of the models and their development are given. The paper is organized as follows: after the Introduction, neural networks are briefly described in Section II. A brief review of the proposed neural models is given in Section III. A modelling example is presented in Section IV. Finally, the main conclusions are reported in Section V.

II. MLP NEURAL NETWORK

A multilayer perceptron (MLP) neural network, such as has been used in this work, consists of neurons grouped into layers: one input layer, several hidden layers and one output layer [1]. The network inputs are the inputs of the first-layer neurons. Each neuron in a layer is connected with all of the neurons of the next layer, but there are no connections between neurons of the same layer.
The network outputs are the outputs of the output-layer neurons. Each neuron is characterized by an activation function and its bias, and each connection between two neurons by a weight factor. The neurons of the input and output layers have linear activation functions, and the hidden neurons have sigmoid activation functions. The neural network learns the relationships among sets of input-output data (the training set) by adjusting the network parameters (connection weights and biases of the activation functions) using optimisation procedures such as the backpropagation algorithm or its modification, the Levenberg-Marquardt algorithm [1]. Once a neural network is trained, its structure remains unchanged, and it is capable of predicting outputs for all inputs, whether they have been used for the training or not. For all networks trained for the purposes of this work, the number of hidden neurons was determined during the network training process. For each network structure, neural networks with different numbers of hidden neurons were trained and the modelling ability of each network was tested. The network with the best modelling results was chosen as the model of the considered structure.

The authors are with the Faculty of Electronic Engineering, Aleksandra Medvedeva 14, 18000 Niš, Serbia.
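The MLP structure described above can be illustrated with a tiny trainable network. A minimal sketch with one sigmoid hidden layer and linear outputs; plain gradient descent stands in for the Levenberg-Marquardt algorithm mentioned above, and the training data are synthetic, not the measured device data:

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_forward(x, w1, b1, w2, b2):
    """One-hidden-layer MLP: sigmoid hidden units, linear output."""
    h = 1.0 / (1.0 + np.exp(-(x @ w1 + b1)))   # sigmoid activation
    return h @ w2 + b2, h

# Toy training set: 1 input ("temperature"), 2 outputs (two parameters).
x = np.linspace(-1.0, 1.0, 20).reshape(-1, 1)
y = np.hstack([np.tanh(2 * x), x**2])

# 5 hidden neurons; backpropagation with plain gradient descent.
w1 = rng.normal(0, 1, (1, 5)); b1 = np.zeros(5)
w2 = rng.normal(0, 1, (5, 2)); b2 = np.zeros(2)
lr = 0.5
for _ in range(5000):
    out, h = mlp_forward(x, w1, b1, w2, b2)
    err = out - y                          # dLoss/dout for 0.5*MSE
    gw2 = h.T @ err / len(x); gb2 = err.mean(0)
    dh = err @ w2.T * h * (1 - h)          # backprop through sigmoid
    gw1 = x.T @ dh / len(x); gb1 = dh.mean(0)
    w1 -= lr * gw1; b1 -= lr * gb1
    w2 -= lr * gw2; b2 -= lr * gb2

mse = float(((mlp_forward(x, w1, b1, w2, b2)[0] - y) ** 2).mean())
print(mse)   # small after training
```

Once trained, the weights are frozen and the network interpolates for inputs not seen during training, which is the generalization capability exploited by the models below.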

III. TRANSISTOR NOISE MODELS BASED ON NEURAL NETWORKS

A microwave transistor, as a noisy two-port device, can be characterized by a noise figure F, which is a measure of the degradation of the signal-to-noise ratio between the input and output of the device. Noise characteristics of the device are usually treated in terms of four noise parameters: the minimum noise figure F_min, the equivalent noise resistance R_n, and the magnitude and angle of the optimum reflection coefficient, Mag(Γ_opt) and Ang(Γ_opt), corresponding to the generator impedance resulting in the minimum noise figure. The proposed microwave transistor noise models based on neural networks are described below.

The first proposed model, a basic hybrid empirical-neural model, framed with a dotted line in Fig. 1, consists of an existing empirical device noise model based on an equivalent circuit representation and a neural network (NNet1) trained to model the temperature dependence of the equivalent circuit elements and parameters (ECP) [5]. This network has one input neuron, corresponding to the ambient temperature. The number of neurons in the output layer corresponds to the number of transistor ECP (N in Fig. 1). The number of hidden neurons is optimised during the training.

A drawback of the basic hybrid empirical-neural model is that its accuracy cannot be greater than the accuracy of the empirical model itself. Therefore, an alternative solution improving the accuracy of the hybrid model has been proposed in [7]. This improvement of the basic hybrid model is based on adding an additional neural network (NNet2) aimed at correcting the values of the noise parameters obtained by the basic hybrid model (Fig. 1). The inputs of NNet2 are the temperature and frequency and the corresponding values obtained by the basic hybrid model.
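The two hybrid configurations reduce to a simple data flow: temperature goes into a network that produces equivalent-circuit parameters, the empirical model turns those into noise parameters, and an optional second network corrects them. A schematic sketch with stand-in callables (the toy numbers and names such as `nnet1` and `empirical` are illustrative only; the real mappings are those of [5] and [7]):

```python
def basic_hybrid_model(temp, freq, nnet1, empirical_model):
    """Basic hybrid model: NNet1 maps ambient temperature to the
    equivalent-circuit parameters (ECP); the empirical noise model
    then yields the noise parameters at the given frequency."""
    ecp = nnet1(temp)
    return empirical_model(ecp, freq)

def improved_hybrid_model(temp, freq, nnet1, empirical_model, nnet2):
    """Improved hybrid model: NNet2 corrects the basic model's
    noise parameters, taking (temp, freq, approximate values)."""
    approx = basic_hybrid_model(temp, freq, nnet1, empirical_model)
    return nnet2(temp, freq, approx)

# Stand-in callables so the pipeline runs end to end.
nnet1 = lambda t: {"Td": 300.0 + 2.0 * t}                # toy ECP vs temperature
empirical = lambda ecp, f: [0.5 + 1e-3 * ecp["Td"] * f]  # toy Fmin only
nnet2 = lambda t, f, p: [v * 0.98 for v in p]            # small correction

print(improved_hybrid_model(25.0, 10.0, nnet1, empirical, nnet2))
```

Because NNet2 is trained against measured noise parameters, the correction stage is what lifts the accuracy ceiling imposed by the empirical model alone.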
The training process of NNet2 requires the basic hybrid model to be implemented first, in order to obtain approximate values of the noise parameters for all combinations of temperature and frequency used for the training. Since the measured values of the noise parameters are used as target output values for NNet2, accuracy greater than the accuracy of the basic hybrid model can be achieved.

Fig. 1. Hybrid empirical-neural noise models

The model development starts from transistor signal and noise data measured at several temperature points. Using these measured values, the ECP are extracted for each temperature. Then the network training is done, and the trained network is attached to the previously implemented empirical device model within a standard microwave simulator. The new model can be used as a user-defined library element with the ambient temperature as input, enabling determination of the noise parameters at any operating temperature, without measured noise parameter values at that temperature and without additional optimizations.

On the other hand, there is the black-box model (Fig. 2) [5], consisting of a single neural network with two inputs, corresponding to the temperature and frequency, and four output neurons, corresponding to the four noise parameters. The network is trained using the measured values of the noise parameters.

Fig. 2. Black-box neural noise model

IV. MODELING EXAMPLE

The models described above were applied to an HEMT device (type NE83A) in packaged form, in the temperature range (−40 to 60)°C. The measurements of the device noise parameters were performed by a research group at the University of Messina, using an automated measurement system [8], [9]. Pospieszalski's model [10] is used for the transistor noise representation (Fig. 3). The intrinsic small-signal equivalent circuit, framed with a broken line, includes two noise sources. The extrinsic circuit elements
Fig. 3. Pospieszalski's transistor noise model: intrinsic circuit (Cgs, Rgs, Cds, Rds, controlled source g_m·V·e^{-jωτ}) with noise sources e_gs at temperature Tg and i_ds at temperature Td, and extrinsic elements Lg, Rg, Rd, Ld, Rs, Ls, Cgsp, Cdsp, Cgdp and Cgd.

represent package effects and parasitic effects. The voltage noise source e_gs and the current noise source i_ds represent the noise generated inside the device. The equivalent temperatures Tg and Td are assigned to the voltage source e_gs and the current source i_ds, respectively. The equivalent temperatures are empirical model parameters and are extracted from the measured device noise data through an optimisation process. The noise parameters related to the intrinsic circuit can be expressed as functions of the equivalent circuit elements, the two equivalent temperatures and the frequency [10]. Once the four noise parameters of the intrinsic circuit are determined, the other model elements have to be added to the circuit in order to determine the noise parameters of the whole packaged device. The noise temperature of all resistances in the extrinsic circuit is assumed to be equal to the ambient temperature. Therefore, the number of ECP to be modelled is 10: 9 small-signal model elements and the equivalent drain noise temperature Td. The equivalent gate noise temperature Tg is assumed to be equal to the ambient temperature.

Firstly, NNet1 was trained on the extracted values of the ECP for the mentioned temperatures. The network with 5 hidden neurons was chosen as the best [5]. Then the model was implemented in the ADS simulator [11]. The noise parameter values obtained by this new model at the temperatures −40°C, 0°C, 20°C and 60°C, together with the corresponding measured noise parameters, were used for the NNet2 training. The best-performing NNet2 was chosen in the same way [7]. As an illustration, Fig. 4 plots the magnitude of the optimum reflection coefficient in the frequency range (6-18) GHz. It is obvious that the values obtained by the improved hybrid model (solid line) are much closer to the reference (measured) values (squares) than the ones obtained by the basic hybrid model (dotted line).
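Comparisons of model values against measured reference values, as in Fig. 4, can be quantified by the deviation from the measured data (the y = x diagonal of a scattering plot). A minimal sketch with made-up numbers (not the paper's measured Γ_opt data):

```python
import math

def rmse_from_diagonal(predicted, measured):
    """Scatter-plot deviation from the ideal y = x line: RMS error
    between model values and the reference (measured) values."""
    n = len(predicted)
    return math.sqrt(sum((p - m) ** 2 for p, m in zip(predicted, measured)) / n)

# Hypothetical magnitudes of the optimum reflection coefficient:
measured = [0.70, 0.72, 0.75, 0.78]
basic    = [0.66, 0.74, 0.71, 0.82]   # more scatter around y = x
improved = [0.70, 0.71, 0.76, 0.77]   # tighter around y = x
print(rmse_from_diagonal(basic, measured) > rmse_from_diagonal(improved, measured))
```

A smaller RMS deviation corresponds to points lying closer to the diagonal in the scattering plots discussed next.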
The modelling accuracy improvement is achieved not only at the training temperatures but also at the temperatures −20°C and 40°C, which were not used for the network training. The effects of the improvement are most obvious at the boundaries of the temperature range. The black-box neural noise model was developed using all of the available data, since training the networks with a reduced set of the measured data did not give satisfactory modelling accuracy. The best black-box network obtained has two hidden layers. Compared to the basic hybrid model, this model provides better modelling accuracy, as confirmed by the scattering plots given in Figs. 5 and 7, which show the values of the magnitude of the optimum reflection coefficient versus the corresponding measured values for the two models. There is less scattering around the ideal diagonal line y = x in the case of black-box neural modelling. On the other hand, the modelling accuracy of the black-box neural model is similar to the accuracy of the improved hybrid model, which can be observed by contrasting the corresponding scattering plots given in Figs. 6 and 7. As a further confirmation, Fig. 8 shows the values of the magnitude of the optimum reflection coefficient obtained by the black-box model (circles) matching very well the corresponding values obtained by the improved hybrid model (crosses).

Fig. 4. Magnitude of optimum reflection coefficient

V. CONCLUSION

All of the proposed models, the hybrid empirical-neural ones and the black-box one, provide efficient noise modelling of microwave FET transistors. Contrary to most existing empirical models, where model development must be repeated for each operating temperature, once a neural model is developed it is valid over the whole operating frequency and temperature range.
The advantage of the basic hybrid empirical-neural model is that the temperature dependence is included in the noise model, but there is no improvement in modelling accuracy. The modelling accuracy can be enhanced by the improved hybrid empirical-neural model, which has an additional neural network aimed at correcting the values of the noise parameters. Since the correction network is trained using the measured noise data, the achieved modelling accuracy can be equal to the accuracy of the measured data. The same holds for the accuracy of the black-box neural model. In both cases, all mechanisms of noise generation are included in the model. The improved hybrid model has additional knowledge about the noise parameters at its inputs. Therefore, it requires less training data than the black-box model and is suitable when there are not enough training data. On the other hand, regarding the time needed for model development and the number of necessary optimizations, the black-box model is the most efficient, since only one neural network needs to be trained and there are no optimizations in a circuit simulator. Hence, it is the best solution if there are enough measured data for the model development. All of the proposed models can be easily implemented in standard microwave simulators.

Fig. 5. Magnitude of optimum reflection coefficient - scattering plot: basic hybrid model vs. reference data
Fig. 6. Magnitude of optimum reflection coefficient - scattering plot: improved hybrid model vs. reference data
Fig. 7. Magnitude of optimum reflection coefficient - scattering plot: black-box neural model vs. reference data
Fig. 8. Magnitude of optimum reflection coefficient - improved hybrid model contrasted with black-box model

REFERENCES
[1] Q.J. Zhang, K.C. Gupta, Neural Networks for RF and Microwave Design, Artech House.
[2] K.C. Gupta, "EM-ANN models for microwave and millimeter-wave components", IEEE MTT-S Int. Microwave Symp. Workshop, Denver, CO, June 1997.
[3] F. Gunes, H. Torpi, F. Gurgen, "Multidimensional signal-noise neural network model", IEE Proceedings on Circuits, Devices and Systems, vol. 145, Apr. 1998.
[4] P.M. Watson, M. Weatherspoon, L. Dunleavy, G.L. Creech, "Accurate and efficient small-signal modeling of active devices using artificial neural networks", Proceedings of the Gallium Arsenide Integrated Circuit Symposium, Technical Digest, November 1998.
[5] Z. Marinković, V. Marković, "Temperature Dependent Models of Low-Noise Microwave Transistors Based on Neural Networks", International Journal of RF and Microwave Computer-Aided Engineering, vol. 15, no. 6, 2005.
[6] Z. Marinković, O. Pronić, J. Ranđelović, V. Marković, "An Automated Procedure for MESFETs/HEMTs Noise Modeling Against Temperature", ICEST 2005, June 2005, Niš, Serbia and Montenegro.
[7] Z. Marinković, V. Marković, "Accurate Temperature Dependent Noise Models of Microwave Transistors Based on Neural Networks", Proceedings of European Microwave Week 2005 - 13th GAAS Symposium, October 3-7, 2005, Paris, France.
[8] NE83A_temp.xls file, internal communication with Prof. A. Caddemi, University of Messina, Italy.
[9] A. Caddemi, A. Di Paola, M. Sannino, "Determination of HEMT's Noise Parameters vs. Temperature using Two Measurement Methods", IEEE Trans. on Instrumentation and Measurement, vol. IM-47, 1998.
[10] M.W. Pospieszalski, "Modeling of noise parameters of MESFET's and MODFET's and their frequency and temperature dependence", IEEE Trans. Microwave Theory Tech., vol. 37, 1989.
[11] Advanced Design Systems, Agilent EEsof EDA.

Power Losses and Applications of Nanocrystalline Magnetic Materials

Vencislav C. Valchev 1, Georgi T. Nikolov 2, Alex Van den Bossche 3 and Dimitre D. Yudov 4

Abstract - In this paper the magnetic properties, power losses and applications of nanocrystalline magnetic materials are presented. Owing to their favourable combination of properties, nanocrystalline materials are very promising for power electronics. The loss comparison shows 2-3 times lower losses per unit weight for nanocrystalline materials compared to ferrites, under both the sine and square voltage measurements carried out.

Keywords - Nanocrystalline magnetic materials, losses

I. INTRODUCTION

Nanocrystalline alloys were first developed to obtain very high permeability. The outcome of the nanocrystalline manufacturing processes offers an alternative to other materials in power electronics applications. In present-day power electronics, nanocrystalline materials compete with power ferrites and amorphous materials in high-frequency devices.

The purpose of this paper is to present a comparison of the main parameters and the application advantages of nanocrystalline soft magnetic materials and ferrites for power electronics components.

II. NANOCRYSTALLINE ALLOYS PROPERTIES

The nanocrystalline alloys (FeSiBCuNb) are closely related to the amorphous soft magnetic materials. The precursor amorphous FeSiB alloy, containing small additions of Nb and Cu, is produced by very rapid solidification into ribbons of um-range thickness (Finemet, HITACHI; Vitroperm, VACUUMSCHMELZE; NanoPhY, IMPHY). The material is annealed at medium temperature (5-55 C) to induce optimum crystallization and to develop the remarkable and unexpected magnetic properties of the nanocrystalline structure, discovered at the end of the 1980s.

Due to their unique combination of favourable magnetic properties, nanocrystalline cores are now well established in a wide field of applications. The major areas are: switched-mode power supplies, digital telecommunications with emphasis on ISDN systems, electrical installation techniques at 50/60 Hz and, very recently, applications in automotive electronics. Additionally, particle accelerators should be mentioned, where cores with masses up to 5 kg or even more are needed for converters or resonators [1], [2]. A diagram comparing typical properties of some soft magnetic materials is shown in Fig. 1 [2].

Fig. 1. Typical initial permeabilities and saturation inductions for soft magnetic materials

One of the great advantages that nanocrystalline magnetic materials offer is the ability to control their B-H curve by applying a magnetic field during annealing. In Fig. 2 three typical curves for FINEMET (Hitachi) cores are shown:
1) H type: a magnetic field is applied in the circumferential direction of the core plane during annealing.
2) M type: no magnetic field is applied during annealing.
3) L type: a magnetic field is applied vertically.

Manufacturer data sheets give loss data under sine-wave excitation. A comparison of the sine-wave losses of typical magnetic materials is given in Fig. 3 [2, 3]. It must be emphasised that a comparison of the magnetic properties alone cannot establish an indisputable advantage of nanocrystalline over soft-ferrite materials [4]. Therefore, the integration capabilities of nanocrystalline materials must be analysed, including power electronics specifications.

1 Vencislav C. Valchev - Technical University of Varna, Studentska Str., Varna, Bulgaria
2 Georgi T. Nikolov - Technical University of Varna, Studentska Str., Varna, Bulgaria
3 Prof. Alex Van den Bossche - EELAB, Ghent University (UGent), Sint-Pietersnieuwstraat, Gent, Belgium
4 Dimitre D. Yudov - Bourgas Free University, Bourgas, Bulgaria

Fig. 2. Typical curves for FINEMET [5].
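Datasheet sine-wave loss figures of the kind referenced above are commonly interpolated with the Steinmetz equation. The sketch below illustrates such a loss comparison; the coefficient sets and the operating point are illustrative assumptions, not Vitroperm or ferrite datasheet values.

```python
# Hedged sketch: comparing sine-wave specific core losses of two materials
# with the Steinmetz equation P_v = k * f^alpha * B^beta.
# All coefficients below are ILLUSTRATIVE ASSUMPTIONS (f in kHz, B in T),
# not values taken from any manufacturer datasheet.

def steinmetz_loss(k, alpha, beta, f_khz, b_peak_t):
    """Specific core loss for sine excitation, Steinmetz interpolation."""
    return k * (f_khz ** alpha) * (b_peak_t ** beta)

nano = dict(k=0.8, alpha=1.6, beta=2.0)     # assumed nanocrystalline fit
ferrite = dict(k=3.0, alpha=1.6, beta=2.5)  # assumed power-ferrite fit

f, b = 100.0, 0.2  # 100 kHz, 0.2 T peak
p_nano = steinmetz_loss(nano["k"], nano["alpha"], nano["beta"], f, b)
p_ferr = steinmetz_loss(ferrite["k"], ferrite["alpha"], ferrite["beta"], f, b)
print(p_ferr / p_nano)  # loss ratio ferrite/nanocrystalline (> 1 here)
```

With real fitted coefficients the same two-line comparison reproduces the kind of material ranking shown in Fig. 3.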

Fig. 3. Specific core losses of typical materials for power electronics under sine wave

A comparison of the relative permeability mu_r of typical materials for power magnetic components is shown in Fig. 4 [5].

Fig. 4. Comparison of relative permeability mu_r of typical materials for power magnetic components.

III. POWER LOSSES IN NANOCRYSTALLINE MATERIALS FOR TYPICAL POWER ELECTRONICS WAVEFORMS

In power electronics, sine waves are rarely used. Most frequently the voltage resembles a square wave or a pulse wave with variable duty ratio. Thus, to carry out a comparison relevant to power electronics applications, we measured the losses in nanocrystalline and a few ferrite materials.

A wide-frequency loss model of nanocrystalline magnetic sheets, including the hysteresis effect, is based on the theory of one-dimensional homogeneous transmission lines in the frequency domain [6]. The expressions for the complex impedances per unit length z_s and z_p result from the transmission line equations: z_p = 1/(sigma + j*omega*epsilon) ~ 1/sigma, with sigma the conductivity. The effect of the permittivity epsilon is neglected, so that z_p is real. The expression z_s = j*omega*mu is combined with an impedance function in order to describe the hysteresis and the excess losses [7], represented by a constant loss angle delta_h (in radians):

z_s(j*omega) = j*omega*mu_h(j*omega) = mu_hr * (j*omega)^(1 - 2*delta_h/pi)    (1)

Here omega is the angular frequency and mu_hr a reference permeability. Thus, z_s represents a self-inductance per unit length with a loss angle. Eq. (1) describes the magnetic behaviour of the material over the whole frequency range by only two parameters, delta_h and mu_hr.

The measured material is VITROPERM 500F. The core shapes are all toroidal. Three different core sizes are measured. The first coil (W435-4) is wound with four parallel windings of 6 turns of Litz wire. The secondary winding (0.5 mm, double insulated) is used to measure the voltage and the flux. The number of turns is N = 6. The second and third coils (W56- and W433-) are wound with three parallel windings of 5 turns of Litz wire.

We measured the losses at variable duty ratio D (from 5 % to 50 % with a 5 % step), using a high-frequency test platform [8]. To be able to compare the data correctly, we used the specific volume losses. Experimental waveforms for a duty ratio of 40 % are shown in Fig. 5.

Fig. 5. VITROPERM 500F core: full bridge, square wave, 40 % duty ratio

Fig. 6. Loss comparison for materials 3F3 and Vitroperm 500F, under square voltage, for variable duty ratio from 5 % to 50 % with a 5 % step, f = kHz, B_peak = .T.
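The constant-loss-angle behaviour of Eq. (1) can be checked numerically. In the sketch below, the exponent is an assumption chosen so that the impedance phase is (pi/2 - delta_h) at every frequency, since the exact exponent in the source is garbled; the values of mu_hr and delta_h are arbitrary illustrative choices.

```python
import cmath
import math

# Sketch of a constant-loss-angle series impedance consistent with Eq. (1):
#   z_s(jw) = mu_hr * (jw)**(1 - 2*delta_h/pi)
# The exponent is an ASSUMPTION (the source equation is garbled), picked so
# that arg(z_s) = pi/2 - delta_h independently of frequency.

def z_s(omega, mu_hr, delta_h):
    """Series impedance per unit length with constant loss angle delta_h."""
    return mu_hr * (1j * omega) ** (1.0 - 2.0 * delta_h / math.pi)

delta_h = 0.05  # assumed loss angle, rad
for omega in (1e3, 1e5, 1e7):
    loss_angle = math.pi / 2 - cmath.phase(z_s(omega, 1.0, delta_h))
    print(round(loss_angle, 6))  # -> 0.05 at every frequency
```

The loop confirms the claim in the text: the whole frequency range is described by just the two parameters delta_h and mu_hr.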

A comparison of the losses for materials 3F3 and Vitroperm 500F under square voltage, for variable duty ratio from 5 % to 50 % with a 5 % step, is shown in Fig. 6 for f = kHz, B_peak = .T. As shown in Fig. 6, nanocrystalline materials exhibit significantly lower losses than the ferrites under typical power electronics waveforms.

IV. DESIGN SPECIFICS OF NANOCRYSTALLINE MATERIALS

Core shapes: with regard to core shapes, E-cores (for larger power, even combinations of U- and I-cores) as well as toroidal cores are used.

Operating frequency range: a wide frequency range can be used, even at high induction swing. Besides the low core losses, the main inductance of a VITROPERM transformer depends on frequency only to a very small extent, and the leakage inductances are small due to the toroidal geometry and the low number of turns possible. This results in good magnetic coupling, and excess voltage peaks on the switching transistors can be kept small. Moreover, the external leakage field is low.

Application temperature: the application temperatures of VITROPERM 500F typically range from -40 C upwards, with a high maximum operating temperature. This provides a further volume advantage and is made possible by the large thermal stability of the materials and their properties. The core losses of most nanocrystalline materials decrease by up to 3-5 % at elevated temperature, compared to room temperature.

Fig. 7. Features of nanocrystalline materials

V. APPLICATIONS OF NANOCRYSTALLINE MATERIALS

Typical applications of nanocrystalline materials are:
1) telecommunications (telephone exchange power supplies, base stations);
2) railway technology and mechanical handling equipment (battery charging devices);
3) welding technology (switched-mode converters).
In the future, these will increasingly be extended to:
4) electric vehicles (battery charging devices, motor inverters);
5) solar technology (inverters);
6) induction heating.

The features of the nanocrystalline materials and the corresponding applications are summarized in Fig. 7. Typical applications of nanocrystalline materials in electronics and power electronics are shown in Fig. 8. A useful comparison of the properties and applications of the most widely used soft magnetic materials in electronics is presented in Fig. 9.

Fig. 8. Typical applications of nanocrystalline magnetic materials in Electronics and Power Electronics.

Fig. 9. Comparison of the features and applications of soft magnetic materials in Electronics and Power Electronics.

VI. CONCLUSION

In this paper the magnetic properties, applications and global operating parameters of nanocrystalline magnetic materials are presented. The nanocrystalline materials combine the high permeability of amorphous materials with the low losses of ferrite materials, which makes them very promising for power electronics. A further advantage of the nanocrystalline materials is the possibility to control the B-H loop by applying a magnetic field during annealing. The loss comparison shows 2-3 times lower losses for nanocrystalline materials compared to ferrites, under both the sine and square voltage measurements carried out. A wide-frequency loss model of nanocrystalline magnetic sheets including the hysteresis effect is also discussed in the paper.

ACKNOWLEDGEMENT

The paper was developed in the frame of the NATO Research Program, Project RIG.

REFERENCES
[1] J. Petzold, "Advantages of softmagnetic nanocrystalline materials for modern electronic applications", Journal of Magnetism and Magnetic Materials, 2002.
[2] Vacuumschmelze GmbH & Co. KG, "Nanocrystalline VITROPERM - EMC Components", 2004.
[3] G. Herzer, "Nanocrystalline soft magnetic alloys", in: K.H.L. Buschow (Ed.), Handbook of Magnetic Materials, Elsevier, Amsterdam, 1997.
[4] H. Chazal, J. Roudet, T. Chevalier, T. Waeckerle, H. Fraisse, "Comparative Study of Nanocrystalline and Soft-Ferrite Transformers Using an Optimization Procedure", EPE 2003, Toulouse.
[5] Hitachi Metals, Ltd., "Nanocrystalline soft magnetic material FINEMET", 2005.
[6] G. Bertotti, Hysteresis in Magnetism, Academic Press, New York, 1998.
[7] A. Van den Bossche and V. C. Valchev, Inductors and Transformers for Power Electronics, CRC Press, Boca Raton, FL, USA, 2005.
[8] A. Van den Bossche, T. A. Filchev, V. C. Valchev, D. D. Yudov, "Test Platform for Resonant Converters", European Power Electronics and Applications Conference, EPE 2003, Toulouse, France, September 2003, CD-ROM.

Multi-level Electronic Transformer

Dimitre D. Yudov 1, Atanas Iv. Dimitrov 2, Vencislav C. Valchev 3 and Dimitar M. Kovatchev 3

Abstract - Serial connection of the rectifier inputs and parallel connection of the outputs of the DC electronic transformers (DCET) are presented, providing reduced component voltages as well as an increased battery-charging current. Based on the investigations and experiments carried out, useful relations are derived for dimensioning the components of the electronic transformer.

Keywords - AC/DC converter, DC/AC, DCET, battery charging

I. INTRODUCTION

The reduction of the maximum static and dynamic voltage and current stresses of the converter components is achieved by using multi-level converters. In [1] a three-level converter operating on a common load and supplied by a common source is presented. The high operating voltages require high-voltage switching components and passive components.

The aim of this paper is to introduce, analyse and dimension an n-level converter that achieves a considerable reduction of the voltages across the included converters. The converters are connected in series at their inputs and in parallel at their outputs (all the converters deliver energy to one and the same load).

II. BLOCK DIAGRAM

The block diagram of an n-level converter is shown in Fig. 1. It includes n similar AC/DC converters connected in series and n DC electronic transformers (DCET). The DCET operate on a common load. The number of levels is chosen according to the values of the supply voltage and the output voltage, so as to obtain an optimal transformation ratio [2].

Fig. 1. Block diagram of a multi-level electronic transformer.

The DCET can be realized with either one-switch or two-switch converters, depending on the output power. The power stage of a single-phase three-level electronic transformer suitable for battery charging is shown in Fig. 2. A forward converter scheme is used to realize the levels of the DCET.
The new approach here is that the accumulated energy is transferred to the load by the windings w_rn. This is possible because the load is almost purely capacitive [3, 5, 6]. The power switches are controlled by a PWM controller, which is galvanically isolated from the power stage [4].

1 Dimitre D. Yudov is with the Centre of Informatics and Technical Sciences, Burgas Free University, Burgas, Bulgaria
2 Atanas Iv. Dimitrov is with the Centre of Informatics and Technical Sciences, Burgas Free University, Burgas, Bulgaria
3 Vencislav Valchev and Dimitar M. Kovatchev are with the Faculty of Electronics, Technical University of Varna, Varna, Bulgaria

Fig. 2. Principle scheme of the power stage of a three-level electronic transformer used for battery charging.
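The series-input / parallel-output arrangement described in Section II can be illustrated with a minimal sketch. The numeric operating point below is an illustrative assumption, not the paper's prototype.

```python
# Minimal sketch of the series-input / parallel-output scaling of an n-level
# electronic transformer. The operating point (540 V input, 2 A per level,
# n = 3) is an ILLUSTRATIVE ASSUMPTION.

def level_stress(u_in, i_level, n):
    """Per-level input voltage and total output current for n levels."""
    u_level = u_in / n   # series-connected inputs split the supply voltage
    i_out = n * i_level  # parallel-connected outputs sum the level currents
    return u_level, i_out

u_level, i_out = level_stress(u_in=540.0, i_level=2.0, n=3)
print(u_level, i_out)  # -> 180.0 6.0
```

Each level thus sees only a third of the supply voltage, while the load receives three times the per-level current, which is exactly the stress reduction the topology is built for.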

Fig. 3. PSpice model of the investigated three-level electronic DCET for battery charging

A specific feature of the scheme is that the voltage across the diodes in the rectifier bridges is three times lower than the power supply voltage, while the battery-charging current is three times higher than the power transistor current. This advantage can be taken into account in the investigation and dimensioning of the scheme.

Fig. 3 shows the PSpice equivalent model of the circuit. The transformer of each level is designed for an output power P_O. A PHILIPS magnetic core model is used, with core area A_C = 0.78 cm2, saturation induction B_sat and initial permeability mu_i as given in [3]. For the designed power, inductors L1 = L2 = L3 = 5 uH are used.

In the equivalent model, the pulse-width modulator (PWM) is replaced by the pulse generators V1, V2 and V3, which operate synchronously, without phase difference. The resistors Rp1-Rp3, Rs1-Rs3, Rr1-Rr3 and RL1-RL3 represent the active resistances of the winding conductors. The resistor Rins represents the insulation resistance of the transformers, and Rint the internal resistance of the battery.

Figs. 4 and 5 show waveforms from the PSpice simulations, demonstrating the functionality of the chosen circuit.

Fig. 4. Simulation waveforms of the DCET

Fig. 5. Simulation waveforms of the DCET: voltage and current of a power transistor and charging current of the battery

It can be noticed that:
- The input voltages of the stages (U1, U2 and U3) are equal, and three times lower than the input voltage Ui.
- The voltage across the transistors is in the admissible range (U_CE = 1.4*U1).
- The current through the transistors has a small AC component and no significant peak values.
- The charging current of the battery is three times higher than the current in each level, at a transformation coefficient k = 1.

It is necessary to investigate the possibility of stabilizing the charging current when the supply voltage and the battery voltage change. Fig. 6 shows the control characteristics of the device under test when the supply voltage changes. One can see that the current can be regulated in a range of 6 A when the supply voltage varies by dUi.

Fig. 6. Control characteristics of the circuit when the supply voltage changes

Fig. 7 shows the dependence of the charging current for different battery voltages, U(Batt) = 44, 48 and 54 V.

Fig. 7. Control characteristics of the circuit when the battery voltage changes

Two sections with different slopes are clearly visible in Fig. 6 and Fig. 7. This is due to the discontinuous and continuous operating modes of the converters. The discontinuous mode of the current through the inductance occurs at low values of the duty ratio delta, where the charging current depends less on delta.

The dependence of the maximum voltage across the power transistors on the charging current, with the battery voltage changing (U(Batt) = 44, 48 and 54 V), is shown in Fig. 8.

Fig. 8. Voltage across the power transistors versus the charge current

When the charging current changes according to the specifications (up to 6 A) and the battery voltage is in the range U_batt = (44-54) V, the voltage across the power transistors does not change much: U_CE = (1.2-1.6)*U1, where U1 = U2 = U3 is the supply voltage of each stage.
On the basis of the obtained waveforms and the dependences for the voltages and currents of the components, several formulas can be derived to help dimension these components.

The average value of the current through the power transistor is:

I_C = (1/T) * Int[0..t_i] k*(I_O/n) dt = k*delta*I_O / n ,    (1)

where k = w_s/w_p is the transformation coefficient of the power transformers and n is the number of levels (stages) of the DCET.

The maximum value of the voltage across the transistors is:

U_CE = gamma*U_im / n ,    (2)

where gamma is a coefficient accounting for the voltage loading of the transistor.

The required value of the inductance in each level is determined by the admissible pulsations of the charging current:

L_n = n*U_O*(1 - delta) / (a*f*I_O) ,    (3)

where L_n is the inductance in each level, a is the pulsation coefficient, a in [1.5 .. 3], U_O is the output voltage and f is the working frequency of the converters.
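Relations (1)-(3) can be collected into a small dimensioning helper. Both the reconstruction of the garbled formulas and the sample operating point below are assumptions, so the sketch should be checked against the original paper before reuse.

```python
# Dimensioning helper based on relations (1)-(3) as reconstructed above.
# The formula reconstruction and the numeric design point are ASSUMPTIONS.

def transistor_avg_current(k, duty, i_out, n):
    """Eq. (1): average transistor current, k = w_s / w_p."""
    return k * duty * i_out / n

def transistor_max_voltage(gamma, u_im, n):
    """Eq. (2): maximum voltage across a transistor."""
    return gamma * u_im / n

def level_inductance(n, u_out, duty, a, f, i_out):
    """Eq. (3): per-level inductance for the admissible current ripple."""
    return n * u_out * (1.0 - duty) / (a * f * i_out)

# Illustrative three-level design point (assumed values):
n, k, duty, i_out = 3, 1.0, 0.4, 6.0
print(transistor_avg_current(k, duty, i_out, n))          # ~0.8 A
print(transistor_max_voltage(1.3, 540.0, n))              # ~234 V
print(level_inductance(n, 54.0, duty, 2.0, 50e3, i_out))  # henries
```

Note how every stress quantity carries a 1/n factor, which is the quantitative form of the paper's central claim about multi-level operation.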

The choice of the number of levels depends on the supply voltage and the output voltage. In this way an optimal structure of the transformer can be obtained, with an optimal transformation coefficient k [4].

CONCLUSION

The serial connection of the rectifiers and the parallel connection of the outputs of the DCET lead to reduced working voltages of the transistors and the reactive elements, as well as to an increased battery-charging current. When the supply voltage and/or the battery voltage vary (within the specifications), a constant value of the charging current is obtained by controlling the duty ratio delta. Based on the investigations and experiments carried out, useful relations are derived for dimensioning the components of the electronic transformer.

ACKNOWLEDGMENT

The paper was developed in the frame of the NATO Research Program, Project RIG.

REFERENCES
[1] D. Yudov, At. Dimitrov, "Multi-stage step-down DC-DC converter operating against back EMF" (in Bulgarian), Electronics 2006, Sofia, Bulgaria, 2006.
[2] A. Van den Bossche and V. C. Valchev, Inductors and Transformers for Power Electronics, CRC Press, Boca Raton, FL, USA, 2005.
[3] D. Yudov, At. Dimitrov, "A Step-Down Pulse Converter", ELECTRONICS ET'04, Conference Proceedings, book 4, Sozopol, Bulgaria, 2004.
[4] D. Kovachev, "BPSK randomization of PWM in dc-dc converters", Proc. of the 5th International Conference TEHNONAV 2006, Constanta, May 2006.
[5] N. Mi, B. Sasic, J. Marshall, S. Tomasiewicz, "A novel economical single stage battery charger with power factor correction", APEC'03, Eighteenth Annual IEEE, Feb. 2003.
[6] M. Bojrup, P. Karlsson, M. Alakula, B. Simonsson, "A dual purpose battery charger for electric vehicles", PESC'98 Record, Annual IEEE, May 1998.

An Approach to Effectiveness Increasing of SPICE Macromodels

Elissaveta D. Gadjeva 1, Boryanka I. Mihova 2 and Vergil G. Manchev 3

Abstract - In this paper transformed macromodels are proposed for increasing the effectiveness of behavioral SPICE macromodels. The modifications of the PSpice library models make it possible to reduce the simulation time, the number of iterations and the order of the circuit matrix. The original and the modified models are compared and their effectiveness is evaluated.

Keywords - Behavioral models, Modified Nodal Analysis, OrCAD PSpice, Model Effectiveness

I. INTRODUCTION

Contemporary electronic devices are characterized by increasing complexity, a huge number of elements and a high degree of integration. Standard electronic circuits contain hundreds, sometimes even thousands of elements, so effectiveness in modeling these circuits is of great importance. Circuit and system simulators can run much faster today due to the availability of powerful computers and workstations, but the circuits also become more complex with each new generation of computers. This leads to huge computer resources being assigned to circuit simulation in the design process in order to verify the circuit behavior [1, 2, 3].

The analysis of large electronic and electrical circuits and systems requires repeated solutions of sparse linear and nonlinear systems of equations. The sparsity can be used to accelerate circuit and system analysis. The nodal analysis equations have the form [4, 5]:

[Y][V] = [J] ,    (1)

where [Y] is the nodal admittance matrix, [V] is the vector of unknown node voltages and [J] is the vector of the independent currents. The matrix [Y] is sparse, because it contains a high proportion of zero-valued elements [1, 2, 3]: every node is not connected to every other circuit node, and for nodal analysis nonzero-valued elements result only from direct connections. The dependence of the sparsity on the circuit size n is presented in Table I.
The sparsity increases with circuit size, and this can be used to reduce the storage requirements and the number of floating-point operations entailed in the solution of the circuit equations [1, 2].

TABLE I
DEPENDENCE OF THE SPARSITY AND THE SIMULATION TIME ON THE MATRIX SIZE

  % sparsity      Simulation time    Matrix type
  50 %            ~n^3               dense matrix
  90 % - 96 %     ~n^1.5             sparse matrix
  99 % - 99.9 %   ~n^1.1             large sparse matrix

In order to increase the effectiveness, the modified nodal analysis (MNA) is used [1, 2]. MNA allows all types of dependent and independent sources to be included in the circuit matrix. The computer implementation of this procedure is easy, which is a substantial advantage for automated solution. According to the MNA, one equation is written for each of the circuit nodes, and the equations for the voltage sources are included in the augmentation.

II. EFFECTIVENESS INCREASING

A. Effectiveness Increasing of Linear PSpice Operational Amplifier Behavioral Models

For the computer simulation of circuits containing operational amplifiers, macromodels of different complexity are used [6, 7, 8, 9]. The linear macromodel of an OpAmp is described by a voltage-controlled voltage source (VCVS), which depends on the input signal. The frequency response of the output voltage is defined in the form:

V_out(f) = H(f) * V_in(f) ,    (2)

where V_in(f) is the input voltage. The open-loop gain H(f) has the form:

H(f) = A / (1 + j*f/f_c) ,    (3)

where A is the DC open-loop gain and f_c is the cut-off frequency. The standard linear macromodels of OpAmps are based on the equivalent circuit shown in Fig. 1.

1 Elissaveta D. Gadjeva is with the Faculty of Electronics and Electronic Engineering, Technical University of Sofia, 1756 Sofia, Bulgaria
2 Boryanka I. Mihova is with the Faculty of Electronics and Electronic Engineering, Technical University of Sofia, 1756 Sofia, Bulgaria
3 Vergil G. Manchev is with the English Language Department of Engineering, Technical University of Sofia, 1756 Sofia, Bulgaria
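The single-pole open-loop model of Eq. (3) is easy to sketch numerically. The values of A and f_c below are typical illustrative choices, not parameters of a specific PSpice library macromodel.

```python
import math

# Sketch of the single-pole open-loop gain of Eq. (3):
#   H(f) = A / (1 + j*f/fc)
# A and fc are ILLUSTRATIVE ASSUMPTIONS, not library-model parameters.

def open_loop_gain(f, a0=1e5, fc=10.0):
    """Open-loop gain of the linear OpAmp macromodel at frequency f (Hz)."""
    return a0 / (1.0 + 1j * f / fc)

# At f = fc the magnitude drops by 3 dB (a factor of 1/sqrt(2)):
print(abs(open_loop_gain(10.0)) / 1e5)  # ~0.7071
# Well above fc the gain rolls off at -20 dB/decade:
print(abs(open_loop_gain(1e4)) / abs(open_loop_gain(1e5)))  # ~10
```

This is exactly the frequency behaviour a VCVS with a single RC pole reproduces inside the macromodel of Fig. 1.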

current-controlled voltage sources (CCVS) and current-controlled current sources (CCCS) increase the matrix order, as shown in Table II.

Fig. 1. Equivalent circuit of the OpAmp macromodel

Fig. 2. Modified equivalent circuit of the OpAmp macromodels

The frequency dependence of the OpAmp open-loop g
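The MNA augmentation discussed above, where a source's branch current is appended as an extra unknown and the matrix order grows, can be sketched for a minimal circuit. The element values are arbitrary assumptions; the example uses an ideal independent voltage source to show the order growing from 2 to 3.

```python
import numpy as np

# Sketch of modified nodal analysis (MNA): an ideal voltage source cannot be
# stamped into the plain nodal matrix [Y][V] = [J], so its branch current is
# appended as an extra unknown, raising the matrix order by one.
# Illustrative circuit: E = 10 V at node 1, R1 = 1 kOhm from node 1 to
# node 2, R2 = 1 kOhm from node 2 to ground (values are ASSUMPTIONS).

G1 = G2 = 1e-3  # conductances, siemens

# Plain nodal matrix for the two nodes (order 2)...
Y = np.array([[G1, -G1],
              [-G1, G1 + G2]])

# ...and the augmented MNA system (order 3): the extra row/column stamp the
# voltage source and its unknown branch current.
A = np.array([[G1, -G1, 1.0],
              [-G1, G1 + G2, 0.0],
              [1.0, 0.0, 0.0]])
b = np.array([0.0, 0.0, 10.0])

v1, v2, i_e = np.linalg.solve(A, b)
print(v1, v2, i_e)  # node voltages and the source branch current
```

The same mechanism is why each extra VCVS, CCVS or CCCS in a macromodel enlarges the circuit matrix, which is precisely the overhead the transformed macromodels aim to avoid.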