
Industrial Engineering: Concepts, Methodologies, Tools and Applications

Information Resources Management Association, USA

3 Volume Set 

Volume I

Managing Director: Lindsay Johnston
Senior Editorial Director: Heather Probst
Book Production Manager: Jennifer Romanchak
Publishing Systems Analyst: Adrienne Freeland
Assistant Acquisitions Editor: Kayla Wolfe
Development Manager: Joel Gamon
Development Editor: Chris Wozniak
Assistant Production Editor: Deanna Jo Zombro
Cover Design: Nick Newcomer

Published in the United States of America by
Engineering Science Reference (an imprint of IGI Global)
701 E. Chocolate Avenue
Hershey PA 17033
Tel: 717-533-8845
Fax: 717-533-8661
E-mail: [email protected]
Web site: http://www.igi-global.com

Copyright © 2013 by IGI Global. All rights reserved. No part of this publication may be reproduced, stored or distributed in any form or by any means, electronic or mechanical, including photocopying, without written permission from the publisher.

Product or company names used in this set are for identification purposes only. Inclusion of the names of the products or companies does not indicate a claim of ownership by IGI Global of the trademark or registered trademark.

Library of Congress Cataloging-in-Publication Data

Industrial engineering : concepts, methodologies, tools, and applications / Information Resources Management Association, editor.
v. cm.
Includes bibliographical references and index.
ISBN 978-1-4666-1945-6 (hardcover) -- ISBN 978-1-4666-1946-3 (ebook) -- ISBN 978-1-4666-1947-0 (print & perpetual access)
1. Industrial engineering. 2. Industrial engineering--Case studies. I. Information Resources Management Association.
T56.I43 2013
620--dc23
2012023210

British Cataloguing in Publication Data
A Cataloguing in Publication record for this book is available from the British Library.

The views expressed in this book are those of the authors, but not necessarily of the publisher.


Preface

The constantly changing landscape of Industrial Engineering makes it challenging for experts and practitioners to stay informed of the field’s most up-to-date research. That is why Information Science Reference is pleased to offer this three-volume reference collection that will empower students, researchers, and academicians with a strong understanding of critical issues within Industrial Engineering by providing both broad and detailed perspectives on cutting-edge theories and developments. This reference is designed to act as a single reference source on conceptual, methodological, technical, and managerial issues, as well as to provide insight into emerging trends and future opportunities within the discipline.

Industrial Engineering: Concepts, Methodologies, Tools and Applications is organized into eight distinct sections that provide comprehensive coverage of important topics. The sections are: (1) Fundamental Concepts and Theories, (2) Development and Design Methodologies, (3) Tools and Technologies, (4) Utilization and Application, (5) Organizational and Social Implications, (6) Managerial Impact, (7) Critical Issues, and (8) Emerging Trends. The following paragraphs provide a summary of what to expect from this invaluable reference tool.

Section 1, Fundamental Concepts and Theories, serves as a foundation for this extensive reference tool by addressing crucial theories essential to the understanding of Industrial Engineering. Introducing the book is “Defining, Teaching, and Assessing Engineering Design Skills” by Nikos J. Mourtos, a foundational chapter that lays the groundwork for the basic concepts and theories discussed throughout the rest of the book. Another chapter of note in Section 1 is titled “Integrating ‘Designerly’ Ways with Engineering Science” by Ian de Vere and Gavin Melles, which discusses novel techniques for adding aspects of design science to the stricter roles of engineering practice. Section 1 concludes, and leads into the following portion of the book, with the segue chapter “Tracing the Implementation of Non-Functional Requirements” by Stephan Bode and Matthias Riebisch. Where Section 1 leaves off with fundamental concepts, Section 2 discusses architectures and frameworks in place for Industrial Engineering.

Section 2, Development and Design Methodologies, presents in-depth coverage of the conceptual design and architecture of Industrial Engineering, focusing on aspects including parametric design, service design, fuzzy logic, control modeling, supply chain systems, and many more topics. Opening the section is “Learning Parametric Designing” by Marc Aurel Schnabel. This section is vital for developers and practitioners who want to measure and track the progress of Industrial Engineering through the multiple lenses of parametric design. Through case studies, this section lays excellent groundwork for later sections on present and future applications of Industrial Engineering, including, of note, “Decision Support Framework for the Selection of a Layout Type” by Jannes Slomp and Jos A.C. Bokhorst, and “Internal Supply Chain Integration” by Virpi Turkulainen. The section concludes with an excellent work by Mousumi Debnath and Mukeshwar Pandey, titled “Enhancing Engineering Education Learning Outcomes Using Project-Based Learning.”

Section 3, Tools and Technologies, presents extensive coverage of the various tools and technologies used in the implementation of Industrial Engineering. Section 3 begins where Section 2 left off, though it describes more concrete tools in place in the modeling, planning, and applications of Industrial Engineering. The first chapter, “Semantic Technologies in Motion,” by Ricardo Colomo-Palacios, lays a framework for the types of works that can be found in this section, a perfect resource for practitioners looking for the fundamentals of the types of semantic technologies currently in practice in Industrial Engineering. Section 3 is full of excellent chapters like this one, including such titles as “Optimization and Mathematical Programming to Design and Planning Issues in Cellular Manufacturing Systems under Uncertain Situations,” “Multi-Modal Assembly-Support System for Cellular Manufacturing,” and “An Estimation of Distribution Algorithm for Part Cell Formation Problem,” to name a few. Where Section 3 described specific tools and technologies at the disposal of practitioners, Section 4 describes successes, failures, best practices, and different applications of the tools and frameworks discussed in previous sections.

Section 4, Utilization and Application, describes how the broad range of Industrial Engineering efforts has been utilized and offers insight on and important lessons for their applications and impact. Section 4 includes the widest range of topics because it describes case studies, research, methodologies, frameworks, architectures, theory, analysis, and guides for implementation. Topics range from serious games, enterprise resource planning, and crisis management to air travel development and design. The first chapter in the section is titled “Using Serious Games for Collecting and Modeling Human Procurement Decisions in a Supply Chain Context,” written by Souleiman Naciri, Min-Jung Yoo, and Rémy Glardon. The breadth of topics covered in this section is also reflected in the diversity of its authors, who come from countries all over the globe, including Germany, Slovenia, Norway, Hong Kong, Malaysia, Brazil, Cyprus, Turkey, the United States, and more. Section 4 concludes with an excellent view of a case study in a new program, “UB1-HIT Dual Master’s Programme,” by David Chen, Bruno Vallespir, Jean-Paul Bourrieres, and Thecle Alix.

Section 5, Organizational and Social Implications, includes chapters discussing the organizational and social impact of Industrial Engineering. The section opens with “Process Innovation with Ambient Intelligence (AmI) Technologies in Manufacturing SMEs” by Kathryn J. Hayes and Ross Chapman. Where Section 4 focused on the many broad applications of Industrial Engineering technology, Section 5 focuses exclusively on how these technologies affect human lives, either through the way people interact with each other or through how they affect behavioral and workplace situations. Other interesting chapters of note in Section 5 include “Group Decision Making for Advanced Manufacturing Technology Selection Using the Choquet Integral” by Cengiz Kahraman, Selçuk Çebi, and Ihsan Kaya, and “Direct Building Manufacturing of Homes with Digital Fabrication” by Lawrence Sass.
Section 5 concludes with a fascinating study of a new development in Industrial Engineering, “Firm-Specific Factors and the Degree of Innovation Openness” by Valentina Lazzarotti, Raffaella Manzini, and Luisa Pellegrini.

Section 6, Managerial Impact, presents focused coverage of Industrial Engineering as it relates to effective uses of offshoring, network marketing, knowledge management, e-government, knowledge dissemination, and many more topics. This section serves as a vital resource for developers who want to utilize the latest research to bolster the capabilities and functionalities of their processes. The section begins with “Offshoring Process,” a great look into whether offshoring practices could help a given business, alongside best practices and some new trends in the field. The 13 chapters in this section offer unmistakable value to managers looking to implement new strategies that work at larger bureaucratic levels. The section concludes with “Research Profiles” by Gretchen Jordan, Jonathon Mote, and Jerald Hage. Where Section 6 leaves off, Section 7 picks up with a focus on some of the more theoretical material of this compendium.

Section 7, Critical Issues, presents coverage of academic and research perspectives on Industrial Engineering tools and applications. The section begins with “Cultural Models and Variations” by Yongjiang Shi and Zheng Liu. Other issues covered in detail in Section 7 include design paradigms, knowledge dynamics, layout structuring, design ethos, and much more. The section concludes with “Engineer-to-Order” by Ephrem Eyob and Richard Addo-Tenkorang, a great transitional chapter between Sections 7 and 8 because it examines an important trend going into the future of the field. The last chapter offers a theoretical look into future and potential technologies, a topic covered in more detail in Section 8.

Section 8, Emerging Trends, highlights areas for future research within the field of Industrial Engineering, opening with “Advanced Technologies for Transient Faults Detection and Compensation” by Matteo Sonza Reorda, Luca Sterpone, and Massimo Violante. Section 8 contains chapters that look at what might happen in the coming years to extend the already staggering range of applications for Industrial Engineering. Other chapters of note include “Embedded RFID Solutions: Challenges for Product Design and Development” and “Green Computing as an Ecological Aid in Industry.” The final chapter of the book looks at an emerging field within Industrial Engineering, in the excellent contribution “Zero-Downtime Reconfiguration of Distributed Control Logic in Industrial Automation and Control” by Thomas Strasser and Alois Zoitl.

Although the primary organization of the contents in this multi-volume work is based on its eight sections, offering a progression of coverage of the important concepts, methodologies, technologies, applications, social issues, and emerging trends, the reader can also identify specific contents by utilizing the extensive indexing system listed at the end of each volume. Furthermore, to ensure that the scholar, researcher, and educator have access to the entire contents of this multi-volume set, as well as additional coverage that could not be included in the print version of this publication, the publisher will provide unlimited multi-user electronic access to the online aggregated database of this collection for the life of the edition, free of charge when a library purchases a print copy. This aggregated database provides far more contents than can be included in the print version, in addition to continual updates. This unlimited access, coupled with the continuous updates to the database, ensures that the most current research is accessible to knowledge seekers.

As a comprehensive collection of research on the latest findings related to using technology to provide various services, Industrial Engineering: Concepts, Methodologies, Tools and Applications provides researchers, administrators, and all audiences with a complete understanding of the development of applications and concepts in Industrial Engineering.
Given the vast number of issues concerning usage, failure, success, policies, strategies, and applications of Industrial Engineering in countries around the world, Industrial Engineering: Concepts, Methodologies, Tools and Applications addresses the demand for a resource that encompasses the most pertinent research in technologies being employed to globally bolster the knowledge and applications of Industrial Engineering.

Table of Contents

Volume I

Section 1
Fundamental Concepts and Theories

This section serves as a foundation for this exhaustive reference tool by addressing underlying principles essential to the understanding of Industrial Engineering. Chapters found within these pages provide an excellent framework in which to position Industrial Engineering within the field of information science and technology. Insight regarding the critical incorporation of global measures into Industrial Engineering is addressed, while crucial stumbling blocks of this field are explored. With 12 chapters comprising this foundational section, the reader can learn and choose from a compendium of expert research on the elemental theories underscoring the Industrial Engineering discipline.

Chapter 1
Defining, Teaching, and Assessing Engineering Design Skills .............................................................. 1
Nikos J. Mourtos, San Jose State University, USA

Chapter 2
Why Get Your Engineering Programme Accredited? ........................................................................... 18
Peter Goodhew, University of Liverpool, UK

Chapter 3
Quality and Environmental Management Systems in the Fashion Supply Chain ................................ 21
Chris K. Y. Lo, The Hong Kong Polytechnic University, Hong Kong

Chapter 4
People-Focused Knowledge Sharing Initiatives in Medium-High and High Technology Companies: Organizational Facilitating Conditions and Impact on Innovation and Business Competitiveness ...... 40
Nekane Aramburu, University of Deusto, Spain
Josune Sáenz, University of Deusto, Spain

Chapter 5
Integrating ‘Designerly’ Ways with Engineering Science: A Catalyst for Change within Product Design and Development ...................................................................................................................... 56
Ian de Vere, Swinburne University of Technology, Australia
Gavin Melles, Swinburne University of Technology, Australia

Chapter 6
E-Learning for SMEs: Challenges, Potential and Impact ..................................................................... 79
Asbjorn Rolstadas, Norwegian University of Science and Technology, Norway
Bjorn Andersen, Norwegian University of Science and Technology, Norway
Manuel Fradinho, Cyntelix, the Netherlands

Chapter 7
Categorization of Losses across Supply Chains: Cases of Manufacturing Firms ................................ 98
Priyanka Singh, Jet Airways Limited, India
Faraz Syed, Shri Shankaracharya Group of Institutions, India
Geetika Sinha, ICICI Lombard, India

Chapter 8
Collaborative Demand and Supply Planning Networks ..................................................................... 108
Hans-Henrik Hvolby, Aalborg University, Denmark
Kenn Steger-Jensen, Aalborg University, Denmark
Erlend Alfnes, Norwegian University of Science and Technology, Norway
Heidi C. Dreyer, Norwegian University of Science and Technology, Norway

Chapter 9
Instructional Design of an Advanced Interactive Discovery Environment: Exploring Team Communication and Technology Use in Virtual Collaborative Engineering Problem Solving .......... 117
YiYan Wu, Syracuse University, USA
Tiffany A. Koszalka, Syracuse University, USA

Chapter 10
Modes of Open Innovation in Service Industries and Process Innovation: A Comparative Analysis ............................................................................................................................................... 137
Sean Kask, INGENIO (CSIC-UPV), Spain

Chapter 11
Production Competence and Knowledge Generation for Technology Transfer: A Comparison between UK and South African Case Studies ..................................................................................... 159
Ian Hipkin, École Supérieure de Commerce de Pau, France

Chapter 12
Tracing the Implementation of Non-Functional Requirements .......................................................... 172
Stephan Bode, Ilmenau University of Technology, Germany
Matthias Riebisch, Ilmenau University of Technology, Germany

Section 2
Development and Design Methodologies

This section offers in-depth coverage of conceptual architecture frameworks, providing the reader with a comprehensive understanding of the emerging developments within the field of Industrial Engineering. Research fundamentals imperative to the understanding of developmental processes within Industrial Engineering are offered. From broad examinations to specific discussions on methodology, the research found within this section spans the discipline while offering detailed, specific discussions. From basic designs to abstract development, these chapters serve to expand the reaches of development and design technologies within the Industrial Engineering community. This section includes 15 contributions from researchers throughout the world on the topic of Industrial Engineering.

Chapter 13
Learning Parametric Designing .......................................................................................................... 197
Marc Aurel Schnabel, The Chinese University of Hong Kong, Hong Kong

Chapter 14
Service Design: New Methods for Innovating Digital User Experiences for Leisure ........................ 211
Satu Miettinen, Savonia University of Applied Sciences, Finland

Chapter 15
A Mass Customisation Implementation Model for the Total Design Process of the Fashion System ................................................................................................................................................. 223
Bernice Pan, Seamsystemic Design Research, UK

Chapter 16
Integration of Fuzzy Logic Techniques into DSS for Profitability Quantification in a Manufacturing Environment ........................................................................................................................................ 242
Irraivan Elamvazuthi, Universiti Teknologi PETRONAS, Malaysia
Pandian Vasant, Universiti Teknologi PETRONAS, Malaysia
Timothy Ganesan, Universiti Teknologi PETRONAS, Malaysia

Chapter 17
Control Model for Intelligent and Demand-Driven Supply Chains ................................................... 262
Jan Ola Strandhagen, SINTEF Technology and Society, Norway
Heidi Carin Dreyer, Norwegian University of Science and Technology, Norway
Anita Romsdal, Norwegian University of Science and Technology, Norway

Chapter 18
Reducing Design Margins by Adaptive Compensation for Thermal and Aging Variations ............... 284
Zhenyu Qi, University of Virginia, USA
Yan Zhang, University of Virginia, USA
Mircea Stan, University of Virginia, USA

Chapter 19
Modeling Closed Loop Supply Chain Systems .................................................................................. 313
Roberto Poles, University of Melbourne, Australia

Chapter 20
A Production Planning Optimization Model for Maximizing Battery Manufacturing Profitability ......................................................................................................................................... 343
Hesham K. Alfares, King Fahd University of Petroleum & Minerals, Saudi Arabia

Chapter 21
Multi-Objective Optimization of Manufacturing Processes Using Evolutionary Algorithms ........... 352
M. Kanthababu, Anna University, India

Chapter 22
Decision Support Framework for the Selection of a Layout Type ..................................................... 377
Jannes Slomp, University of Groningen, The Netherlands
Jos A.C. Bokhorst, University of Groningen, The Netherlands

Chapter 23
Petri Net Model Based Design and Control of Robotic Manufacturing Cells ................................... 393
Gen’ichi Yasuda, Nagasaki Institute of Applied Science, Japan

Chapter 24
Lean Thinking Based Investment Planning at Design Stage of Cellular/Hybrid Manufacturing Systems ............................................................................................................................................... 409
M. Bulent Durmusoglu, Istanbul Technical University, Turkey
Goksu Kaya, Istanbul Technical University, Turkey

Chapter 25
Internal Supply Chain Integration: Effective Integration Strategies in the Global Context ............... 430
Virpi Turkulainen, Aalto University, Finland

Chapter 26
Equipment Replacement Decisions Models with the Context of Flexible Manufacturing Cells ....... 453
Ioan Constantin Dima, Valahia University of Târgovişte, Romania
Janusz Grabara, Częstochowa University of Technology, Poland
Mária Nowicka-Skowron, Częstochowa University of Technology, Poland

Chapter 27
Enhancing Engineering Education Learning Outcomes Using Project-Based Learning: A Case Study ....................................................................................................................................... 464
Mousumi Debnath, Jaipur Engineering College and Research Centre, India
Mukeshwar Pandey, Jaipur Engineering College and Research Centre, India

Section 3
Tools and Technologies

This section presents extensive coverage of the various tools and technologies available in the field of Industrial Engineering that practitioners and academicians alike can utilize to develop different techniques. These chapters enlighten readers about fundamental research on the many tools facilitating the burgeoning field of Industrial Engineering. It is through these rigorously researched chapters that the reader is provided with countless examples of the up-and-coming tools and technologies emerging from the field of Industrial Engineering. With 14 chapters, this section offers a broad treatment of some of the many tools and technologies within the Industrial Engineering field.

Chapter 28
Semantic Technologies in Motion: From Factories Control to Customer Relationship Management ....................................................................................................................................... 477
Ricardo Colomo-Palacios, Universidad Carlos III de Madrid, Spain

Chapter 29
Similarity-Based Cluster Analysis for the Cell Formation Problem .................................................. 499
Riccardo Manzini, University of Bologna, Italy
Riccardo Accorsi, University of Bologna, Italy
Marco Bortolini, University of Bologna, Italy

Chapter 30
Performance Comparison of Cellular Manufacturing Configurations in Different Demand Profiles ................................................................................................................................................ 522
Paolo Renna, University of Basilicata, Italy
Michele Ambrico, University of Basilicata, Italy

Chapter 31
Optimization and Mathematical Programming to Design and Planning Issues in Cellular Manufacturing Systems under Uncertain Situations .......................................................................... 539
Vahidreza Ghezavati, Islamic Azad University, Iran
Mohammad Saidi-Mehrabad, University of Science and Technology, Iran
Mohammad Saeed Jabal-Ameli, University of Science and Technology, Iran
Ahmad Makui, University of Science and Technology, Iran
Seyed Jafar Sadjadi, University of Science and Technology, Iran

Chapter 32
Multi-Modal Assembly-Support System for Cellular Manufacturing ............................................... 559
Feng Duan, Nankai University, China
Jeffrey Too Chuan Tan, The University of Tokyo, Japan
Ryu Kato, The University of Electro-Communications, Japan
Chi Zhu, Maebashi Institute of Technology, Japan
Tamio Arai, The University of Tokyo, Japan

Chapter 33
Modeling and Simulation of Discrete Event Robotic Systems Using Extended Petri Nets ............... 577
Gen’ichi Yasuda, Nagasaki Institute of Applied Science, Japan

Chapter 34
Human-Friendly Robots for Entertainment and Education ................................................................ 594
Jorge Solis, Waseda University, Japan & Karlstad University, Sweden
Atsuo Takanishi, Waseda University, Japan

Chapter 35
Dual-SIM Phones: A Disruptive Technology? .................................................................................... 617
Dickinson C. Odikayor, Landmark University, Nigeria
Ikponmwosa Oghogho, Landmark University, Nigeria
Samuel T. Wara, Federal University Abeokuta, Nigeria
Abayomi-Alli Adebayo, Igbinedion University Okada, Nigeria

Chapter 36
Data Envelopment Analysis in Environmental Technologies ............................................................ 625
Peep Miidla, University of Tartu, Estonia

Chapter 37
Constrained Optimization of JIT Manufacturing Systems with Hybrid Genetic Algorithm .............. 643
Alexandros Xanthopoulos, Democritus University of Thrace, Greece
Dimitrios E. Koulouriotis, Democritus University of Thrace, Greece

Chapter 38
Comparison of Connected vs. Disconnected Cellular Systems: A Case Study .................................. 663
Gürsel A. Süer, Ohio University, USA
Royston Lobo, S.S. White Technologies Inc., USA

Chapter 39
AutomatL@bs Consortium: A Spanish Network of Web-based Labs for Control Engineering Education ............................................................................................................................................ 679
Sebastián Dormido, Universidad Nacional de Educación a Distancia, Spain
Héctor Vargas, Pontificia Universidad Católica de Valparaíso, Chile
José Sánchez, Universidad Nacional de Educación a Distancia, Spain

Volume II

Chapter 40
An Estimation of Distribution Algorithm for Part Cell Formation Problem ...................................... 699
Saber Ibrahim, University of Sfax, Tunisia
Bassem Jarboui, University of Sfax, Tunisia
Abdelwaheb Rebaï, University of Sfax, Tunisia

Chapter 41
A LabVIEW-Based Remote Laboratory: Architecture and Implementation ...................................... 726
Yuqiu You, Morehead State University, USA

Section 4
Utilization and Application

This section discusses a variety of applications and opportunities available that can be considered by practitioners in developing viable and effective Industrial Engineering programs and processes. This section includes 14 chapters that review topics from case studies in Cyprus to best practices in Africa and ongoing research in the United States. Further chapters discuss Industrial Engineering in a variety of settings (air travel, education, gaming, etc.). Contributions included in this section provide excellent coverage of today’s IT community and how research into Industrial Engineering is impacting the social fabric of our present-day global village.

Chapter 42
Using Serious Games for Collecting and Modeling Human Procurement Decisions in a Supply Chain Context ..................................................................................................................................... 744
Souleiman Naciri, Laboratory for Production Management and Processes, Ecole Polytechnique Fédérale de Lausanne, Switzerland
Min-Jung Yoo, Laboratory for Production Management and Processes, Ecole Polytechnique Fédérale de Lausanne, Switzerland
Rémy Glardon, Laboratory for Production Management and Processes, Ecole Polytechnique Fédérale de Lausanne, Switzerland

Chapter 43
Serious Gaming Supporting Competence Development in Sustainable Manufacturing .................... 766
Heiko Duin, BIBA – Bremer Institut für Produktion und Logistik GmbH, Germany
Gregor Cerinšek, Institute for Innovation and Development of University of Ljubljana, Slovenia
Manuel Fradinho, The Foundation for Scientific and Industrial Research at the Norwegian Institute of Technology, Norway
Marco Taisch, Politecnico di Milano, Italy

Chapter 44
Reengineering for Enterprise Resource Planning (ERP) Systems Implementation: An Empirical Analysis of Assessing Critical Success Factors (CSFs) of Manufacturing Organizations .................. 791
C. Annamalai, Universiti Sains Malaysia, Malaysia
T. Ramayah, Universiti Sains Malaysia, Malaysia

Chapter 45
Optimal Pricing and Inventory Decisions for Fashion Retailers under Value-At-Risk Objective: Applications and Review .................................................................................................................... 807
Chun-Hung Chiu, City University of Hong Kong, Hong Kong
Jin-Hui Zheng, The Hong Kong Polytechnic University, Hong Kong
Tsan-Ming Choi, The Hong Kong Polytechnic University, Hong Kong

Chapter 46
Implementation of Rapid Manufacturing Systems in the Jewellery Industry in Brazil: Some Experiences in Small and Medium-Sized Companies ........................................................................ 817
Juan Carlos Campos Rúbio, Universidade Federal de Minas Gerais, Brasil
Eduardo Romeiro Filho, Universidade Federal de Minas Gerais, Brasil

Chapter 47
Cases Illustrating Risks and Crisis Management ................................................................................ 838
Simona Mihai Yiannaki, European University, Cyprus

Chapter 48
Aircraft Development and Design: Enhancing Product Safety through Effective Human Factors Engineering Design Solutions ............................................................................................................ 858
Dujuan B. Sevillian, Large Aircraft Manufacturer, USA

Chapter 49
Adoption of Information Technology Governance in the Electronics Manufacturing Sector in Malaysia .............................................................................................................................................. 887
Wil Ly Teo, Universiti Teknologi Malaysia
Khong Sin Tan, Multimedia University, Malaysia

Chapter 50
An Environmentally Integrated Manufacturing Analysis Combined with Waste Management in a Car Battery Manufacturing Plant ........................................................................................................ 907
Suat Kasap, Hacettepe University, Turkey
Sibel Uludag Demirer, Villanova University, USA
Sedef Ergün, Drogsan Pharmaceuticals, Turkey

Chapter 51
Ghabbour Group ERP Deployment: Learning From Past Technology Failures ................................. 933
M. S. Akabawi, American University in Cairo, Egypt

Chapter 52
Matching Manufacturing and Retailing Models in Fashion ............................................................... 959
Simone Guercini, University of Florence, Italy

Chapter 53
Production Information Systems Usability in Jordan ......................................................................... 975
Emad Abu-Shanab, Yarmouk University, Jordan
Heyam Al-Tarawneh, Ministry of Education, Jordan

Chapter 54
Research into the Path Evolution of Manufacturing in the Transitional Period in Mainland China .................................................................................................................................................... 990
Tao Chen, SanJiang University, China, Nanjing Normal University, China, & Harbin Institute of Technology, China
Li Kang, SanJiang University, China, & Nanjing Normal University, China
Zhengfeng Ma, Nanjing Normal University, China
Zhiming Zhu, Hohai University, China

Chapter 55
UB1-HIT Dual Master’s Programme: A Double Complementary International Collaboration Approach ........................................................................................................................................... 1001
David Chen, IMS-University of Bordeaux 1, France
Bruno Vallespir, IMS-University of Bordeaux 1, France
Jean-Paul Bourrières, IMS-University of Bordeaux 1, France
Thècle Alix, IMS-University of Bordeaux 1, France

Section 5
Organizational and Social Implications

This section includes a wide range of research pertaining to the social and behavioral impact of Industrial Engineering around the world. Chapters introducing this section critically analyze and discuss trends in Industrial Engineering, such as participation, attitudes, and organizational change. Additional chapters included in this section look at process innovation and group decision making. Also investigating a concern within the field of Industrial Engineering is research which discusses the effect of customer power on Industrial Engineering. With 14 chapters, the discussions presented in this section offer research into the integration of global Industrial Engineering as well as implementation of ethical and workflow considerations for all organizations.

Chapter 56
Process Innovation with Ambient Intelligence (AmI) Technologies in Manufacturing SMEs: Absorptive Capacity Limitations ...................................................................................................... 1026
Kathryn J. Hayes, University of Western Sydney, Australia
Ross Chapman, Deakin University Melbourne, Australia

Chapter 57
Teaching Technology Computer Aided Design (TCAD) Online ...................................................... 1043
Chinmay K Maiti, Indian Institute of Technology, India
Ananda Maiti, Indian Institute of Technology, India

Chapter 58
Implementing Business Intelligence in the Dynamic Beverages Sales and Distribution Environment ...................................................................................................................................... 1064
Sami Akabawi, American University in Cairo, Egypt
Heba Hodeeb, American University in Cairo, Egypt

Chapter 59
Sharing Scientific and Social Knowledge in a Performance Oriented Industry: An Evaluation Model ................................................................................................................................................ 1085
Haris Papoutsakis, Technological Education Institute of Crete, Greece

Chapter 60
Group Decision Making for Advanced Manufacturing Technology Selection Using the Choquet Integral .............................................................................................................................................. 1115
Cengiz Kahraman, Istanbul Technical University, Turkey
Selçuk Çebi, Karadeniz Technical University, Turkey
İhsan Kaya, Selçuk University, Turkey

Chapter 61
Operator Assignment Decisions in a Highly Dynamic Cellular Environment ................................. 1135
Gürsel A. Süer, Ohio University, USA
Omar Alhawari, Royal Hashemite Court, Jordan

Chapter 62
Capacity Sharing Issue in an Electronic Co-Opetitive Network: A Simulative Approach ............... 1153
Paolo Renna, University of Basilicata, Italy
Pierluigi Argoneto, University of Basilicata, Italy

Chapter 63
Evaluation of Remote Interface Component Alternatives for Teaching Tele-Robotic Operation ........................................................................................................................................... 1180
Goldstain Ofir, Tel-Aviv University, Israel
Ben-Gal Irad, Tel-Aviv University, Israel
Bukchin Yossi, Tel-Aviv University, Israel

Chapter 64
Cell Loading and Family Scheduling for Jobs with Individual Due Dates ...................................... 1201
Gürsel A. Süer, Ohio University, USA
Emre M. Mese, D.E. Foxx & Associates, Inc., USA

Chapter 65
Evaluation of Key Metrics for Performance Measurement of a Lean Deployment Effort ............... 1220
Edem G. Tetteh, Paine College, USA
Ephrem Eyob, Virginia State University, USA
Yao Amewokunu, Virginia State University, USA

Chapter 66
Direct Building Manufacturing of Homes with Digital Fabrication ................................................. 1231
Lawrence Sass, Massachusetts Institute of Technology, USA

Chapter 67
eRiskGame: A Persistent Browser-Based Game for Supporting Project-Based Learning in the Risk Management Context ................................................................................................................ 1243
Túlio Acácio Bandeira Galvão, Rural Federal University of the Semi-Arid – UFERSA, Brazil
Francisco Milton Mendes Neto, Rural Federal University of the Semi-Arid – UFERSA, Brazil
Mara Franklin Bonates, Rural Federal University of the Semi-Arid – UFERSA, Brazil

Chapter 68
Effect of Customer Power on Supply Chain Integration and Performance ...................................... 1260
Xiande Zhao, Chinese University of Hong Kong, Hong Kong
Baofeng Huo, Xi’an Jiaotong University, China
Barbara B. Flynn, Indiana University, USA
Jeff Hoi Yan Yeung, Chinese University of Hong Kong, Hong Kong

Chapter 69
Firm-Specific Factors and the Degree of Innovation Openness ....................................................... 1288
Valentina Lazzarotti, Carlo Cattaneo University, Italy
Raffaella Manzini, Carlo Cattaneo University, Italy
Luisa Pellegrini, University of Pisa, Italy

Section 6
Managerial Impact

This section presents contemporary coverage of the social implications of Industrial Engineering, more specifically related to the corporate and managerial utilization of information sharing technologies and applications, and how these technologies can be extrapolated to be used in Industrial Engineering. Core ideas such as service delivery, gender evaluation, public participation, and other determinants that affect the intention to adopt technological innovations in Industrial Engineering are discussed. Equally crucial, chapters within this section discuss how leaders can utilize Industrial Engineering applications to get the best outcomes from their shareholders and their customers.

Chapter 70
Offshoring Process: A Comparative Investigation of Danish and Japanese Manufacturing Companies ......................................................................................................................................... 1312
Dmitrij Slepniov, Aalborg University, Denmark
Brian Vejrum Wæhrens, Aalborg University, Denmark
Hiroshi Katayama, Waseda University, Japan

Chapter 71
Network Marketing and Supply Chain Management for Effective Operations Management .......... 1336
Raj Selladurai, Indiana University Northwest, USA

Chapter 72
Knowledge Management in SMEs: A Mixture of Innovation, Marketing and ICT: Analysis of Two Case Studies .............................................................................................................................. 1350
Saïda Habhab-Rave, ISTEC, Paris, France

Chapter 73
Developments in Modern Operations Management and Cellular Manufacturing ............................ 1362
Vladimír Modrák, Technical University of Kosice, Slovakia (Slovak Republic)
Pavol Semančo, Technical University of Kosice, Slovakia (Slovak Republic)

Chapter 74
Fashion Supply Chain Management through Cost and Time Minimization from a Network Perspective ........................................................................................................................................ 1382
Anna Nagurney, University of Massachusetts Amherst, USA
Min Yu, University of Massachusetts Amherst, USA

Volume III

Chapter 75
An Exploratory Study on Product Lifecycle Management in the Fashion Chain: Evidences from the Italian Leather Luxury Industry ................................................................................................... 1402
Romeo Bandinelli, Università degli Studi di Firenze, Italy
Sergio Terzi, Università degli Studi di Bergamo, Italy

Chapter 76
Knowledge Dissemination in Portals ................................................................................................. 1418
Steven Woods, Boeing Phantom Works, USA
Stephen Poteet, Boeing Phantom Works, USA
Anne Kao, Boeing Phantom Works, USA
Lesley Quach, Boeing Phantom Works, USA

Chapter 77
A Comparative Analysis of Activity-Based Costing and Traditional Costing Systems: The Case of Egyptian Metal Industries Company ............................................................................................ 1429
Khaled Samaha, American University in Cairo, Egypt
Sara Abdallah, British University in Egypt, Egypt

Chapter 78
Complex Real-Life Supply Chain Planning Problems ..................................................................... 1441
Behnam Fahimnia, University of South Australia, Australia
Mohammad Hassan Ebrahimi, InfoTech International Company, Iran
Reza Molaei, Iran Broadcasting Services, Iran

Chapter 79
E-Government Clusters: From Framework to Implementation ........................................................ 1467
Kristian J. Sund, Middlesex University Business School, UK
Ajay Kumar Reddy Adala, Centre for e-Governance, India

Chapter 80
Hybrid Algorithms for Manufacturing Rescheduling: Customised vs. Commodity Production ...... 1488
Luisa Huaccho Huatuco, University of Leeds, UK
Ani Calinescu, University of Oxford, UK

Chapter 81
Negotiation Protocol Based on Budget Approach for Adaptive Manufacturing Scheduling ............ 1517
Paolo Renna, University of Basilicata, Italy
Rocco Padalino, University of Basilicata, Italy

Chapter 82
Research Profiles: Prolegomena to a New Perspective on Innovation Management ....................... 1539
Gretchen Jordan, Sandia National Laboratories, USA
Jonathon Mote, Southern Illinois University, USA
Jerald Hage, University of Maryland, USA

Section 7
Critical Issues

This section contains 13 chapters, giving a wide variety of perspectives on Industrial Engineering and its implications. Such perspectives include readings in privacy, gender, ethics, and several more. The section also discusses new ethical considerations within social constructivism and gender gaps. Within the chapters, the reader is presented with an in-depth analysis of the most current and relevant issues within this growing field of study. Crucial questions are addressed and alternatives offered, and topics such as creative regions in Europe, ethos as an enabler of organizational knowledge creation, and the design of manufacturing cells based on graph theory are discussed.

Chapter 83
Cultural Models and Variations ........................................................................................................ 1560
Yongjiang Shi, Institute for Manufacturing, University of Cambridge, UK
Zheng Liu, University of Cambridge, UK

Chapter 84
New Design Paradigm: Shaping and Employment ........................................................................... 1574
Vladimir M. Sedenkov, Belarusian State University, Belarus

Chapter 85
Dynamics in Knowledge ................................................................................................................... 1595
Shigeki Sugiyama, University of Gifu, Japan

Chapter 86
Tool and Information Centric Design Process Modeling: Three Case Studies ................................ 1613
William Stuart Miller, Clemson University, USA
Joshua D. Summers, Clemson University, USA

Chapter 87
Application of Dynamic Analysis in a Centralised Supply Chain .................................................... 1638
Mu Niu, Northumbria University, UK
Petia Sice, Northumbria University, UK
Ian French, University of Teesside, UK
Erik Mosekilde, The Technical University of Denmark, Denmark

Chapter 88
The Drivers for a Sustainable Chemical Manufacturing Industry .................................................... 1659
George M. Hall, University of Central Lancashire, UK
Joe Howe, University of Central Lancashire, UK

Chapter 89
Cellular or Functional Layout? ......................................................................................................... 1680
Abdessalem Jerbi, University of Sfax, Tunisia
Hédi Chtourou, University of Sfax, Tunisia

Chapter 90
Random Dynamical Network Automata for Nanoelectronics: A Robustness and Learning Perspective ........................................................................................................................................ 1699
Christof Teuscher, Portland State University, USA
Natali Gulbahce, Northeastern University, USA
Thimo Rohlf, Genopole, France
Alireza Goudarzi, Portland State University, USA

Chapter 91
Creative Regions in Europe: Exploring Creative Industry Agglomeration and the Wealth of European Regions ............................................................................................................................. 1719
Blanca de-Miguel-Molina, Universitat Politècnica de València, Spain
José-Luis Hervás-Oliver, Universitat Politècnica de València, Spain
Rafael Boix, Universitat de València, Spain
María de-Miguel-Molina, Universitat Politècnica de València, Spain

Chapter 92
Design of Manufacturing Cells Based on Graph Theory .................................................................. 1734
José Francisco Ferreira Ribeiro, University of São Paulo, Brazil

Chapter 93
Ethos as Enablers of Organisational Knowledge Creation ............................................................... 1749
Yoshito Matsudaira, Japan Advanced Institute of Science and Technology, Japan

Chapter 94
Engineering Design as Research ....................................................................................................... 1766
Timothy L.J. Ferris, Defence and Systems Institute, University of South Australia, Australia

Chapter 95
Engineer-to-Order: A Maturity Concurrent Engineering Best Practice in Improving Supply Chains ................................................................................................................................................ 1780
Richard Addo-Tenkorang, University of Vaasa, Finland
Ephrem Eyob, Virginia State University, USA

Section 8
Emerging Trends

This section highlights research potential within the field of Industrial Engineering while exploring uncharted areas of study for the advancement of the discipline. Introducing this section are chapters that set the stage for future research directions and topical suggestions for continued debate, centering on the new venues and forums for discussion. A pair of chapters on supply chain management and green computing makes up the middle of this final section of 14 chapters, and the book concludes with a look ahead into the future of the Industrial Engineering field, with “Zero-Downtime Reconfiguration of Distributed Control Logic in Industrial Automation and Control.” In all, this text will serve as a vital resource to practitioners and academics interested in the best practices and applications of the burgeoning field of Industrial Engineering.

Chapter 96
Advanced Technologies for Transient Faults Detection and Compensation .................................... 1798
Matteo Sonza Reorda, Politecnico di Torino, Italy
Luca Sterpone, Politecnico di Torino, Italy
Massimo Violante, Politecnico di Torino, Italy

Chapter 97
Augmented Reality for Collaborative Assembly Design in Manufacturing Sector .......................... 1821
Rui (Irene) Chen, The University of Sydney, Australia
Xiangyu Wang, The University of Sydney, Australia
Lei Hou, The University of Sydney, Australia

Chapter 98
E-Business/ICT and Carbon Emissions ............................................................................................ 1833
Lan Yi, China University of Geosciences (Wuhan), China

Chapter 99
Building for the Future: Systems Implementation in a Construction Organization ......................... 1853
Hafez Salleh, University of Malaya, Malaysia
Eric Lou, University of Salford, UK

Chapter 100
Embedded RFID Solutions: Challenges for Product Design and Development ............................... 1873
Álvaro M. Sampaio, Polytechnic Institute of Cávado and Ave, Portugal & University of Minho, Portugal
António J. Pontes, University of Minho, Portugal
Ricardo Simões, Polytechnic Institute of Cávado and Ave, Portugal & University of Minho, Portugal

Chapter 101
Future Trends in SCM ...................................................................................................................... 1885
Reza Zanjirani Farahani, Kingston University London, UK
Faraz Dadgostari, Amirkabir University of Technology, Iran
Ali Tirdad, University of British Columbia, Canada

Chapter 102
Green Computing as an Ecological Aid in Industry ......................................................................... 1903
Oliver Avram, Ecole Polytechnique Fédérale de Lausanne, Switzerland
Ian Stroud, Ecole Polytechnique Fédérale de Lausanne, Switzerland
Paul Xirouchakis, Ecole Polytechnique Fédérale de Lausanne, Switzerland

Chapter 103
Improving Energy-Efficiency of Scientific Computing Clusters ..................................................... 1916
Tapio Niemi, Helsinki Institute of Physics, Finland
Jukka Kommeri, Helsinki Institute of Physics, Finland
Ari-Pekka Hameri, University of Lausanne, Switzerland

Chapter 104
Organic Solar Cells Modeling and Simulation ................................................................................. 1934
Mihai Razvan Mitroi, Polytechnic University of Bucharest, Romania
Laurentiu Fara, Polytechnic University of Bucharest, Romania & Academy of Romanian Scientists, Romania
Andrei Galbeaza Moraru, Polytechnic University of Bucharest, Romania

Chapter 105
Programming Robots in Kindergarten to Express Identity: An Ethnographic Analysis ................... 1952
Marina Umaschi Bers, Tufts University, USA
Alyssa B. Ettinger, Tufts University, USA

Chapter 106
Prototyping of Robotic Systems in Surgical Procedures and Automated Manufacturing Processes ........................................................................................................................................... 1969
Zheng (Jeremy) Li, University of Bridgeport, USA

Chapter 107
Software Process Lines: A Step towards Software Industrialization ............................................... 1988
Mahmood Niazi, Keele University, UK & King Fahd University of Petroleum and Minerals, Saudi Arabia
Sami Zahran, Process Improvement Consultant, UK

Chapter 108
Super High Efficiency Multi-Junction Solar Cells and Concentrator Solar Cells ............................ 2003
Masafumi Yamaguchi, Toyota Technological Institute, Japan

Chapter 109
Zero-Downtime Reconfiguration of Distributed Control Logic in Industrial Automation and Control .............................................................................................................................................. 2024
Thomas Strasser, AIT Austrian Institute of Technology, Austria
Alois Zoitl, Vienna University of Technology, Austria
Martijn Rooker, PROFACTOR GmbH, Austria

Section 1

Fundamental Concepts and Theories

This section serves as a foundation for this exhaustive reference tool by addressing underlying principles essential to the understanding of Industrial Engineering. Chapters found within these pages provide an excellent framework in which to position Industrial Engineering within the field of information science and technology. Insight regarding the critical incorporation of global measures into Industrial Engineering is addressed, while crucial stumbling blocks of this field are explored. With 12 chapters comprising this foundational section, the reader can learn and choose from a compendium of expert research on the elemental theories underscoring the Industrial Engineering discipline.


Chapter 1

Defining, Teaching, and Assessing Engineering Design Skills

Nikos J. Mourtos
San Jose State University, USA

DOI: 10.4018/978-1-4666-1945-6.ch001

ABSTRACT

The paper discusses a systematic approach for defining, teaching, and assessing engineering design skills. Although the examples presented in the paper are from the field of aerospace engineering, the principles apply to engineering design in general. What makes the teaching of engineering design particularly challenging is that the necessary skills and attributes are both technical and non-technical and come from the cognitive as well as the affective domains. Each set of skills requires a different approach to teach and assess. Implementing a variety of approaches for a number of years at SJSU has shown that it is just as necessary to teach affective skills, as it is to teach cognitive skills. As one might expect, each set of skills presents its own challenges.

INTRODUCTION

Design is the heart of engineering practice. In fact, many engineering experts consider design as being synonymous with engineering. Yet engineering schools have come under increasing criticism since World War II for overemphasizing analytical approaches and engineering science at the expense of hands-on design skills (Seely, 1999; Petrosky, 2000). As the editor of Machine Design put it, schools are being charged with not responding to industry needs for hands-on design talent, but instead are grinding out legions of research scientists (Curry, 1991).

In response to this criticism, and to increase student retention, many engineering schools, including SJSU, introduce design at the freshman level to excite students about engineering. Freshman design also helps students put the entire curriculum into perspective, by viewing each subject as a necessary tool in the design process. Design is also dispersed in a variety of junior- and senior-level courses in the form of mini design projects, and is finally experienced in a more realistic setting in a two-semester senior design capstone experience.

The paper first attempts to provide a comprehensive definition of design skills. Subsequently, it presents a model for curriculum design that addresses these skills. Lastly, it presents ideas for assessing student competence in design.

What makes teaching engineering design particularly challenging is that the necessary skills and attributes are technical as well as non-technical, and come from the cognitive as well as the affective domains. For example, the ability to define “real world” problems in practical (engineering) terms, to investigate and evaluate prior solutions, and to develop constraints and criteria for evaluation are technical skills, while the ability to communicate the results of a design, to work in teams, and to decide on the best course of action when a decision has ethical implications are non-technical skills. Most technical skills are cognitive; however, there are several skills from the affective domain as well, such as the willingness to spend time reading, gathering information, and defining the problem, and the willingness to take risks and cope with ambiguity, to welcome change, and to manage stress. All these skills, technical and non-technical, cognitive and affective, are essential for engineers, yet each requires a different approach to teach and assess.

DEFINING ENGINEERING DESIGN SKILLS

What is Engineering?

To define the skills necessary for design engineers, we need to start with the definition of engineering itself. Nicolai (1988) defines engineering as the design of a commodity for the benefit of mankind. Obviously, the word design is key to the definition of engineering.


Engineers design things in their attempt to solve everyday problems and improve the quality of our lives. As Theodore von Kármán put it: "A scientist discovers that which exists. An engineer creates that which never was."

What is Design?

The next step in our search for design skills is to define design itself. "Design is a process through which one creates and transforms ideas and concepts into a product that satisfies certain requirements and constraints." Design requirements are usually technical and describe the performance expectations of the product, as specified by the customer or a perceived need. For example, a new passenger airplane may have mission requirements such as:

• A range of 3,000 km (i.e., the distance it will be able to fly without refueling).
• A payload of 100 passengers (i.e., the number of passengers, along with their luggage, it will be able to carry).
• A flight speed of 750 km/hr at a cruise altitude of 10 km.
• A takeoff field length of 1,500 m at standard sea level conditions.

The performance requirements specified by an airline (the customer), however, are not the only technical requirements that a passenger airplane must meet. To be certified, the plane must also satisfy additional airworthiness requirements. For example, FAR 25.121(b) refers to the ability of the plane to climb with one engine inoperative and requires that:

• In the takeoff configuration, with the landing gear fully retracted but without ground effect, the airplane must be able to maintain a steady climb gradient of at least 2.4% for two-engine airplanes, 2.7% for three-engine airplanes, and 3% for four-engine airplanes, at a climb speed that is also specified and known as V2 (Flightsim Aviation Zone, 2010).

Such airworthiness requirements often prove to be more challenging than the original performance requirements specified by the customer. Additional design requirements, not specified by the customer, are not unique to aerospace engineering. For example, civil and architectural engineers must satisfy building code requirements, usually set by cities or countries.

The definition of design also mentions constraints. Constraints are sometimes difficult to distinguish from requirements. They may be viewed as limitations with regard to materials, cost, environmental factors, etc. For example, the Hughes H-4 Hercules, the largest flying boat ever built, was made out of wood because of wartime restrictions on the use of aluminum (Wikipedia, 2011). Another example is the noise standards for transport aircraft (Flightsim Aviation Zone, 2010).

In summary, design engineers must satisfy technical requirements as specified by the customer, and possibly additional technical requirements related to safety. Furthermore, they must be concerned with the broader impact of their designs on individuals, society, and the environment. This has become increasingly important in our interconnected, globalized world. Pink (2005) adds yet another challenge to engineering design, one that relates to aesthetics. He argues that because of the "abundance" of products we have come to expect in the 21st century, the lower manufacturing cost in many countries, and the fact that many engineering tasks can now be automated, it is no longer enough to create a product that is reasonably priced and adequately functional. It must also be beautiful, unique, and meaningful. This requirement adds a new dimension to engineering design, a dimension that has much in common with the creative arts.
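To see how such a requirement constrains a design, consider a back-of-the-envelope calculation (a minimal sketch; the lift-to-drag ratio below is an assumed, illustrative value, not a figure from the regulation or from this paper). For a steady climb at small angles, and taking lift approximately equal to weight, the climb gradient is the excess thrust per unit weight:

\[ \gamma \approx \frac{T - D}{W} \quad \Rightarrow \quad \frac{T}{W} = \gamma + \frac{1}{L/D} \]

For a two-engine airplane with an assumed takeoff-configuration L/D of 12, meeting the 2.4% gradient with one engine inoperative requires T/W ≥ 0.024 + 1/12 ≈ 0.107 from the single operating engine, i.e., an installed all-engine thrust-to-weight ratio of roughly 2 × 0.107 ≈ 0.21. Numbers of this kind illustrate why the engine-out climb case, rather than the customer's mission requirements, often ends up sizing the engines.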

The Engineering Design Process

The next step in our search for design skills is to look at the engineering design process. Figure 1 is an attempt to illustrate this iterative process as it takes place in our brain (Nicolai, 1998).

Figure 1. The engineering design process: an iteration between creative synthesis and analytical evaluation (adapted from Nicolai, 1998)

Design begins with brainstorming of ideas. This takes place in the right (creative) part of the brain. There are virtually no rules in generating these ideas. In fact, it is desirable to come up with as many ideas as possible and to allow for "wild" ideas as well as conventional ones. While brainstorming, the right brain tends to be holistic, intuitive, and highly nonlinear (i.e., it jumps around). It sees things in their context as well as metaphorically, recognizes patterns, focuses on relationships between the various parts, and cares about aesthetics. Subsequently, each idea is evaluated in the left (analytical) part of the brain under very rigid rules. The left brain acts as a filter on the ideas generated, deciding which ones are viable under the current rules and which ones are not. The left brain tends to be logical, sequential, and computer-like. It sees things literally and focuses on categories. As Figure 1 illustrates, the design process involves iterative cycling through a sequence of creative, imaginative exploration, objective analytical evaluation, and finally decision making. It is in this context, also known as convergent–divergent thinking (Nicolai, 1998), that one should look for the skills and attributes necessary for a good design engineer.

But there is more to the iterative nature of engineering design than the interchange between the right and the left brain illustrated in Figure 1; iteration is also necessary because of the open-ended nature of design. It is simply not possible to follow a linear, step-by-step process to arrive at a single answer or a unique product that meets our need. First of all, design requires numerous assumptions, because there are always so many unknowns. Some of these assumptions may be proven wrong down the road, requiring us to go back, make changes, and repeat our calculations; hence the need for iteration. The non-unique nature of design becomes obvious when one looks at the multitude of products available in the market to address a given need.

Figure 2. The engineering design process: From identifying a need to production

Figure 2 illustrates the engineering design process. Engineering design begins with identifying a need. This need is articulated in terms of specific technical requirements that the product must meet. Following this design specification, engineers research existing solutions to the problem before proposing any new ones. Brainstorming is the most creative part of the design process. The members of the design team who brainstorm typically bring various perspectives and expertise to the problem. The goal is to create as many ideas as possible, including unusual and wild ones. To achieve this goal, participants are not allowed to criticize any ideas put forth. Rather, to create synergy, they are encouraged to build on others' ideas. After brainstorming, the group selects two or three of these ideas to move forward with evaluation. Each proposed concept is analyzed systematically using appropriate engineering science in an effort to prove its feasibility and functionality. Hopefully, at least one of these concepts will prove feasible through analysis. A model is then built for actual testing. The tests will hopefully validate one of the proposed concepts, at which point the design is finalized and goes into production.

Design also requires compromise, because requirements often conflict with each other. For example, to provide comfort for airplane passengers one needs a large cross-sectional area. But a large cross-sectional area results in greater drag and compromised fuel efficiency, especially at high speeds. A successful aircraft designer must decide where to draw the line between these two conflicting requirements.
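The fuel-efficiency side of this compromise can be made concrete with the Breguet range equation, a standard cruise-performance result (the numbers below are illustrative assumptions, not data from the paper):

\[ R = \frac{V}{c} \, \frac{L}{D} \, \ln\frac{W_i}{W_f} \]

where V is the cruise speed, c the thrust-specific fuel consumption, and W_i/W_f the ratio of initial to final weight. Range scales linearly with L/D, so if a roomier fuselage cross-section drops the cruise L/D from, say, 17 to 16, the airplane gives up about 6% of its range for the same fuel load. This is exactly the kind of trade the designer must weigh against passenger comfort.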

Skills and Attributes of Design Engineers

Clearly, engineering design is a very complex process and, as such, it requires several sets of skills that are very different from each other. These are briefly discussed in the following sub-sections.

Analytical Skills

The right-hand side of Figure 1 attests to the need for traditional engineering analytical skills: solid fundamentals in mathematics, physical science (e.g., physics, chemistry), and engineering science (e.g., fluid mechanics, thermodynamics, dynamics). The need for such skills has been articulated in the desired attributes of a global engineer (The Boeing Company & Rensselaer Polytechnic Institute, 1997), as well as in ABET EC 2000, Outcome 3a (Engineering Accreditation Commission):

"A good grasp of engineering science fundamentals, including: mechanics and dynamics, mathematics (including statistics), physical and life sciences, and information science/technology."

"An ability to apply knowledge of mathematics, science, and engineering."

Open-Ended Problem Solving Skills

Design skills build upon open-ended problem solving skills. Outcome 3e of ABET EC 2000 (Engineering Accreditation Commission) highlights the need for such skills when it states that engineering graduates must be able to identify and formulate engineering problems in addition to being able to solve them. Students who are open-ended problem solvers exhibit the attributes listed below (Woods, 1997). Mourtos, Okamoto, and Rhee (2004) classified these attributes according to the various levels of Bloom's taxonomy of educational objectives in the cognitive and the affective domains (Bloom, 1984; Bloom, Krathwohl, & Masia, 1984):

a. Are willing to spend time reading, gathering information, and defining the problem (Affective)
b. Use a process, as well as a variety of tactics and heuristics, to tackle problems (Cognitive)
c. Monitor their problem-solving process and reflect upon its effectiveness (Affective and Cognitive)
d. Emphasize accuracy rather than speed (Affective and Cognitive)
e. Write down ideas and create charts / figures while solving a problem (Affective and Cognitive)
f. Are organized and systematic (Affective)
g. Are flexible (keep options open, can view a situation from different perspectives / points of view) (Affective)
h. Draw on the pertinent subject knowledge and objectively and critically assess the quality, accuracy, and pertinence of that knowledge / data (Cognitive)
i. Are willing to take risks and cope with ambiguity, welcoming change and managing stress (Affective)
j. Use an overall approach that emphasizes fundamentals rather than trying to combine various memorized sample solutions (Cognitive)

It is interesting to note that the need for flexibility (attribute g) is also established as a desired attribute for a global engineer in a context much broader than engineering problem solving (The Boeing Company & Rensselaer Polytechnic Institute, 1997): “Flexibility: the ability and willingness to adapt to rapid and/or major change.” The observation that some of these attributes are associated with the affective domain suggests that engineering design is not all about cognitive skills; it is also about acquiring the right attitudes. Although it is not difficult to illustrate the need for such skills in class, their assessment is more challenging and requires special rubrics. Mourtos (2010) presents an example of a set of rubrics developed to assess open-ended problem solving skills.

A View for Total Engineering

Design engineers must be generalists and acquire a basic understanding of a variety of subjects, from within as well as outside their major – in fact, even from outside of engineering – to develop a view for total engineering. This need has been expressed in three desired attributes for a global engineer (The Boeing Company & Rensselaer Polytechnic Institute, 1997):

• A good understanding of the design and manufacturing process (i.e., understands engineering and industrial perspective)
• A multidisciplinary, systems perspective, along with a product focus
• An awareness of the boundaries of one's knowledge, along with an appreciation for other areas of knowledge and their interrelatedness with one's own expertise

For example, an aircraft designer must have a good understanding of the basic aeronautical engineering disciplines: aerodynamics, propulsion, structures and materials, stability and control, performance, and weight and balance. In addition, he/she must develop an understanding of how each part is manufactured and how its design and manufacturing affect the acquisition and operation cost of the airplane. The example illustrates the multidisciplinary nature of engineering design. Clearly, being an expert in one of the fields involved and inadequate in one or more of the rest will not work well for a design engineer. Furthermore, engineers must take into consideration a variety of constraints when they design a new product. Some of these constraints are technical; some are non-technical. This expectation is stated in Outcome 3c of ABET EC 2000 (Engineering Accreditation Commission): "Engineering graduates must have an ability to design a system, component, or process to meet desired needs within realistic constraints such as economic, environmental, social, political, ethical, health and safety, manufacturability, and sustainability." The importance of taking into consideration non-technical constraints (e.g., social, political, ethical, safety) is further reinforced in other ABET outcomes as well, where engineering graduates are expected to have:

"3f: an understanding of professional and ethical responsibility.

3h: the broad education necessary to understand the impact of engineering solutions in a global, economic, environmental, and societal context.

3j: a knowledge of contemporary issues."

Closely related expectations appear among the desired attributes for a global engineer (The Boeing Company & Rensselaer Polytechnic Institute, 1997):

• A basic understanding of the context in which engineering is practiced, including: customer and societal needs and concerns, economics and finance, the environment and its protection, and the history of technology and society
• High ethical standards (honesty, sense of personal and social responsibility, fairness, etc.)

In summary, the design engineer must develop an aptitude for systems thinking and maintain sight of the big picture, which is often influenced by technical as well as non-technical factors. Clearly, it is very difficult to quantify a set of specific skills to describe the ideal design engineer. Nevertheless, in an effort to facilitate the teaching and assessment of these design skills, the BSAE Program at SJSU adapted the following set of performance criteria. Aerospace engineering graduates must be able to:

a. Research, evaluate, and compare aerospace vehicles designed for similar missions.
b. Follow a prescribed process to develop the conceptual / preliminary design of an aerospace vehicle.
c. Develop economic, environmental, social, political, ethical, health and safety, manufacturability, and sustainability constraints, and ensure that the vehicle they design meets these constraints.
d. Select an appropriate configuration for an aerospace vehicle with a specified mission.
e. Develop and compare alternative configurations for an aerospace vehicle, considering trade-offs and appropriate figures of merit.
f. Apply aerospace engineering principles (e.g., aerodynamics, structures, flight mechanics, propulsion, stability and control) to design the various vehicle subsystems.
g. Develop final specifications for an aerospace vehicle.

Ability to Use Design Tools

Freehand Drawing and Visualization

Drawing is the ability to translate a mental image into a visually recognizable form. Eventually any design drawing is rendered as a Computer-Aided Drawing (CAD) with the help of appropriate software. However, CAD is not the best medium when a creative design engineer wants to convey an idea of "how things work" to nontechnical people. Freehand pictorial drawing is most easily and universally understood. Furthermore, a freehand drawing can be a very effective and quick way to communicate ideas in three dimensions when concepts evolve quickly, as is the case during the early stages of design (e.g., brainstorming), at which point it is not worth investing time and effort in a CAD. Leonardo da Vinci (1452–1519) was one of the earliest engineers who demonstrated mastery in freehand drawing, making it possible for us today to visualize how his inventions worked and appreciate his genius (Figure 3). Freehand drawing is a right-brain activity because it is free of technical symbols and is closely associated with our ability to visualize things in three dimensions, an indispensable design skill.

Figure 3. Design for flying by Leonardo da Vinci (The Drawings of Leonardo da Vinci)

Computer-Aided Drawing and Computer-Aided Design

Unlike freehand drawing with its artistic flavor, engineering drawing is a precise discipline based on the principles of orthographic projection. In contrast to freehand drawing, engineering drawing emphasizes accuracy, something that has been greatly enhanced by the use of modern computers and graphic capabilities. Today a CAD is much more than a computer-generated engineering drawing; it involves an extensive database detailing the attributes of an object and allows it to be rotated, sectioned, and viewed from any angle. This capability is indispensable in the design of complex engineering equipment, such as an airplane, because engineers can now superimpose the various subsystems and immediately see potential conflicts. CAD has led to Computer-Aided Manufacturing (CAM), where the machines that manufacture the various components receive their operating instructions directly from the database in the computer.

Kinematics

A design engineer needs skills in kinematics, since the various parts of an engineering product move, rotate, and may also expand / retract or fold. An understanding of kinematics (e.g., selecting the proper mechanism and visualizing its operation) allows the design engineer to evaluate what will work and what will not. For example, in the design of an airplane landing gear, the designer must be able to visualize how the gear will fold and retract into its proper space and make sure that it will not conflict with other components in the process. The skills described in this section fall under Outcome 3k of ABET EC 2000, which states that engineering graduates must have an ability to use the techniques, skills, and modern engineering tools necessary for engineering practice.

Interpersonal, Communication, and Team Skills

Interpersonal and Team Skills

Archimedes designed his screw pump alone (Wikipedia, 2007). This was not uncommon in the ancient world. Similarly, Leonardo da Vinci designed his engineering devices, such as the one shown in Figure 3, alone. Today, working alone to design an engineering product is, for the most part, a thing of the past unless, of course, the product is a very simple one. The complexity of modern engineering products requires engineers to work in teams; in fact, sometimes several teams must work together. For example, in the design of a new transport, it is typical to have a team of engineers for each of the disciplines mentioned above (aerodynamics, controls, manufacturing, etc.). These teams work closely together to meet the same set of mission and airworthiness requirements, while at the same time making sure there are no conflicts between the various airplane sub-systems. Hence, although earlier we expressed the need for design engineers to be generalists, so they can appreciate the multidisciplinary requirements that come into play in the design of a new product, it is not possible for an individual to have enough expertise in each and every one of the technical areas to adequately perform the detail design of all the subsystems, not to mention the analysis of the impact of a new product in a global, economic, environmental, and societal context.

Outcome 3d of ABET EC 2000 states that engineering graduates must have an ability to function on multidisciplinary teams. In today's multicultural world, this outcome also implies an ability to collaborate with people from different cultures, abilities, and backgrounds. This is further elaborated in the following four desired attributes for a global engineer (The Boeing Company & Rensselaer Polytechnic Institute, 1997):

• An awareness of and strong appreciation for other cultures and their diversity, their distinctiveness, and their inherent value.
• A strong commitment to team work, including extensive experience with and understanding of team dynamics.
• An ability to think both critically and creatively, in both independent and cooperative modes.
• An ability to impart knowledge to others.

The following performance criteria have been chosen to assess this outcome in the BSAE Program at SJSU. Students working in teams are expected to:

• Be committed to the team and the project; be dependable, faithful, and reliable.
• Attend all meetings, arrive on time or early, and come prepared and ready to work.
• Exhibit leadership by taking initiative, making suggestions, and providing focus.
• Be creative, bring energy and excitement to the team, and have a "can do" attitude; spark creativity in others.
• Gladly accept responsibility for work and get it done; exhibit a spirit of excellence.
• Demonstrate abilities the team needs and make the most of these abilities by giving fully to the team.
• Communicate clearly with team members when speaking and writing.
• Understand the direction of the team.
• Bring a positive attitude to the team, encourage others, seek consensus, and bring out the best in others.

Communication Skills

Design requires clear and effective communication not only between team members, but also between the team and third parties (management, customers, etc.). Communication usually takes two forms, oral and written, and can be informal, such as between team members, or formal, such as when the team presents information to third parties. All four types are crucial for the success of a project. Needless to say, good verbal communication requires not only the ability to express one's ideas clearly but also the ability to listen carefully and understand the ideas and concerns expressed by others. The need to communicate effectively is outlined in Outcome 3g of ABET EC 2000. In the BSAE Program at SJSU the following performance criteria were selected to express the skills embedded in this outcome. Ability to:

a. Produce well-organized reports, following guidelines.
b. Use clear, correct language and terminology while describing experiments, projects, or solutions to engineering problems.
c. Describe accurately in a few paragraphs a project / experiment performed, the procedure used, and the most important results (abstracts, summaries).
d. Use appropriate graphs and tables following published engineering standards to present results.

It is interesting to note that the desired attribute for a global engineer relating to communication skills includes listening as well as graphic skills as part of the list (The Boeing Company & Rensselaer Polytechnic Institute, 1997): "Good communication skills, including written, verbal, graphic, and listening." Although graphic skills were discussed earlier in the context of freehand drawing and CAD, the term graphic here also includes the ability to prepare engineering graphs that illustrate, for example, parametric studies pertinent to a particular design. One thing that becomes obvious in this discussion is that the skills and attributes necessary for competent engineering design are so integrated that in some cases it is not even possible to draw clear, distinctive lines between them.

CURRICULUM AND COURSE DESIGN FOR TEACHING ENGINEERING DESIGN SKILLS

Like any set of skills, design skills must be introduced early in the curriculum, practiced often, and culminate in a realistic design experience if students are to achieve the level of mastery prescribed in ABET EC 2000 and expected in industry. The following subsections describe how design skills are introduced at the freshman level, dispersed throughout the BSAE curriculum, and culminate in a senior design capstone sequence. The Project-Based Learning (PBL) pedagogical model is used in all the courses where design is taught, and students work in teams for all design projects. Non-traditional ways of assessing design skills are also discussed.

First-Year Design

At SJSU engineering design is first taught in our Introduction to Engineering course (E10). E10 is a one-semester, two-hour lecture / three-hour laboratory course for freshmen, required of all engineering majors. Engineering design is taught through hands-on projects (PBL) as well as through case studies in engineering failures, which also bring up the subject of engineering ethics. For each project, students work in teams to research, brainstorm, design, build, test, and finally demonstrate a device in class (Mourtos & Furman, 2002). Typically, students participate in two or three projects during the semester. This course design followed well-established research, which shows that first-year design courses help attract and retain engineering students (Ercolano, 1996). E10 students report significant gains in their understanding of design and ethics, design report writing, and briefing skills (Mourtos & Furman, 2002). They report slightly lower gains in open-ended problem solving skills, including estimation and mathematical modeling. On the other hand, they report low gains in team skills. This was due to the fact that team skills were not taught explicitly at the time of the assessment. Despite a significant amount of time spent working in teams, students needed more guidance and coaching on skills like conflict resolution, task delegation, and decision making. These skills are now taught more explicitly. In addition to student self-reporting, authentic assessment data from course instructors show that engineering freshmen perform fairly well in their design assignments.

Design Globally Dispersed

Teaching and Assessment of Open-Ended Problem Solving Skills

In the BSAE Program design is dispersed throughout the curriculum, so students have an opportunity to practice design in a variety of subjects. Student design practice begins with open-ended problems to help them develop the related skills and attributes described earlier. For example, to help students develop:

a. A habit of doing research before attempting to solve a problem: an extensive literature review is required for all open-ended problems and design projects.
b. Competency in the use of a process, as well as specific tactics and heuristics to solve a problem: a problem-solving methodology is taught, and students are required to use it in the solution of all open-ended problems.
c. An ability to monitor their progress following a problem-solving process: students write a reflection on the effectiveness of their problem-solving process and identify their strengths and weaknesses.
d. A value system in which accuracy is more important than speed: students are given sufficient time to tackle problems, whether in class (exams) or outside of class, and their grading depends heavily on the accuracy of their calculations.
e. A habit of writing down ideas and creating sketches, charts, and figures while solving a problem: students are graded not only on their final answer but also on how well they integrate such features in their solutions.
f. An organized and systematic way of approaching problems: students are expected to document in their solutions every step of the problem-solving methodology they are required to follow.
g. An open-mindedness and flexibility when solving problems: students are required to consider, analyze, discuss, and present multiple approaches and solutions to a problem.
h. A risk-taking attitude when solving problems: innovative approaches are encouraged; students are not penalized for presenting such solutions, even when the final outcome is not the best.
i. An ability to use an overall approach that emphasizes fundamentals rather than combining memorized solutions, as well as an ability to cope with ambiguity and manage stress: open-ended problems are practiced in all upper-division courses.

Design was originally introduced through projects in several junior-level aerospace engineering courses. For example, in aerodynamics (AE162), students designed an airfoil for an ultralight aircraft and a wing for a high subsonic transport, both of which had to meet very specific requirements. Similarly, in propulsion (AE167) students designed a compressor and a turbine, which they subsequently matched for placement in a jet engine with specific thrust requirements. In an effort to address the compartmentalization of traditional engineering curricula, this approach was modified in 2005. In each of the junior fall and spring semesters, students now define their own design project that involves applications from at least two courses taken concurrently in the particular semester (Mourtos, Papadopoulos, & Agrawal, 2006). For example, one project involved the design of a ramjet inlet and required integration of compressible flow (AE164) and propulsion principles (AE167). Another, more ambitious project involved the design of a flexible wing for high maneuverability and required integration of principles from aerospace structures (AE114), aerodynamics (AE162), flight mechanics (AE165), and computational fluid dynamics (AE169). This project-based integration of the curriculum offers students an opportunity to appreciate the integrative nature of aerospace engineering design on a smaller scale, before they delve into a much more demanding senior design experience.
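To give a flavor of the analysis such an integrated project demands (an illustrative example, not one taken from the student projects themselves): a ramjet inlet that decelerates a Mach 2 stream through a single normal shock loses total pressure according to the normal-shock relation

\[ \frac{p_{0_2}}{p_{0_1}} = \left[ \frac{(\gamma+1)M_1^2}{(\gamma-1)M_1^2 + 2} \right]^{\gamma/(\gamma-1)} \left[ \frac{\gamma+1}{2\gamma M_1^2 - (\gamma-1)} \right]^{1/(\gamma-1)} \]

which for M_1 = 2 and γ = 1.4 gives a recovery of only about 0.72. Because ramjet thrust falls rapidly with inlet total-pressure recovery, students quickly see why practical supersonic inlets use a series of weaker oblique shocks instead, which is precisely the kind of insight that ties the compressible flow course to the propulsion course.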

Senior Design Capstone Experience

In their senior year, aerospace engineering students may specialize in aircraft (AE171A&B) or spacecraft (AE172A&B) design. Both course sequences involve the conceptual and preliminary design of an aerospace vehicle. Depending on the project, the experience may also include the detail design and manufacturing of the vehicle. Although only one of these course sequences is required, a few students choose to take both in lieu of technical electives.

Teaching and Assessment of Team Skills

As anyone who has ever worked in a team knows, team skills are not acquired automatically simply by working in a team; they need to be taught explicitly, practiced regularly, and assessed periodically, just like any other set of skills. Although team skills are now taught in E10 and assessed in every course that involves a team project or experiment, it is in the senior design course sequence that these skills are formally taught and assessed. As the course meets once a week for two and a half hours, the first 15 to 30 minutes are dedicated to building an understanding of how effective teams work. At the beginning of the year, after teams are formed, students engage in various team-building activities. Lessons from these activities are discussed in class. Subsequently, in each class meeting students present and discuss one of the 17 laws of teamwork (Maxwell, 2001). Finally, at the end of each semester students submit a team member report card, in which they evaluate the performance of their teammates as well as their own, using the performance criteria for effective teamwork defined earlier, which are also shown in Table 1. These peer reviews are taken into consideration when assigning individual course grades.

Table 1. Team member report card


Assessment of Total Engineering Skills

Many student teams choose to participate in the SAE (Society of Automotive Engineers) Aero Design or the AIAA (American Institute of Aeronautics and Astronautics) Design/Build/Fly competitions. In addition to the conceptual and preliminary design, these teams carry out the detail design of their airplane, which they proceed to build and test. Clearly, these competitions give students an opportunity to go beyond a design on paper and experience challenges related to manufacturability and cost. Often, engineering professionals from the aerospace industry mentor students in their designs. Participation in design competitions offers unique learning experiences through interactions with students, faculty, and engineers from educational institutions and companies around the country (US) and the world; both the SAE and the AIAA competitions attract student teams from universities around the world. Furthermore, participation provides unique opportunities for authentic assessment of student design skills by engineering professionals. In addition to the engagement factor, which in itself enhances the students' learning experience in engineering design (Mourtos, 2003), the flight competition itself provides the ultimate test for their designs.

Assessment of Technical Communication Skills

Although students must pass a technical writing course (E100W) and have several design and lab reports evaluated in previous courses, it is again the senior design capstone experience that offers opportunities for more realistic assessment of technical communication skills. For example, students who participate in design competitions have their design reports and drawings evaluated by a team of professional engineers, from whom they receive a score sheet and written feedback. Teams also present their design orally and receive a separate evaluation of their presentation. This kind of feedback naturally adds to any comments given by the course instructor throughout the year. In fact, in many cases it carries a greater weight. In addition to participating in design competitions, students are encouraged to submit and present papers to conferences (e.g., Johnson et al., 2009; Casas et al., 2008). Whether a student conference or a professional conference, participation provides similar benefits in terms of evaluating student written and oral communication skills.

Safety, Ethics, and Liability Issues

Safety, ethics, and liability issues are addressed in the course through aerospace case studies involving accidents. Students research background information for each case, make a class presentation, and argue about the various issues in class. A written report is also required. Students in general engage in these discussions and perform fairly well in their written assignments, not only because safety, ethics, and liability provide an interesting dimension to aerospace vehicle design but also because these assignments are the only ones addressing ABET Outcome 3f in the BSAE Curriculum, and as such, they have been designated as "gateway" assignments. Hence, students must receive a score of 70% or better in these assignments to pass the course, regardless of their performance in the technical aspects of their design.

Economic, Environmental, Societal, and Global Impact

Students discuss in one of their reports the impact of their designs in an economic, environmental, societal, and global context. For example, a team that designed a solar-powered UAV performed a simple analysis of the environmental impact of their airplane by estimating the emissions from a small internal combustion engine with comparable power. They also discussed operating cost, taking into consideration the replacement cost of their expensive solar panels every time their UAV crashed. On the other hand, it is not always possible to find interesting and realistic social, political, and other types of constraints for all the airplanes that students choose to design. Nevertheless, it is important that students develop at least a basic understanding of such issues, as well as ways to properly research them before attempting to address them. To develop such an understanding of these issues as they relate to aircraft design, students perform an additional individual assignment by selecting and researching a topic of interest to them. For example, two very interesting topics selected by students were the impact of airplanes on cultural integration and the contribution of jet aircraft contrails to global warming. Students are required to find at least five references related to their topic, at least two of which must be technical journal articles, conference papers, or technical reports. For the rest of their references students may use newspaper or magazine articles and the World Wide Web. Students study these references and prepare a two-page paper summarizing the key points of their research and a ten-minute presentation for the class. In their presentation students must include two key questions related to their issue, as a way to facilitate class discussion.

Graphic Communication Skills

To introduce students to freehand drawing, a collaboration has been established with the SJSU School of Art and Design. A team of students from the graduate class Artists Teaching Art Seminar (Art 276) visits the aircraft design class to offer a three-hour workshop on freehand drawing, which includes contour drawing, gesture drawing, and perspective. Both groups of students have been very positive about their experience: the art students because they are given an opportunity to practice their teaching skills in a realistic setting, and the aircraft design students because they get an opportunity to express themselves creatively within the context of a very demanding engineering course. An example of a freehand drawing illustrating a possible configuration for a small solar-powered UAV is shown in Figure 4.

Figure 4. Example of a free-hand drawing in the early design stages

Engineering students tend to be very capable with computer programs, including those used in design. For example, a student produced an artist's concept of his proposed very large, luxury airship as a way of helping his audience visualize the level of comfort and luxury afforded by this kind of vehicle and to provide a contrast with the interior one finds in most airliners today (Figure 5).

Figure 5. Example of an artist's concept drawing for the interior of a very large, luxury airship

Naturally, three-view CAD drawings are expected from students in all final design reports. Students are introduced to CAD early in their curriculum with a required freshman-level course in Design & Graphics (ME20). In addition, Computer Aided Design (ME165) is a popular technical elective for many students.

REQUIRED SKILLS FOR FACULTY WHO TEACH ENGINEERING DESIGN

An additional challenge in teaching design is the competence level, as far as design skills are concerned, of the faculty who teach design courses.


A thorough analysis of this issue is beyond the scope of this article; however, it is worth mentioning two very distinct reasons that contribute to this challenge:

a. Successful completion of a Ph.D. degree, required for a faculty position at most engineering schools, entails primarily the development of analytical (left-brain) research skills. On the other hand, as we have seen, design requires both analytical and creative skills.
b. To earn tenure and promotion in an academic setting, engineering faculty are required to perform research, publish in refereed journals, and seek external funding. To maximize their chances for success under this kind of pressure, engineering faculty continue the same line of research they did in graduate school. After all, the venues available for publishing design work or seeking funding to do such work are limited compared with traditional areas of engineering research.

Hence, faculty members who are asked to teach a design course often find themselves unprepared. One way to address this deficiency is to require engineering faculty to undergo some training in engineering design before teaching a design course. There are many workshops on design for faculty members as well as for engineers who work in industry, sponsored by professional societies, universities, and engineering companies. Professional societies also offer summer fellowships for engineering faculty willing to spend a summer in industry working alongside design engineers. Another way to address this issue is to hire adjunct faculty with current design experience from industry to teach design courses. This solution, however, poses its own problems:

a. While some engineering schools are strategically located in areas where adjunct faculty with design experience are available, not every engineering school is blessed with proximity to engineering companies that may provide such faculty. This issue can be addressed in creative ways. For example, to accommodate an adjunct faculty member who teaches a design course at SJSU, a blended course has been scheduled: traditional (face-to-face) and online. The instructor flies in from another state every other week and spends three hours with the students. In between, the course is conducted online using appropriate software.
b. Teaching any subject, including design, requires not only expertise in the subject matter but also appropriate pedagogical knowledge (Mourtos, 2007). Unfortunately, most engineering faculty do not possess such knowledge, as it is not a requirement in their job description. This is true for full-time as well as part-time faculty. Our experience at SJSU has shown that both full-time and adjunct faculty have opportunities to develop pedagogical knowledge through experience and reflection by teaching a variety of courses over time, as well as through optional pedagogical training available at most universities. As a result, some – certainly not all – of the faculty do develop appropriate pedagogical content knowledge over time and become effective teachers.

CONCLUSION

An attempt has been made to provide a comprehensive list of skills, technical and non-technical, for design engineers. These skills include analytical skills, open-ended problem solving, a view for total engineering, interpersonal and team skills, and communication skills, as well as fluency with the modern tools and techniques used in engineering design. In addition to these skills, design engineers must develop certain attributes, such as curiosity to learn new things and explore new ideas, self-confidence in making design decisions, a willingness to take risks by trying new concepts, out-of-the-box thinking, and persistence to keep trying when things don't work. The paper presented course and curriculum design from the BSAE Program at SJSU that addresses these skills and attributes, and touched briefly on the challenge of engineering faculty competence in design skills and pedagogy. Some of the elements in this curriculum were introduced several years ago and have been assessed extensively; the results indicate that students indeed acquire an adequate level of competence in some of these skills. Other elements, such as the teaching of freehand drawing through the collaboration with the School of Art and Design, were introduced only recently and have not yet been assessed. In any case, the attributes of a design engineer, as described above, are difficult to measure and will require the development of special rubrics.

REFERENCES

Bloom, B. S. (1984). Taxonomy of educational objectives; Handbook 1: Cognitive domain. Reading, MA: Addison-Wesley.

Bloom, B. S., Krathwohl, D. R., & Masia, B. B. (1984). Taxonomy of educational objectives; Handbook 2: Affective domain. Reading, MA: Addison-Wesley.

Casas, L. E., Hall, J. M., Montgomery, S. A., Patel, H. G., Samra, S. S., Si Tou, J., et al. (2008). Preliminary design and CFD analysis of a fire surveillance unmanned aerial vehicle. In Proceedings of the Thermal-Fluids Analysis Workshop.

Curry, D. T. (1991). Engineering schools under fire. Machine Design, 63(10), 50.

Dym, C. L., Agogino, A. M., Eris, O., Frey, D. D., & Leifer, L. J. (2005). Engineering design thinking, teaching, and learning. Journal of Engineering Education, 94(1), 103–120.

Engineering Accreditation Commission & Accreditation Board for Engineering and Technology. (2009). Criteria for accrediting engineering programs, effective for evaluations during the 2010-2011 cycle. Retrieved August 20, 2010, from http://www.abet.org/forms.shtml

Ercolano, V. (1996). Freshmen: These first-year design courses help attract and retain engineering students. ASEE Prism, 21-25.

Flightsim Aviation Zone. (2010). Federal Aviation Regulations, Part 25 – Airworthiness Standards, Transport Category Airplanes. Retrieved April 18, 2011, from http://www.flightsimaviation.com/data/FARS/part_25.html

Flightsim Aviation Zone. (2010). Federal Aviation Regulations, Part 36 – Noise Standards. Retrieved April 30, 2011, from http://www.flightsimaviation.com/data/FARS/part_36.html

Johnson, K. T., Sullivan, M. R., Sutton, J. E., & Mourtos, N. J. (2009). Design of a skydiving glider. In Proceedings of the Aerospace Engineering Systems Workshop.

Maxwell, J. C. (2001). The 17 indisputable laws of teamwork: Embrace them and empower your team. Nashville, TN: Thomas Nelson.

Mourtos, N. J. (2003). From learning to talk to learning engineering: Drawing connections across the disciplines. World Transactions on Engineering & Technology Education, 2(2), 195–204.

Mourtos, N. J. (2007). Course design: A 21st century challenge (pp. 1–4). San Jose, CA: Center for Faculty Development and Support, San Jose State University.

Mourtos, N. J. (2010). Challenges students face when solving open-ended problems. International Journal of Engineering Education, 26(4).

Mourtos, N. J., DeJong-Okamoto, N., & Rhee, J. (2004). Open-ended problem-solving skills in thermal-fluids engineering. Global Journal of Engineering Education, 8(2), 189–199.

Mourtos, N. J., & Furman, B. J. (2002). Assessing the effectiveness of an introductory engineering course for freshmen. In Proceedings of the 32nd IEEE/ASEE Frontiers in Education Conference.

Mourtos, N. J., Papadopoulos, P., & Agrawal, P. (2006). A flexible, problem-based, integrated aerospace engineering curriculum. In Proceedings of the 36th IEEE/ASEE Frontiers in Education Conference.

Nicolai, L., & Pinson, J. (1988). Aircraft Design Short Course. Dayton, OH: Bergamo Center.

Nicolai, L. M. (1998). Viewpoint: An industry view of engineering design education. International Journal of Engineering Education, 14(1), 7–13.

Petroski, H. (2000). Back to the future. ASEE Prism, 31-32.

Pink, D. H. (2005). A whole new mind: Why the right-brainers will rule the future. New York, NY: Riverhead Books.

Reuteler, D. (2010). The drawings of Leonardo da Vinci. Retrieved August 20, 2010, from http://www.drawingsofleonardo.org/

Seely, B. E. (1999). The other re-engineering of engineering education, 1900-1965. Journal of Engineering Education, 285-294.

The Boeing Company & Rensselaer Polytechnic Institute. (1997). A manifesto for global engineering education: Summary report of the Engineering Futures Conference. Seattle, WA.

Wikipedia. (2007). Archimedes' screw. Retrieved August 18, 2010, from http://en.wikipedia.org/wiki/Archimedes%27_screw

Wikipedia. (2011). Hughes H-4 Hercules. Retrieved April 18, 2011, from http://en.wikipedia.org/wiki/Hughes_H-4_Hercules

Woods, D. R., Hrymak, A. N., Marshall, R. R., Wood, P. E., Crowe, C. M., & Hoffman, T. W. (1997). Developing problem-solving skills: The McMaster problem-solving program. Journal of Engineering Education, 86(2), 75–91.

This work was previously published in the International Journal of Quality Assurance in Engineering and Technology Education (IJQAETE), Volume 2, Issue 1, edited by Arun Patil, pp. 14-30, copyright 2012 by IGI Publishing (an imprint of IGI Global).


Chapter 2

Why Get Your Engineering Programme Accredited?

Peter Goodhew
University of Liverpool, UK

ABSTRACT

In many countries engineering degree programmes can be submitted for accreditation by a professional body, and/or graduate engineers can be certified or registered. Where this is available, most academic institutions feel that they must offer accredited engineering programmes. The author suggests that these processes are at best ineffective (they do not achieve their aims) and at worst destructive of creativity, innovation, and confidence in the academic community. The author argues that such processes (including any internal certification within the Conceive-Design-Implement-Operate, i.e., CDIO, Initiative) should be abandoned completely. The author proposes alternative ways of maintaining the quality of engineering design and manufacture, which place the responsibility where it properly lies – with the manufacturer or contractor. This is a polemic piece, not a referenced review of accreditation.

INTRODUCTION

In many countries undergraduate engineering programmes can be submitted to a national body for accreditation. Graduates from accredited programmes are eligible, often with an additional requirement for relevant work experience, for registration as a professional engineer. In the UK this accreditation is overseen by the Engineering Council via UK-SPEC and opens the way to C.Eng, I.Eng or Eng Tech qualifications. In the USA, ABET serves a similar function, while in Australia the appropriate body is Engineers Australia. In all cases the programme, its students, and sometimes its graduates are scrutinised by a committee of professional engineers before accreditation is awarded for a fixed period, such as five years. The accreditation process involves substantial paperwork and usually a one- or two-day visitation, so it is quite costly both for the educational institution and for the professional body. I argue in this article that this considerable effort does not represent good value for money and in some cases may have a negative effect on the quality of engineering education.



THE CASE AGAINST ACCREDITATION

Did the accreditation of professional engineering programmes prevent the disastrous crash of the Airbus A330, flight AF 447, in June 2009? Equally, is it responsible for the fact that the Eiffel tower has remained standing for 120 years? Or that my iPhone is so brilliant? No, no and no. So what is accreditation supposed to be for? At the highest level I presume that the intention is to ensure and enhance the quality and safety of engineered products throughout the world. At a more mundane (and self-interested) national level it might be intended to enable the world-wide transferability, and thus profitability, of a nation's engineering industry by ensuring the international credibility and employability of its engineers. These seem to be laudable objectives, but delivery of them is several steps away from the accreditation of university programmes. The logic is presumably that the employers of professional engineers must have confidence, via external testimony, in their skills and their fitness to practice. This confidence is engendered by their status as professional (chartered in UK parlance, registered in other jurisdictions) engineers, part of the qualification for which is that, at some time in the past, they graduated from an accredited degree programme. These engineers also have to demonstrate some appropriate experience in employment and membership of a professional body.

I find the whole system of accreditation unsatisfactory in two ways: it does not deliver the intended outcome (and so is ineffectual) and, additionally, it can damage our education system and thus our students and graduates.

First, the charge that it is ineffectual: engineered products are conceived, designed, made and operated (CDIO-ed) by engineers employed by large or small companies. Some, but certainly not all, of these engineers may be chartered. They will usually have earned their chartered status by virtue of the work undertaken in their first few years of employment, backed up by the degree they were awarded several years ago. Since receiving their chartered status they will have been encouraged to undertake continuous professional development, but this will not have been checked. A fifty-year-old chartered engineer is thus operating on the basis of a validation process carried out twenty years ago and a degree awarded about 25 to 30 years ago. The accreditation of this degree, so long ago, has almost no relevance for the engineering practices in use today. Indeed, if the degree was typical of those awarded 25 years ago, it will have contained a significant amount of engineering science and very few tests of engineering aptitude or attitude (which is of course why we have the CDIO movement). The fitness to practice of an individual engineer will in reality depend on what they have done, seen and learned during their working life, which is almost independent of the content of their first degree. Indeed, the technical content of a degree in one engineering discipline may have almost no overlap with the content of another engineering discipline, so it is hard to argue that subject content has anything to do with being, or thinking like, an engineer.

Furthermore, an engineer employed today may be working in an area unrelated to their original area of study. This is very likely for bioengineers, nanoengineers, environmental engineers, nuclear engineers and others working in interdisciplinary areas. Their original degree would either have been un-accredited or the accreditation would relate to a different disciplinary area. How can this in any way validate or assure the quality of their current work?

A third issue is the effectiveness of the quality assurance provided by chartered status. I have already asserted that there are almost no checks on the continued professional development of chartered engineers, but equally there are almost no cases of the de-registration of rogue chartered engineers (and even if there were, they would certainly – like doctors – be de-registered after they had committed a grave misjudgment or offence, not before).

So the accreditation of programmes is certainly ineffectual, but it is also damaging to the education process. University departments of engineering spend a great deal of time preparing for accreditation visits and tuning their degree programmes to fit the perceived requirements of their professional bodies. They do this not to improve their programmes (most programme leaders do not believe that the comments of accreditors will achieve this) but because of the fear that they will no longer be able to compete in the marketplace for students if they are not accredited. This fear is probably misplaced, but no department has the courage to put it to the test. Accreditation panels almost always feel that they should make some critical (framed as helpful) comments, but these usually reflect the prejudices of individual panel members, who are rarely experts in higher education and are frequently elderly and tending to be out of date. (I have resolved never to accept another invitation to sit on an accreditation panel now I have reached 65.) The damage to the system is that the threat of accreditation makes our engineering departments more conservative and less willing to change or innovate, as well as taking time and money which would be better spent on the education of their students. It also reinforces (unhelpfully) the audit culture which has over-run our universities in the last twenty years (at least in the UK).

It would be unreasonable to criticise the existing system of accreditation without making some attempt to suggest what might replace it to provide the assurance of quality demanded by society. My suggestion is that the responsibility for the safety and quality of products (from multi-billion tunnels to five-penny toys) should remain where it legally is – with the manufacturer or major contractor. These businesses should assure themselves that their workers are appropriately skilled and work to appropriate safety and ethical standards. To achieve this they might need to strengthen their recruitment procedures to include a real assessment of candidates' current abilities and skill sets. They would also want, as many do, to ensure periodically that their employees are up to date. They might wish to buy in the necessary training expertise, perhaps even from a local university, but they will not be much helped by a past accreditation. The proof of the quality of training, and of initial education, will be demonstrated by the performance of the employee – supervised and checked by experienced colleagues – not by their possession of a yellowing piece of paper.

I notice that I have not mentioned professional bodies. What might their role be? Certainly not as accreditors, but perhaps as honest brokers between employers and trainers and educators, or as forums for discussion (but not regulation) of best practice. In which case perhaps there should be an upper age limit for service on any committee or as an officer – shall we say 50 – and those in their dotage (like me) should only speak when asked.

CONCLUSION

The arguments I have advanced here also apply to the certification of undergraduate programmes as CDIO-compliant. Such a scheme would cost effort (and almost certainly money) to implement; it would cost even more to police (so this would be unlikely to happen); and it would still offer no assurance of the quality of an engineering graduate. A further, particular argument which applies to CDIO members is that (unlike many other engineering teaching departments) they have already shown their commitment to improving engineering education and are thus the programmes least likely to need the additional discipline offered by a certification process. So I strongly suggest that we do not bother.

This work was previously published in the International Journal of Quality Assurance in Engineering and Technology Education (IJQAETE), Volume 2, Issue 2, edited by Arun Patil, pp. 93-95, copyright 2012 by IGI Publishing (an imprint of IGI Global).


Chapter 3

Quality and Environmental Management Systems in the Fashion Supply Chain

Chris K. Y. Lo
The Hong Kong Polytechnic University, Hong Kong

DOI: 10.4018/978-1-4666-1945-6.ch003

ABSTRACT

Consumers and stakeholders have rising concerns over product quality and environmental issues; therefore, quality and environmental management have become important topics for today's fashion product manufacturers. This chapter presents empirical evidence on the adoption of quality management systems (QMS) and environmental management systems (EMS) and their impact on fashion and textiles related firms' supply chain efficiency. Although both management systems are commonly adopted in the manufacturing industries and have become a passport to business, their actual impacts specifically on the fashion supply chain have not been explored. By investigating the adoption of ISO 9000 (a quality management system) and ISO 14000 (an environmental management system) in U.S. fashion and textiles firms, we estimate their impact on manufacturers' supply chain performance. Based on 284 publicly listed fashion and textiles manufacturing firms in the U.S., we find that the fashion and textiles firms' operating cycle time shortened by 15.12 days over a five-year period. In the cross-sectional analysis, the results show that early adopters of ISO 9000 and high-tech textiles related firms obtained more supply chain benefits. We find only mixed results for the impact of ISO 14000 on supply chain performance.

BACKGROUND

The quality of textiles products at each stage in the fashion supply chain is essential for the success of a fashion product. The quality level delivered to the final customer is the result of the quality management practices of each link in the fashion supply chain; thus, each actor is responsible for its own quality issues (Romano & Vinelli, 2001). This is because the quality of the final product that reaches the customers is clearly the result of a chain of successive, inter-linked phases: spinning, weaving, apparel, and distribution. Quality management in the supply chain is thus particularly



relevant in the fashion and textiles industries (Romano & Vinelli, 2001). Quality management is defined as an integrated approach to achieving and sustaining high quality output, focusing on the maintenance and continuous improvement of processes and on defect prevention at all levels and in all functions of the organization, in order to meet or exceed customer expectations (Flynn, Schroeder, & Sakakibara, 1994). Customer expectations of the quality of a product, however, are not limited to its physical attributes and workmanship. According to ISO 9000, quality is defined as customer expectations relative to actual performance (ISO, 2004). Consumers' expectations of fashion products nowadays also include environmental attributes, for instance, the use of sustainable materials and the control of environmental impacts during the manufacturing processes. Therefore, both quality and environmental management have become important focuses for today's fashion and textiles manufacturers. International buyers of major brands often use quality management systems (QMS) and environmental management systems (EMS) as a major tool to select capable fashion and textiles suppliers (Boiral, 2003; Boiral & Sala, 1998), to ensure that their products and raw materials can meet customers' expectations on quality and environmental aspects. To respond to the call for management systems in various industries, the International Organization for Standardization (ISO) developed ISO 9000 in 1987 and ISO 14000 in 1996, which are generic QMS and EMS for worldwide application. The number of ISO 9000 certified firms has been increasing persistently since its introduction some 20 years ago. According to recent statistics (ISO, 2009), almost one million firms or business divisions in 175 countries have adopted ISO 9000. In the past five years, almost 800,000 firms or business units have adopted ISO 9000, representing an increase of almost 570%. ISO 14000 has been adopted by 188,815 firms or business divisions in 155 countries (ISO, 2009). From 2006 to 2008, almost 60,000 firms or busi-


ness units have adopted ISO 14000, representing an increase of about 47% (ISO, 2009). Multinational enterprises (MNEs) with operations in more than one country are widely recognized as key agents in the diffusion of ISO certifications across national borders. The diffusion of ISO 9000 in the fashion and textiles industries is particularly pronounced. In the early 1990s, the European Committee for Standardization (Comité Européen de Normalisation, CEN) developed importing regulations for use by the European Union (EU) countries. CEN requires manufacturing firms that import products into the European market to comply with the ISO 9000 standard. Imports of fashion and textiles products to EU countries fall under this regulation. The requirement of ISO 9000 was then followed by major MNEs, which use ISO-based criteria to certify their own suppliers and have developed their internal quality management systems according to the ISO guidelines (Guler, Guillen, & Macpherson, 2002). Many suppliers to MNEs subsequently required their upstream suppliers or business partners to be ISO certified, leading to the widespread diffusion of the standard in global supply chains. ISO 14000 follows the global diffusion pattern of ISO 9000, and it has become the most widely adopted EMS in the world (Corbett & Kirsch, 2001). It is a set of management processes and procedures requiring firms to identify, measure, and control their environmental impacts (Bansal & Hunter, 2003). With the aim of improving the environmental performance of a firm, compliance with the standard is audited and certified by an independent, third-party certification body (Jiang & Bansal, 2003). The initial version of ISO 14000 was a consolidation of various elements of BS 7750, a British environmental management standard, and the European Eco-Management and Audit Scheme (EMAS). Regulations of different countries towards the adoption of ISO 14000 also affect the diffusion of this standard. European countries and


some Asian countries, such as Japan (Bansal & Roth, 2000) and Singapore (Quazi, Khoo, Tan, & Wong, 2001), provide a favourable legislative environment for firms to adopt EMS, while the U.S. provides a comparatively less favourable one (Kollman & Prakash, 2001, 2002). The regulatory environment within a country affects the costs and perceived benefits of ISO 14000 adoption (Delmas, 2002). Jennings and Zandbergen (1995) maintained that the larger the pressure from the environment, the faster the diffusion of EMS. Due to globalization, environmental laws and regulations affect not just the firms within a particular country, but also firms that import and export goods to other countries. Christmann and Taylor (2001) found that if polluting firms in developing countries export a large proportion of their output to developed countries, they are more likely to adopt ISO 14000, even though they may be tempted by the lax environmental regulations of developing countries. Although the original objective of ISO 14000 is quite different from ISO 9000's, the two share the same management framework and both diffuse along the global supply chain. In the literature, some scholars have investigated the interactions between ISO 9000 and ISO 14000. Corbett and Kirsch (2001) found that ISO 9000 appears to be an important factor explaining the diffusion of ISO 14000, suggesting that the motivations behind the two, such as attracting potential customers, overlap significantly. Pan (2003) also found a strong linkage between the motivations for implementing ISO 9000 and ISO 14000 and the perceived benefits of adoption. Albuquerque, Bronnenberg, and Corbett (2007) further investigated the global diffusion patterns of ISO 9000 and ISO 14000, and found that both certifications diffuse across countries primarily by geography. In addition, the adoption experience of ISO 9000 can also help certified firms to implement ISO 14000 effectively, as the two systems are very similar in terms of implementation requirements (Poksinska, Dahlgaard, & Eklund, 2003).

LITERATURE REVIEW OF QMS AND EMS ADOPTION IN THE FASHION SUPPLY CHAIN

In the fashion and textiles literature, researchers have mainly focused on the usefulness of QMS and EMS in the supplier selection process. Buyers use management certifications (e.g., ISO 9000 and ISO 14000) as an instrument to determine whether a supplier is capable of following industry standards (Motwani, Youssef, Kathawala, & Futch, 1999; Teng & Jaramillo, 2005; Thaver & Wilcock, 2006). Teng and Jaramillo (2005) developed a model for the evaluation of suppliers in the fashion supply chain. They proposed five performance clusters to evaluate textiles supplier performance: delivery, flexibility, cost, reliability, and quality. They suggested using QMS certifications as evidence in the quality performance evaluation; suppliers receive a higher score in the evaluation model if they are ISO 9000 certified. There are only a few anecdotal cases that discuss the impact of QMS on the fashion supply chain. Sarkar (1998) found that a textiles mill in India obtained higher customer satisfaction through increased employee involvement and product quality improvement due to ISO 9000 adoption. Romano and Vinelli (2001) conducted a case study of the Marzotto Group, one of the most important Italian textiles and apparel manufacturers, examining its relationships with both upstream and downstream suppliers. They found that the quality management system is the "glue" that makes the supply network operate as a "whole system". Adanur and Allen (1995) conducted the first industry survey of ISO 9000 in the U.S. textiles industry, and found that certified firms experienced decreases in production time and product returns. The certified firms also reported fewer raw material rejections from ISO 9000



certified suppliers. A recent study shows that the adoption of ISO 9000 can improve the adopting firms' supply chain efficiency in the general manufacturing industries in the U.S. (Lo, Yeung, & Cheng, 2009), suggesting that the adoption of QMS might help certified firms to become more efficient nodes in their supply chains. The implementation of QMS is often undertaken under pressure from major customers; thus, small- and medium-sized textiles manufacturers have no choice but to comply with customer requirements on these certifications. They might pursue ISO 9000 without genuine knowledge of QMS. Allen and Oakland (1988, 1991a, 1991b), in three survey studies of 183 textiles firms, found that small textiles firms lack correct knowledge about QMS compared to large firms. They concluded that there is a distinct lack of good quality management practices within the British textiles industry. Fatima and Ahmed (2010) studied the ISO 9000 certified firms in Pakistan's bedwear textiles industry. They found that 60% of the firms offered poor training, 70% had poorly defined quality policies and objectives, and 70% had ineffective internal audits. Their findings show that, despite the high adoption rate of ISO 9000 in the industry, there is a lack of real implementation of QMS, and ISO 9000 is merely a passport into export markets. Fashion and textiles manufacturers in developing countries face self-regulation pressures to adopt QMS to gain legitimacy from the MNEs of developed countries (Christmann & Taylor, 2001). In the operations management (OM) literature, there are some criticisms of EMS's benefits for firm operations. The critics believe that environmental initiatives often transfer costs previously borne by society back to the firms (McGuire, Sundgren, & Schneeweis, 1988). The increased liability and environmental obligations could also have a negative impact on firms' operational performance and production flexibility (e.g., the limited choice of environmentally friendly dyes and fabrics, which are often more expensive). Therefore, textiles firms' operations managers


would hesitate to adopt EMS in their production. On the other hand, the advocates of EMS believe that ISO 14000 can improve environmental performance, which eventually improves firm performance through cost-saving and revenue-gain pathways (Klassen & McLaughlin, 1996). In fact, the perceived benefits of EMS in the fashion supply chain are quite direct. The dyeing process in textiles manufacturing can produce huge amounts of emissions that may lead to fines and high restoration costs. The impact of EMS is especially important for the wet processing of natural fibres, which is water, energy, and pollution intensive (Ren, 2000). Managing environmental impact through a systematic process helps firms to reduce water and energy consumption, as well as to avoid serious pollution incidents. Therefore, EMS adoption is particularly relevant to fashion and textiles related firms. However, compared to ISO 9000, the number of empirical works focusing on EMS adoption in the fashion supply chain is very limited. Only a few survey studies have discussed the sustainability of the fashion supply chain. For example, Fresner (1998) conducted a case study of an Austrian textiles mill and found that the adoption of ISO 9000 helped to reduce solid waste, and that the mill then pursued ISO 14000 for further improvements in productivity. Brito et al. (2008) found that the adoption of ISO 14000 in the European fashion and textiles industries would improve customer service and cost optimization for the adopting firms, and eventually improve the overall performance of the whole supply chain. However, both studies mentioned above did not provide objective evidence of such impact. To explore how QMS and EMS affect fashion and textiles firms' supply chain efficiency, more empirical evidence is needed.

HYPOTHESIS DEVELOPMENT

Supply chain efficiency can refer to lead-time performance, delivery promptness, and inventory


level (Kojima, Nakashima, & Ohno, 2008). If the members of a fashion supply chain can deliver their products to their downstream customers in a shorter period, the overall supply chain performance is improved. The well-known Supply Chain Operations Reference (SCOR) model suggests that a firm's operating cycle time is an important indicator of supply chain efficiency (Stewart, 1995, 1997). To measure supply chain efficiency for a particular firm, accounting-based performance indicators, namely 1) inventory days, 2) accounts receivable days, and 3) operating cycle time, can be used (Lo, et al., 2009). We discuss how ISO 9000 and ISO 14000 could affect the supply chain efficiency of adopting firms as follows. ISO 9000 requires the adopting firm to ensure that product quality is constantly measured and that appropriate corrective actions are taken whenever defects occur. These actions must be undertaken through a well-defined management system that continuously monitors potential quality problems. Therefore, the defect rate of fashion products should decrease, defects should be detected and corrected early in the production processes, and less scrap and rework need to be handled in the manufacturing processes (Naveh & Marcus, 2005). Therefore, the overall time required to turn raw materials into fashion and textiles products that fulfil customer orders should be shortened (i.e., lower inventory days). ISO 14000 adoption could also lead to lower inventory days, as it requires organizations to implement pollution prevention procedures to avoid environmental spills, crises, and liabilities that might incur huge restoration efforts (Bansal & Hunter, 2003; Brio, Fernandez, Junquera, & Vazquez, 2001). Therefore, ISO 14000 certified firms should face less frequent mandatory pollution restoration that could seriously disrupt their operations. ISO 14000 certified firms are also perceived as less risky than their non-certified counterparts, so less frequent environmental inspections from customers and regulators are required (Potoski

& Prakash, 2005), leading to a further shortening of inventory days. Therefore, we hypothesize that the time required to convert raw materials into textiles products (i.e., inventory days) is shorter after ISO 9000 or ISO 14000 implementation. As the initial objectives of ISO 9000 and ISO 14000 and their impacts on the supply chain differ, we develop two parallel hypotheses to estimate their impacts on supply chain performance independently.

H1a: The adoption of QMS leads to lower inventory days.
H1b: The adoption of EMS leads to lower inventory days.

The perceived benefits of ISO 9000 are not confined to improving product quality, but extend to enhancing customer service (Buttle, 1997). If ISO 9000 implementation can improve product quality and customer service, the customer order fulfilment time should be shorter. Moreover, if there were any quality problem with the fashion products, payment would be postponed as defective fashion products are returned for rework. Customers may not pay for the products until the reworked products are delivered and meet their quality requirements. As a result, the time between product delivery and customer payment (i.e., accounts receivable days) should be shorter for ISO 9000 certified firms with higher product and service quality (Lo, et al., 2009). By taking proactive measures to prevent environmental crises, an ISO 14000 certified firm is able to prevent the mistaken use of hazardous materials and violations of environmental regulatory requirements in the customer's market, which might result in a large-scale product recall. Once a product recall is needed, the accounts receivable days will be significantly affected (i.e., longer). Besides, as customers are more favourable toward environmentally friendly products and organizations, ISO 14000 adoption can establish a positive corporate image and trust from customers in the



long run (Bansal & Hunter, 2003). Therefore, such firms have the potential to bargain for more favourable payment terms from customers who trust and are loyal to them (i.e., customers are willing to pay earlier). This hypothesis can be tested by measuring the accounts receivable days.

H2a: The adoption of QMS leads to lower accounts receivable days.
H2b: The adoption of EMS leads to lower accounts receivable days.

In the fashion supply chain, the operating cycle consists of manufacturing time (the time required to turn raw materials into products), delivery time (the time required to deliver products from the manufacturer to customers), and payment fulfilment time (the time required for customers to pay for their accepted products). The total time incurred in the above processes is known as the operating cycle or "cash-to-cash cycle" (Eskew & Jensen, 1996). Therefore, we hypothesize that the operating cycle time should be shorter after the implementation of ISO 9000 and ISO 14000.

H3a: The adoption of QMS leads to a shorter operating cycle.
H3b: The adoption of EMS leads to a shorter operating cycle.

METHODOLOGY AND DATA COLLECTION

In this research, we focus on fashion and textiles related firms in the manufacturing industry (SIC codes 2000-3999). All firms in industry segments whose descriptions contain keywords such as "Fashion", "Textiles", "Dye", "Apparels", and "Fabrics" were included in our sample. To generate our sample, we identified ISO 9000 and ISO 14000 certified firms and their years of certification from the registration data of Quality Digest and Who's Registered, which are online registration databases. Since each firm


could have multiple plants/sites certified, we follow the practice of previous research (e.g., Corbett, Montes-Sancho, & Kirsch, 2005; Naveh & Marcus, 2005) by focusing on the first certification, because this is the only time period representing the change from non-certified to certified firm status; additional certifications after the first only represent continuous improvement. After compiling the data from the online databases and from Standard and Poor's financial database, COMPUSTAT, we found 284 ISO 9000 certified and 61 ISO 14000 certified publicly listed fashion or textiles manufacturing firms in the U.S. We define the year of formal ISO certification as the certification year (year 0). To measure the abnormal change in performance over a long-term period (the event window), we first define the base year, in which there is no prior impact from the preparation for ISO certification on the sample firms. To pass the certification audit, the average preparation time is 6-18 months prior to registration (Corbett, et al., 2005). Therefore, year -2 is taken as the base year. As we focus only on the first certification, we can assume that there is no impact from ISO implementation on any of the sample firms during the base year (year -2). ISO requires a third-party audit to verify that the standard has been effectively implemented, so a strong impact of ISO on performance should appear in the certification year, when the firm passed the audit. The performance of certified firms should also experience the impact in the years after ISO certification. Therefore, we set year -2 as the base year and measure the changes over the next five years (year -1, year 0, year 1, year 2, and year 3). To estimate the abnormal changes in supply chain performance within the event window, we compare the actual performance of the sample firms with their expected performance, which is based on the changes in performance of control firms (Barber & Lyon, 1996). The selection of control firms should be based on a combination of three


criteria: pre-event performance, industry, and firm size (Barber & Lyon, 1996). First, matching on pre-event performance can avoid the mean reversion problem of accounting data and control the impact of other factors on firms' performance (Barber & Lyon, 1996). Second, industry economic status can account for up to 20% of the changes in financial performance (McGahan & Porter, 1997). Moreover, environmental issues and the impact of EMS are industry-specific (Russo & Fouts, 1997), so we must control for industry type in matching sample and control firms. Third, previous studies have suggested that operating performance varies by size (e.g., Fama & French, 1995). We thus match sample and control pairs based on these three criteria. We generate the sample-control pairs and regard these pairs as the performance-industry-matched group. The first step is to match each sample firm to a portfolio of control firms based on at least a two-digit SIC code and 90%-110% of performance in year -2. In Step 2, if a sample firm has no control firm matched in Step 1, we use at least a one-digit SIC code and 90%-110% of performance to match control firms. If no control firm is matched in Step 2, we use only the 90%-110% of performance as the matching criterion (Step 3). In our sample, there are 248 ISO 9000 certified firms, of which 58 are also ISO 14000 certified. We discard 57 firms that did not have financial information (operating cycle) in year -2. Of the remaining 191 fashion and textiles related firms, 154 firms are matched in Step 1 (80.6%), five firms are matched in Step 2, and no firm is matched in Step 3. We thus matched 159 sample firms with at least one control firm in year -2. As presented in Table 1, the number of sample and control matched pairs gradually decreases from the ending period year 1 to year 3, due to a lack of financial information for either sample firms or control firms. For another matching group, i.e., the performance-industry-size-matched group, we further control

for firm size. We use a 50%-200% range of total assets to control for firm size. In other words, we match the sample and control firms with at least a two-digit SIC code, 50%-200% firm size (in terms of total assets), and 90%-110% of pre-certification performance in year -2 (Step 1). In the case where a sample firm cannot be matched to any control firm in Step 1, we relax the industry-matching criterion to at least a one-digit SIC code, while keeping the other matching criteria unchanged (Step 2). If no control firm is matched in Step 2, we use only the 50%-200% of firm size and 90%-110% of performance as the criteria (Step 3). The reason for taking the prescribed steps to create two control groups is to match most of the firms without compromising the tightness of the matches on performance (Hendricks & Singhal, 2008).
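As a rough illustration of this stepwise matching, the sketch below assumes a hypothetical pandas DataFrame of candidate firms with columns sic (SIC code as a string), perf_y2 (year -2 performance), and assets_y2 (year -2 total assets); the function and column names are ours, not the authors' code.

```python
# A minimal sketch of the three-step control-firm matching described above.
import pandas as pd

def match_controls(sample, candidates, size_match=False):
    """Return candidate control firms for one sample firm, relaxing the
    industry criterion step by step (two-digit SIC, one-digit SIC, none)."""
    perf_ok = candidates['perf_y2'].between(0.9 * sample['perf_y2'],
                                            1.1 * sample['perf_y2'])
    size_ok = (candidates['assets_y2'].between(0.5 * sample['assets_y2'],
                                               2.0 * sample['assets_y2'])
               if size_match else True)   # size filter only for the size-matched group
    for digits in (2, 1, 0):              # Step 1, Step 2, Step 3
        ind_ok = (candidates['sic'].str[:digits] == sample['sic'][:digits]
                  if digits else True)
        matched = candidates[perf_ok & size_ok & ind_ok]
        if len(matched):
            return matched                # stop at the first step with a match
    return matched                        # empty frame if nothing matched

# Hypothetical usage on made-up data:
firms = pd.DataFrame({'sic': ['2200', '2210', '2300', '3100'],
                      'perf_y2': [180.0, 175.0, 260.0, 185.0],
                      'assets_y2': [50.0, 60.0, 40.0, 55.0]})
controls = match_controls(firms.iloc[0], firms.drop(0))
```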

Measurements of Indicators

We measure the fashion and textiles firms' supply chain efficiency through inventory days, accounts receivable days, and operating cycle time. For the calculation of inventory days, we first divide the cost of goods sold by average inventory to obtain the inventory turnover ratio. We then divide 365 days by the inventory turnover ratio to obtain the inventory days; the unit of inventory days is the day (see Formula 1). For accounts receivable days, similarly, we first calculate the accounts receivable turnover by dividing credit sales by average accounts receivable. We then divide 365 days by the accounts receivable turnover to estimate the number of accounts receivable days (see Formula 2). The overall operating cycle is the sum of inventory days and accounts receivable days, and it represents the time required to turn raw materials into cash from customers. The corresponding formulas are as follows. Letting IT be the inventory turnover ratio, given by

IT = COGS / Avg.Inv.,



Table 1. Abnormal operating cycles of sample firms for the ISO 9000 adoption in the three-year (year -2 to year 1), four-year (year -2 to year 2), and five-year (year -2 to year 3) periods, based on performance-industry-matched and performance-industry-size-matched matching. Means and medians are abnormal changes in days; % negative is the percentage of sample firms with a negative (shortened) abnormal change.

Performance-industry-matched

                            year -2 to year 1            year -2 to year 2            year -2 to year 3
Performance measures        N    Mean    Median  % neg   N    Mean    Median  % neg   N    Mean    Median  % neg
Operating Cycle             149  -18.44  -13.24  63.00   142  -11.59  -14.13  63.00   139  -17.24  -15.12  67.00
Inventory Days              149  -14.00  -10.31  62.00   142  -11.00   -9.69  63.00   139  -14.48   -9.47  65.00
Accounts Receivable Days    149   -9.95   -1.05  51.00   142   -1.78   -1.56  54.00   139   -7.56   -4.32  61.00

Performance-industry-size-matched

                            year -2 to year 1            year -2 to year 2            year -2 to year 3
Performance measures        N    Mean    Median  % neg   N    Mean    Median  % neg   N    Mean    Median  % neg
Operating Cycle             144  -12.43   -5.51  57.00   137   -9.07   -3.92  59.00   131  -14.22   -7.18  60.00
Inventory Days              144   -9.57   -1.89  55.00   137  -11.28   -3.50  55.00   131  -12.54   -5.46  59.00
Accounts Receivable Days    144   -2.86    0.20  50.00   137    2.32    0.61  47.00   131   -2.41    0.38  48.00


where

COGS = cost of goods sold,
Avg.Inv. = average inventory balance,

we have

I = 365 / IT.   (1)

Similarly, letting ART be the accounts receivable turnover ratio, given by

ART = CS / Avg.AR,

where CS = credit sales and Avg.AR = average accounts receivable balance, we have

AR = 365 / ART.   (2)

OC = I + AR   (3)

where OC = operating cycle, I = number of inventory days, and AR = number of accounts receivable days.

We estimate abnormal supply chain efficiency (i.e., inventory days, accounts receivable days, and operating cycle time) within the event window as the difference between sample post-event performance (i.e., actual performance in year 1, year 2, and year 3) and expected performance (in year 1, year 2, and year 3). We estimate expected performance as the sum of sample pre-event performance (i.e., in year -2) and the change in the median performance of control firms in that period (i.e., from year -2 to year 1, year 2, and year 3). The formulas are as follows:

AP(t+j) = PS(t+j) − EP(t+j),
EP(t+j) = PS(t+i) + (PC(t+j) − PC(t+i)),

where

AP – abnormal performance,
EP – expected performance,
PS – performance of sample firms,
PC – median performance of control firms,
t – year of ISO 9000 / ISO 14000 certification,
i – starting year of comparison (i = -2),
j – ending year of comparison (j = 1, 2, or 3).

We obtain the performance data from the COMPUSTAT database. Since the first ISO 9000 (ISO 14000) certification was awarded in 1990 (1996) and we need performance data at least two years before certification (year -2) and three years after certification (year 3) for analysis, we obtain performance data covering the period 1988 to 2008. We conduct the Wilcoxon signed-rank (WSR) test to examine the median abnormal performance. We also carry out the sign test to determine whether the percentage of negative abnormal performance is significantly higher than 50%. To check for consistency, we further conduct the parametric t-test on the mean abnormal performance to ensure that our findings are robust. Table 1 and Table 2 present the results of ISO 9000 and ISO 14000 on supply chain efficiency, respectively.
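The day-count formulas and the abnormal-performance estimate above can be sketched in a few lines of Python; the following is our reconstruction under hypothetical inputs (the made-up list at the end merely illustrates the Wilcoxon signed-rank test), not the authors' code.

```python
# A minimal sketch of Formulas (1)-(3) and the abnormal-performance estimate.
from scipy.stats import wilcoxon

def inventory_days(cogs, avg_inventory):
    return 365.0 / (cogs / avg_inventory)            # I = 365 / IT, Formula (1)

def receivable_days(credit_sales, avg_receivables):
    return 365.0 / (credit_sales / avg_receivables)  # AR = 365 / ART, Formula (2)

def operating_cycle(i_days, ar_days):
    return i_days + ar_days                          # OC = I + AR, Formula (3)

def abnormal_performance(ps_tj, ps_ti, pc_tj, pc_ti):
    """AP(t+j) = PS(t+j) - EP(t+j), with EP(t+j) = PS(t+i) + (PC(t+j) - PC(t+i))."""
    expected = ps_ti + (pc_tj - pc_ti)
    return ps_tj - expected

# Median test on hypothetical abnormal operating-cycle changes (in days):
ap = [-18.4, -5.1, 3.2, -12.7, -9.9, -1.3, -20.0, 4.5]
stat, p_two_sided = wilcoxon(ap)   # halve the p-value for a one-tailed test
print(stat, p_two_sided)
```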

RESULTS

We begin our discussion by examining the abnormal supply chain efficiency of ISO 9000 certified textiles and textiles related firms. The cumula-



Table 2. Abnormal operating cycles of sample firms for the ISO 14000 adoption in the three-year (year -2 to year 1), four-year (year -2 to year 2), and five-year (year -2 to year 3) periods, based on performance-industry-matched and performance-industry-size-matched matching. Means and medians are abnormal changes in days; % negative is the percentage of sample firms with a negative (shortened) abnormal change.

Performance-industry-matched

                            year -2 to year 1            year -2 to year 2            year -2 to year 3
Performance measures        N    Mean    Median  % neg   N    Mean    Median  % neg   N    Mean    Median  % neg
Operating Cycle             45   -13.36  -11.15  64.00   41    0.30    -3.93  59.00   33   -5.49    -4.37  55.00
Inventory Days              45    -6.36   -4.86  64.00   41   -3.76    -6.96  61.00   33   -6.22    -4.17  58.00
Accounts Receivable Days    45   -15.45   -2.54  62.00   41   -2.84    -0.14  51.00   33   -3.90    -2.33  58.00

Performance-industry-size-matched

                            year -2 to year 1            year -2 to year 2            year -2 to year 3
Performance measures        N    Mean    Median  % neg   N    Mean    Median  % neg   N    Mean    Median  % neg
Operating Cycle             40    -3.21   -2.99  55.00   37    2.71    -4.66  59.00   32   -4.66    -3.03  53.00
Inventory Days              40    -4.78   -2.01  55.00   37   -2.26    -2.64  59.00   32   -5.08    -3.51  56.00
Accounts Receivable Days    40     1.63    3.74  40.00   37    5.03     3.99  43.00   32   -1.01    -1.92  56.00


tive results over the three-year to five-year windows (from year -2 to year 1, year 2, and year 3) provide a clearer picture of the long-term impact of ISO 9000 adoption on firms' supply chain efficiency in the fashion supply chain. For the performance-industry-matched group, from the year prior to ISO 9000 implementation (year -2) to the year after certification (year 1), the median (mean) cumulative change in operating cycle is -13.24 days (-18.44 days), which is significant at the 1% (1%) level. More than half of the sample firms (63%) experience a shorter operating cycle, which is significantly higher than 50% (p < 0.01). The median (mean) abnormal change in inventory days is -10.31 days (-14.00 days), which is significant at the 1% (1%) level, with over 62% of the sample firms shortening their inventory days. The median (mean) abnormal change in accounts receivable days is -1.05 days (-9.95 days), which is significant at the 5% (5%) level. More than half of the sample firms (51%) experience shorter accounts receivable days. For the performance-industry-matched group from year -2 to year 2, the median (mean) cumulative change in operating cycle is -14.13 days (-11.59 days), which is significant at the 1% (1%) level. About 63% of sample firms experience a shorter operating cycle, which is significantly higher than 50% (p < 0.01). The median (mean) abnormal change in inventory days is -9.69 days (-11.00 days), which is significant at the 1% (1%) level, with over 63% of the sample firms shortening their inventory days. The median (mean) abnormal change in accounts receivable days is -1.56 days (-1.78 days). More than half of the sample firms (54%) experience shorter accounts receivable days. For the performance-industry-matched group from year -2 to year 3, the median (mean) cumulative change in operating cycle is -15.12 days (-17.24 days), which is significant at the 1% (1%) level. About 67% of sample firms experience a shorter operating cycle, which is significantly higher than 50% (p < 0.01). The

median (mean) abnormal change in inventory days is -9.47 days (-14.48 days), which is significant at the 1% (1%) level, with over 65% of the sample firms shortening their inventory days. The median (mean) abnormal change in accounts receivable days is -4.32 days (-7.56 days). More than half of the sample firms (61%) experience shorter accounts receivable days. The results on abnormal supply chain performance are similar between the performance-industry-matched group and the performance-industry-size-matched group, except for the accounts receivable days. In both matching groups, the impacts of ISO 9000 on operating cycle and inventory days are statistically significant. Hypotheses H1a and H3a are supported. However, the abnormal impact of ISO 9000 on accounts receivable days is not statistically significant in the performance-industry-size-matched group. These mixed results suggest that accounts receivable days are sensitive to firm size; hypothesis H2a is only partially supported. The overall results are robust across the three-year, four-year, and five-year cumulative windows, revealing that the impact of ISO 9000 on supply chain performance is long lasting in the fashion and textiles industries. The shortening of the operating cycle is greatest in the period from year -2 to year 3, which again indicates that the impact of ISO 9000 is long lasting: the certified textiles related firms improve their supply chain efficiency continuously over the five-year period. Table 2 presents the results for the ISO 14000 matching groups. For the performance-industry-matched group from year -2 to year 1, the median (mean) cumulative change in operating cycle is -11.15 days (-13.36 days), which is significant at the 5% (5%) level. More than half of the sample firms (64%) experience a shorter operating cycle, which is significantly higher than 50% (p < 0.05). The median (mean) abnormal change in inventory days is -4.86 days (-6.36 days), which is significant at the 5% (5%) level, with over 64% of the sample firms shortening their inventory days.



The median (mean) abnormal change in accounts receivable days is -2.54 days (-15.45 days), which is significant at the 5% (5%) level. More than half of the sample firms (62%) experience shorter accounts receivable days. The abnormal median changes of all three supply chain performance indicators are negative in the four-year (from year -2 to year 2) and five-year (from year -2 to year 3) periods. However, they are not statistically significant in nearly all the statistical tests. These results suggest that the impact of ISO 14000 on fashion and textiles related firms' supply chain efficiency is temporary; the impact diminishes after year 1. We could not find a significant impact of ISO 14000 on the three indicators in the performance-industry-size-matched group. These mixed results suggest that hypotheses H1b, H2b, and H3b are not supported. The adoption of ISO 14000 has only a short-term impact on supply chain efficiency; it is not enduring. We discuss these findings in the discussion section.

MODERATING FACTORS OF ISO 9000 ADOPTION IN THE FASHION SUPPLY CHAIN

We now try to provide a deeper understanding of the association between ISO 9000 adoption and the improvement in operating cycle. We focus only on ISO 9000 because we found no significant change in operating cycle from ISO 14000 adoption in the previous section. We construct a regression model to study how firm-level characteristics affect the impact of ISO 9000 on abnormal operating cycle over the five-year event period (from year -2 to year 3). We use firm size and the original financial performance of the ISO 9000 certified firms as control factors. The abnormal performance in operating cycle of ISO 9000 certified firms could be more positive for larger firms: larger firms normally have more resources for hiring external consultants, providing additional training, and dedicating additional manpower to facilitate the implementation process of ISO 9000. We use firms' total assets to represent firm size. We also control for the financial performance of the firm, because firms that are more profitable are more efficient in operations. As ISO 9000 adoption calls for improvement in a firm's operational efficiency, firms that are more efficient may be able to implement the system more effectively than less efficient firms. Firms that are more profitable could also have more resources for ISO 9000 implementation. We estimate the financial performance of a firm as the firm's ROA in year 3. We include three independent variables in the regression model: labour intensity, R&D intensity, and time of ISO 9000 adoption. The arguments and predictions for the three indicators are as follows:

Independent variables:

1. Labour intensity: The abnormal changes in operating cycle could be more positive (a greater shortening of the operating cycle) in more labour-intensive firms, because such firms might have a greater need to standardize their operating procedures to ensure that production processes run smoothly. Labour intensity is calculated as the number of employees divided by the firm's total assets.
2. R&D intensity: Industries with higher R&D intensity normally face more rapid technological and product changes. There are thus more opportunities for new product development for high-technology textiles firms, which allows them to implement efficient process designs to a greater extent. Therefore, the positive impact of ISO 9000 on abnormal operating cycle could be higher in firms with higher levels of R&D intensity. We measure R&D intensity as R&D expenses over sales.
3. Time of ISO 9000 adoption: According to institutional theory, early adoptions of organizational innovations are motivated by


technical and economic needs (DiMaggio & Powell, 1983), while later adopters respond to the growing social legitimacy of the innovations as taken-for-granted organizational structure improvements (Westphal, Gulati, & Shortell, 1997). ISO 9000 is a well-recognized example of an institutionalized management practice (Guler, et al., 2002). Therefore, we predict that early adopters of ISO 9000 could show a larger improvement in operating cycle than later ones, as the former are motivated by the technical benefits of ISO 9000. Table 3 presents the cross-sectional regression results. We use the abnormal performance in operating cycle of the performance-industry-matched group for the analysis. For this model, the F-value is 2.89, which is significant at the 1% level. The adjusted R2 value is 8.0%, which is comparable to those observed in previous studies that used cross-sectional regression models to explain abnormal performance (e.g., Hendricks & Singhal, 2008). We find that firm size and firm labour intensity do not moderate the association between ISO 9000 adoption and abnormal operating cycle.
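A sketch of this cross-sectional model in Python with statsmodels, assuming a hypothetical DataFrame holding one row per ISO 9000 certified firm (all column names and the numbers are ours, for illustration only); this shows the specification behind Table 3, not the authors' estimation code.

```python
# OLS of abnormal operating-cycle change (year -2 to year 3) on firm-level
# characteristics, mirroring the specification described above.
import pandas as pd
import statsmodels.api as sm

df = pd.DataFrame({   # made-up firm-level data for illustration
    'abnormal_operating_cycle': [-20.1, -5.3, 3.8, -14.2, -9.0, -1.7, -11.4, 2.2],
    'firm_size':        [120.0, 45.0, 300.0, 80.0, 52.0, 210.0, 95.0, 150.0],
    'roa_y3':           [0.08, 0.02, 0.11, 0.05, -0.01, 0.07, 0.04, 0.09],
    'labour_intensity': [0.9, 1.4, 0.5, 1.1, 1.3, 0.6, 1.0, 0.8],
    'rnd_intensity':    [0.03, 0.00, 0.06, 0.01, 0.02, 0.05, 0.04, 0.02],
    'adoption_year':    [1992, 1999, 1994, 2001, 1996, 1993, 1998, 1995],
})

X = sm.add_constant(df[['firm_size', 'roa_y3', 'labour_intensity',
                        'rnd_intensity', 'adoption_year']])
y = df['abnormal_operating_cycle']
print(sm.OLS(y, X).fit().summary())   # coefficients and t-statistics as in Table 3
```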

Fashion and textiles firms' ROA is negatively related (p < 0.01) to abnormal operating cycle, meaning that textiles firms that are more profitable can further shorten their operating cycle time through ISO 9000 adoption. The coefficient of R&D intensity is negative and statistically significant at the 1% level (p < 0.01), meaning that high-technology fashion and textiles firms benefit more from the reduction in operating cycle time after ISO 9000 adoption. The coefficient of time of adoption is positive and statistically significant at the 5% level. Late adopters of ISO 9000 obtain less abnormal benefit from ISO 9000 adoption (i.e., they have a longer abnormal operating cycle compared to early adopters). This shows that the institutionalization of ISO 9000 observed in general manufacturing industries also appears in the fashion supply chain.

DISCUSSION AND SUMMARY

This study provides empirical evidence on the impact of QMS and EMS adoption on firms' supply chain performance in the fashion supply chain. Based on the 248 ISO 9000 and 61 ISO 14000 certified publicly listed fashion or textiles related

Table 3. Estimated coefficients (t-statistics in parentheses) from regression of abnormal operating cycle change from year -2 to year 3

Independent variables             Coefficient (t-statistic)
Intercept                         -5879 (-1.684)
Firm size (+)                     0.001 (0.663)
ROA (+)                           -108.259 (-2.911)***
Labour intensity                  200.802 (0.939)
R&D intensity (+)                 -61.501 (-2.463)***
Year of ISO 9000 adoption (+)     2.940 (1.683)**

Number of observations            109
Model F value                     2.89
R squared (%)                     12.2
Adjusted R squared (%)            8.0

Note: Significance levels (one-tailed tests) of independent variables: p < 0.1*; p < 0.05**; p < 0.01***.

c = 1, …, C :  ICM_ic = max over c* = 1,…,C, c* ≠ c of {ICM_ic*}  ∧  NTask_ic = max over c' = 1,…,C, c' ≠ c of {NTask_ic'}  ∧  Time_ic = max over c'' = 1,…,C, c'' ≠ c of {Time_ic''}   (18)


Similarity-Based Cluster Analysis for the Cell Formation Problem

A result of the assignment of parts to the manufacturing cells is the so-called block-diagonal incidence matrix, shown in Figure 5 as the output of the proposed procedure applied to the instance illustrated in Section 5.

Step 5: Plant Layout Configuration

This step deals with the determination of the location of each manufacturing resource (machines and human resources) in the production area. Layout decisions are significantly influenced by the configuration of cells and part families in CM systems, but they are omitted in this chapter. Nevertheless, a few significant performance measures on layout results are quantified, as explained below.

CLUSTERING PERFORMANCE EVALUATION

Sarker (2001) presents, discusses, and compares the most notable measurements of grouping efficiency in CM. The measurements adopted in the following illustrated experimental analysis are based on the following definitions:

• Block. This is a submatrix of the machine-part matrix composed of rows representing a part family and columns representing the related machine cell.
• Void. This is a "zero" element appearing in a diagonal block (see Figure 4).
• Exceptional element. This is a "one" appearing in off-diagonal blocks (see Figure 4). The exceptional element causes intercell movements.

A set of CM measurements of performance quantified in the proposed experimental analysis is now reported and discussed. The (high) and (low) labels refer to the expected values for best performing CF and CM.

Problem Density: PD

PD = number of "ones" in the incidence matrix / number of elements in the incidence matrix   (19)

Global inside cells density: ICD (high)

ICD = number of "ones" in diagonal blocks / number of elements in diagonal blocks   (20)

Figure 5. Block-diagonal matrix. Simple matching, farthest neighbour, and 75° percentile.



Exceptional elements: EE (low)

EE = number of exceptional elements in the off-diagonal blocks   (21)

Ratio of non-zero elements in cells: REC

REC = total number of "ones" / number of elements in diagonal blocks   (22)

Grouping Efficiency: η (Sarker 2001) (high)

It is a weighted average of two functions and it is defined as:

η = q·η₁ + (1 − q)·η₂   (23)

η₁ = e_d / Σ_{r=1..k} M_r N_r
η₂ = 1 − e_o / (mn − Σ_{r=1..k} M_r N_r)

where

e_d  number of 1s in the diagonal blocks
e_o  number of 1s in the off-diagonal blocks
k    number of diagonal blocks
M_r  number of machines in the rth cell
N_r  number of components in the rth part-family
q    weighting factor (0 ≤ q ≤ 1) that fixes the relative importance between voids and inter-cell movements. If q = 0.5 both get the same importance: this is the value adopted in the numerical example and in the experimental analysis illustrated in Sections 5 and 6.

Quality Index: QI (Seifoddini and Djassemi 1994, 1996) (high)

It is a measure of the independence of machine-component groups. High values of QI are expected in the presence of high independence. QI is defined as:

QI = 1 − ICW / PW   (24)

where ICW is the total intercellular workload and PW the total plant workload. ICW and PW can be defined as:

ICW = Σ_{c=1..C} Σ_{i=1..m} Y_ic ( Σ_{k=1..n} (1 − Z_kc) X_ik m_k T_ik )   (25)

PW = Σ_{i=1..m} Σ_{k=1..n} X_ik m_k T_ik   (26)

where n is the number of parts, k = 1,…,n the generic part, m the number of machines, and i = 1,…,m the generic machine. Following the notation previously introduced:

Y_ic = 1 if machine i is assigned to cell c, 0 otherwise
Z_kc = 1 if part k is assigned to cell c, 0 otherwise
X_ik = 1 if part k has an operation on machine i, 0 otherwise
m_k  volume of part k
T_ik processing time of part k on machine i
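Under the notation above, QI follows directly from the incidence matrix and the assignment vectors; the NumPy sketch below is our illustration (the array names are assumptions), not the authors' code.

```python
# A minimal sketch of Formulas (24)-(26): X is the (m x n) 0/1 machine-part
# matrix, mach_cell[i] / part_cell[k] give cell indices (NumPy integer arrays),
# vol[k] is the volume of part k and T is an (m x n) matrix of processing times.
import numpy as np

def quality_index(X, mach_cell, part_cell, vol, T):
    W = X * vol[None, :] * T                  # workload of part k on machine i
    PW = W.sum()                              # total plant workload, Formula (26)
    same_cell = (mach_cell[:, None] == part_cell[None, :])
    ICW = W[~same_cell].sum()                 # intercellular workload, Formula (25)
    return 1.0 - ICW / PW                     # QI, Formula (24)
```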


QI measures the intracellular movements, which are to be maximized by minimizing the intercellular ones. The authors now introduce a new grouping efficiency based on QI as previously defined.

Grouping Efficiency based on QI: η_QI (high)

η_QI = q·η₁ + (1 − q)·QI   (27)

The adopted value of the weighting factor q is:

q = Σ_{r=1..k} M_r N_r / (mn)   (28)

Grouping Efficacy: τ (Kumar and Chandrasekharan 1990) (high)

Grouping efficacy can be quantified by the application of the following equation:

τ = (e − e_o) / (e + e_v)   (29)

where

e    total number of "ones" in the matrix (i.e., the total number of operations)
e_o = EE, the number of exceptional elements (number of "ones" in the off-diagonal blocks)
e_v  number of voids (number of "zeros" in the diagonal blocks)

Grouping measure: η_G (Miltenburg and Zhang 1991) (high)

It gives higher values if both the number of voids and the number of exceptional elements are fewer, and it is defined as:

η_G = η_u − η_m   (30)

η_u = e₁ / (e₁ + e_v)
η_m = e_o / e

where

η_u  ratio of the number of 1s to the number of total elements in the diagonal blocks (this is the inside cell density, ICD)
η_m  ratio of exceptional elements to the total number of 1s in the matrix
e₁   number of 1s in the diagonal blocks

Group technology efficiency: GTE (Nair and Narendran 1998) (high)

It is defined as the ratio of the difference between the maximum number of inter-cell travels possible and the number of inter-cell travels actually required, to the maximum number of inter-cell travels possible:

GTE = (I − U) / I   (31)

I = Σ_{j=1..n} (r_j − 1)

U = Σ_{j=1..n} Σ_{s=1..r_j−1} x_js

where

I    maximum number of inter-cell travels
U    number of inter-cell movements required by the system
r_j  maximum number of operations for component j
x_js = 0 if operations s, s + 1 are performed in the same cell, 1 otherwise



Bond efficiency: BE (high)

This is an important index because it depends on both the within-cell compactness (through the ICD) and the minimization of inter-cell movements (through the GTE). It is defined as:

BE = q·ICD + (1 − q)·GTE   (32)

The adopted value of weight q in the experimental analysis is 0.5.
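Several of the block-diagonal measures above follow mechanically from the incidence matrix and the cell assignment. The sketch below computes PD, ICD, EE, and the grouping efficacy τ under the same assumed arrays as before; it is our illustration, not the authors' code.

```python
# PD (19), ICD (20), EE (21) and grouping efficacy tau (29) from an (m x n)
# 0/1 incidence matrix X and the machine/part cell assignment arrays.
import numpy as np

def block_measures(X, mach_cell, part_cell):
    diag = (mach_cell[:, None] == part_cell[None, :])  # True inside diagonal blocks
    e = X.sum()                                        # total number of "ones"
    e_o = X[~diag].sum()                               # exceptional elements, EE
    e_v = (diag & (X == 0)).sum()                      # voids in diagonal blocks
    pd_ = e / X.size                                   # problem density, PD
    icd = X[diag].sum() / diag.sum()                   # inside-cell density, ICD
    tau = (e - e_o) / (e + e_v)                        # grouping efficacy
    return pd_, icd, e_o, tau
```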

NUMERICAL EXAMPLE

This section presents a numerical example relating to a problem oriented instance presented by De Witte (1980), made of 19 parts and 12 machines. The manufacturing input data are reported in Table 2. Table 3 reports the 12x19 machine-part incidence matrix used for the evaluation of a general purpose similarity index.
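The incidence matrix of Table 3 can be derived mechanically from the work cycles of Table 2; the sketch below (our illustration, with only the first three parts spelled out for brevity) shows the construction.

```python
# Build the 12 x 19 machine-part incidence matrix from the Table 2 work cycles.
import numpy as np

work_cycle = {                      # part -> machines visited (from Table 2)
    'p1': ['m1', 'm4', 'm8', 'm9'],
    'p2': ['m1', 'm2', 'm6', 'm4', 'm8', 'm7'],
    'p3': ['m1', 'm2', 'm4', 'm7', 'm8', 'm9'],
    # ... remaining parts p4-p19 exactly as listed in Table 2
}

machines = [f'm{i}' for i in range(1, 13)]
parts = sorted(work_cycle, key=lambda p: int(p[1:]))

X = np.zeros((len(machines), len(parts)), dtype=int)
for k, p in enumerate(parts):
    for m in work_cycle[p]:
        X[machines.index(m), k] = 1   # X[i, k] = 1 if part k visits machine i
```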

A General Purpose Evaluation

This section presents the results obtained by the application of a general purpose similarity index in cluster analysis for the cell formation problem. Table 4 reports the result of the evaluation of the general purpose index known as Simple Matching (SI), defined in Table 1. Figure 4 shows the dendrogram generated by the application of the fn rule combined with the SI similarity coefficient. In particular, a sequence of numbers is explicitly reported in the figure for each node of the diagram. The generic node corresponds to a specific aggregation, ordered in agreement with the similarity metric and the adopted hierarchical rule. The list of nodes and aggregations, the related values of similarity, and the number of objects per group are also reported in Table 5. The obtained number of nodes is 11.

512

Now it is possible to define a partitioning of the available set of machines by the identification of a cut value, the so called “cutting threshold similarity value”. The adopted level of homogeneity within the generic cluster is the percentile-based threshold measure discussed in Section 3. Given the dendrogram in Figure 4 and assuming a threshold percentile cut value equal to 20°, the corresponding range of similarity is (0.585, 0.622) as demonstrated in Section 3. The obtained configuration of the manufacturing cells (nine different cells are obtained) is: Cell 1 (single machine): M12 Cell 2 (single machine): M11 Table 2. Manufacturing input data, De Witte (1980) Part

Volume

Work Cycle

Processing Time

p1

2

m1, m4, m8, m9

20, 15, 10, 10

p2

3

m1, m2, m6, m4, m8, m7

20, 20, 15, 15, 10, 25

p3

1

m1, m2, m4, m7, m8, m9

20, 20, 15, 25, 10, 15

p4

3

m1, m4, m7, m9

20, 15, 25, 15

p5

2

m1, m6, m10, m7, m9

20, 15, 20, 25, 15

p6

1

m6, m10, m7, m8, m9

15, 50, 25, 10, 15

p7

2

m6, m4, m8, m9

15, 15, 10, 15

p8

1

m3, m5, m2, m6, m4, m8, m9

30, 50, 20, 15, 15, 10, 15

p9

1

m3, m5, m6, m4, m8, m9

30, 50, 15, 15, 10, 15

p10

2

m3, m6, m4, m8

30, 15, 15, 10

p11

3

m6, m12

15, 20

p12

1

m11, m7, m12

40, 25, 20

p13

1

m11, m10, m7, m12

40, 50, 25, 20

p14

3

m11, m7, m10

40, 25m 50

p15

1

m11, m10

40, 50

p16

2

m11, m12

40, 20

p17

1

m11, m7, m12

40, 25m 20

p18

3

m6, m7, m10

15, 25, 50

p19

2

m10, m7

50, 25


Table 3. Machine-part incidence matrix (zeros shown as dots)

      p1  p2  p3  p4  p5  p6  p7  p8  p9  p10 p11 p12 p13 p14 p15 p16 p17 p18 p19
m1    1   1   1   1   1   .   .   .   .   .   .   .   .   .   .   .   .   .   .
m2    .   1   1   .   .   .   .   1   .   .   .   .   .   .   .   .   .   .   .
m3    .   .   .   .   .   .   .   1   1   1   .   .   .   .   .   .   .   .   .
m4    1   1   1   1   .   .   1   1   1   1   .   .   .   .   .   .   .   .   .
m5    .   .   .   .   .   .   .   1   1   .   .   .   .   .   .   .   .   .   .
m6    .   1   .   .   1   1   1   1   1   1   1   .   .   .   .   .   .   1   .
m7    .   1   1   1   1   1   .   .   .   .   .   1   1   1   .   .   1   1   1
m8    1   1   1   .   .   1   1   1   1   1   .   .   .   .   .   .   .   .   .
m9    1   .   1   1   1   1   1   1   1   .   .   .   .   .   .   .   .   .   .
m10   .   .   .   .   1   1   .   .   .   .   .   .   1   1   1   .   .   1   1
m11   .   .   .   .   .   .   .   .   .   .   .   1   1   1   1   1   1   .   .
m12   .   .   .   .   .   .   .   .   .   .   1   1   1   .   .   1   1   .   .

The obtained configuration of the manufacturing cells is:

Cell 1 (single machine): M12
Cell 2 (single machine): M11
Cell 3 (single machine): M10
Cell 4 (single machine): M7
Cell 5 (two machines): M3, M5
Cell 6 (single machine): M9
Cell 7 (two machines): M8, M4
Cell 8 (single machine): M2
Cell 9 (single machine): M1
Cell 10 (single machine): M6

In case a cut value corresponds to one or more nodes generated by the hierarchical process of aggregation, it is possible to include (or exclude) the node in the formation of cells. In particular, assuming a level of threshold similarity equal to 80°, two alternative configurations can be obtained as the result of the inclusion/exclusion of one or more nodes of the dendrogram located in correspondence of the cutting level:

Table 4. Simple matching similarity matrix

       m1      m2      m3      m4      m5      m6      m7      m8      m9      m10     m11     m12
m1     1.0000
m2     0.5479  1.0000
m3     0.4021  0.5479  1.0000
m4     0.5118  0.5118  0.5118  1.0000
m5     0.4389  0.5847  0.6576  0.4750  1.0000
m6     0.3292  0.4021  0.4750  0.4389  0.4389  1.0000
m7     0.4021  0.3292  0.1826  0.2195  0.2195  0.2556  1.0000
m8     0.4389  0.5118  0.5118  0.6215  0.4750  0.5118  0.2194  1.0000
m9     0.5118  0.4389  0.4389  0.5479  0.4750  0.4389  0.2924  0.5479  1.0000
m10    0.3292  0.3292  0.3292  0.1465  0.3653  0.3292  0.4750  0.2194  0.2924  1.0000
m11    0.2924  0.3653  0.3653  0.1826  0.4021  0.1465  0.3653  0.1826  0.1826  0.4389  1.0000
m12    0.3292  0.4021  0.4021  0.2194  0.4389  0.2556  0.3292  0.2194  0.2194  0.3292  0.5847  1.0000
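For reference, the textbook form of the simple-matching coefficient between two machine rows is (a + d)/n, i.e., the share of parts on which the rows agree (both ones or both zeros). The exact SI variant used for Table 4 is the one defined in Table 1 of this chapter, so the values above may come from a weighted form of this idea; the sketch below implements only the plain textbook definition.

```python
# Plain simple-matching similarity between the rows of a 0/1 incidence matrix.
import numpy as np

def simple_matching(X):
    m = X.shape[0]
    S = np.eye(m)                     # unit diagonal
    for i in range(m):
        for j in range(i):
            S[i, j] = S[j, i] = np.mean(X[i] == X[j])   # (a + d) / n
    return S
```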



Table 5. List and configuration of nodes generated by the fn rule & SI similarity coefficient

Node  Group 1  Group 2  Simil.  Objects in Group
1     M3       M5       0.658   2
2     M4       M8       0.622   2
3     M11      M12      0.585   2
4     M1       M2       0.548   2
5     Node 2   M9       0.548   3
6     M7       M10      0.475   2
7     Node 4   Node 5   0.439   5
8     Node 1   M6       0.439   3
9     Node 7   Node 8   0.329   8
10    Node 6   Node 3   0.329   4
11    Node 9   Node 10  0.146   12
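The fn (farthest neighbour) aggregation and the percentile cut can be reproduced with SciPy's complete-linkage routines, turning similarities into distances as 1 − s. The sketch below is our reconstruction under the assumption that node numbering follows merge order, as in Table 5; it is not the authors' code.

```python
# Complete-linkage clustering of a similarity matrix S (square, symmetric,
# unit diagonal) with a percentile-based threshold cut.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

def percentile_cut(S, pct):
    D = squareform(1.0 - S, checks=False)   # similarities -> condensed distances
    Z = linkage(D, method='complete')       # farthest neighbour (fn) rule
    sims = 1.0 - Z[:, 2]                    # node similarities, in merge order
    n = len(sims)                           # number of nodes (machines - 1)
    lo = sims[int(np.ceil(pct * n)) - 1]    # simil{ceil(pct * n)}
    hi = sims[int(np.floor(pct * n)) - 1]   # simil{floor(pct * n)}
    t = (lo + hi) / 2.0                     # any value in (lo, hi) cuts identically
    return fcluster(Z, t=1.0 - t, criterion='distance')   # cell label per machine
```

With pct = 0.20 on the Table 4 matrix, the threshold falls in (0.585, 0.622), reproducing the cut between nodes 2 and 3 discussed above.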

The second column in Table 8 reports the obtained values of the performance evaluation for the case study object of this numerical example adopting the Simple Matching similarity index, the fn heuristic, and the cutting threshold percentile value equal to 75°.

A Problem Oriented Evaluation


Table 6 reports the result of the evaluation of the problem oriented similarity coefficient proposed by Nair and Narendran (1998). Figure 6 shows the dendrogram obtained by applying the fn clustering heuristic rule and the Nair and Narendran (1998) problem oriented similarity coefficient to the literature instance of interest. The generic node of the dendrogram corresponds to a specific aggregation, ordered in agreement with the adopted similarity metric and the adopted hierarchical rule. The list of nodes and aggregations, the related similarity values, and the number of objects in each group are reported in Table 7; the obtained number of nodes is 11.
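Node lists such as those in Tables 5 and 7 can be generated mechanically. The following minimal sketch (illustrative Python, written under the assumption that fn denotes the standard farthest neighbour, i.e. complete linkage, rule) merges, at each step, the two clusters with the highest complete-linkage similarity and records the merge as a dendrogram node.

```python
# Farthest neighbour (complete linkage) agglomeration over a symmetric
# similarity matrix sim[a][b]; returns the dendrogram node list in the
# format of Tables 5 and 7: (node id, group 1, group 2, simil., size).

def fn_clustering(sim, items):
    clusters = [frozenset([i]) for i in items]
    nodes = []
    while len(clusters) > 1:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                # complete linkage: cluster-to-cluster similarity is the
                # minimum pairwise similarity between their members
                s = min(sim[a][b] for a in clusters[i] for b in clusters[j])
                if best is None or s > best[0]:
                    best = (s, i, j)
        s, i, j = best
        merged = clusters[i] | clusters[j]
        nodes.append((len(nodes) + 1, clusters[i], clusters[j], s, len(merged)))
        clusters = [c for k, c in enumerate(clusters) if k not in (i, j)]
        clusters.append(merged)
    return nodes
```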

Table 6. Nair & Narendran similarity matrix m1

m2

m3

m4

m5

m6

m7

m8

m9

m10

m11

m1

1.0000

m2

0.5000

1.0000

m3

0.0000

0.2220

1.0000

m4

0.6920

0.5000

0.4210

1.0000

m5

0.0000

0.2860

0.6670

0.2350

1.0000

m6

0.3450

0.3480

0.3640

0.5450

0.2000

1.0000

m7

0.5620

0.3080

0.0000

0.3890

0.0000

0.4620

1.0000

m8

0.5000

0.5560

0.4710

0.8570

0.2670

0.6450

0.2940

1.0000

m9

0.6670

0.2220

0.2350

0.7150

0.2670

0.4520

0.4720

0.6150

1.0000

m10

0.1670

0.0000

0.0000

0.0000

0.0000

0.3870

0.7060

0.0770

0.2310

1.0000

m11

0.0000

0.0000

0.0000

0.0000

0.0000

0.0000

0.4000

0.0000

0.0000

0.4550

1.0000

m12

0.0000

0.0000

0.0000

0.0000

0.0000

0.2310

0.2070

0.0000

0.0000

0.0950

0.5880

514

m12

1.0000


Figure 6. Dendrogram by the application of Nair & Narendran similarity coefficient and the farthest neighbour

Table 7. List and configuration of nodes generated by the fn rule & Nair and Narendran (1998) similarity coefficient

Node   Group 1   Group 2   Simil.   Objects in Group
1      M4        M8        0.857    2
2      M7        M10       0.706    2
3      M1        M9        0.667    2
4      M3        M5        0.667    2
5      M11       M12       0.588    2
6      Node 1    M6        0.545    3
7      M2        Node 6    0.348    4
8      Node 3    Node 7    0.222    6
9      Node 2    Node 5    0.095    4
10     Node 8    Node 4    0        8
11     Node 10   Node 9    0        12

Assuming %p = 20°:

T_value20° ∈ [ simil{⌈0.20 × 11⌉}, simil{⌊0.20 × 11⌋} ] = [ simil{3}, simil{2} ] = [0.667, 0.706]

The obtained configuration of the manufacturing cells (eleven different cells are obtained) is:

Single machine cells: Cell 1 (M12), Cell 2 (M11), Cell 4 (M5), Cell 5 (M3), Cell 6 (M3), Cell 7 (M6), Cell 9 (M2), Cell 10 (M9), Cell 11 (M1)
Double machine cells: Cell 3 (M7, M10), Cell 8 (M8, M4).

Assuming %p = 80°:

T_value80° ∈ [ simil{⌈0.80 × 11⌉}, simil{⌊0.80 × 11⌋} ] = [ simil{9}, simil{8} ] = [0.095, 0.222]

The obtained configuration of the manufacturing cells (four different cells are obtained) is:

Cell 1 (two machines): M11, M12
Cell 2 (two machines): M7, M10
Cell 3 (two machines): M3, M5
Cell 4 (six machines): M6, M8, M4, M2, M9, M1
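The percentile-based cut can be computed directly from the ordered node similarities. The following short sketch (illustrative Python; the node similarities are those of Table 7) reproduces the two ranges above.

```python
import math

# Percentile-based threshold cut: with the n dendrogram nodes ordered by
# decreasing similarity, the %p cut value lies between the similarities
# of nodes ceil(p*n) and floor(p*n).

def threshold_range(node_sims, p):
    n = len(node_sims)
    lower = node_sims[math.ceil(p * n) - 1]   # similarity of node ceil(p*n)
    upper = node_sims[math.floor(p * n) - 1]  # similarity of node floor(p*n)
    return lower, upper

# Node similarities from Table 7 (Nair & Narendran, fn rule):
sims = [0.857, 0.706, 0.667, 0.667, 0.588, 0.545, 0.348, 0.222, 0.095, 0.0, 0.0]
print(threshold_range(sims, 0.20))  # -> (0.667, 0.706)
print(threshold_range(sims, 0.80))  # -> (0.095, 0.222)
```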



Table 8. Performance evaluation of the numerical example; 75° percentile

Performance measure                 ID    Simple matching   Nair & Narendran
Problem Density                     PD    0.329             0.329
Inside Cells Density                ICD   0.705             0.828
REC                                 REC   0.962             1.293
Exceptional Elements                EE    20                27
Grouping Efficiency [%]             ƞ     60.2              57.9
Grouping Efficiency QI [%]          ƞQI   68.8              72.8
Group Technology Efficiency [%]     GTE   61.9              45.5
Bond Efficiency [%]                 BE    66.2              66.1
Group Efficacy [%]                  τ     82.9              84.5
Grouping measure                    ƞG    0.438             0.468

The third column in Table 8 reports the values of the performance evaluation obtained for the case study of this numerical example, adopting the Nair and Narendran similarity index, the fn heuristic, and a cutting threshold percentile value equal to 75°. Which is the best similarity index? It is not meaningful to answer this question as such, because the previous sections demonstrate that several factors affect the performance of the system configuration: the similarity index, the clustering rule, the threshold cutting value of similarity, and the part assignment rule. As a consequence, it is useful to measure the simultaneous effects generated by different combinations of these critical factors. The next section presents an experimental analysis conducted on the instance proposed by De Witte (1980), comparing the performance obtained by adopting general purpose and problem oriented similarity metrics.

EXPERIMENTAL ANALYSIS

This section presents the results obtained by the application of the proposed systematic procedure to cell formation and part assignment to cells (part family formation), as the result of different settings of the similarity index and of the hierarchical procedure, as illustrated in the previous sections. This what-if analysis is applied to the problem oriented instance introduced by De Witte (1980) and reported in Table 2. This analysis represents the first step in identifying the best combination of values, called levels, for the parameters, called factors, of the decision problem.

Table 9. What-if analysis, factors and levels

Factor                   General purpose                       Problem oriented
Similarity Coefficient   J, SI, H, B, SO, R, SK, O, RM, RR     S, GS, SH (fbk=0.6; fek=0.4), N
Rule                     CLINK, ALINK, SLINK
Percentile               10°, 25°, 40°, 50°, 75°

Figure 7. Block-diagonal matrix: Nair and Narendran, farthest neighbour, 75° percentile



Table 9 reports the adopted levels for each factor in the experimental analysis. Figures 8 to 10 present the main effects plots (Minitab® Statistical Software) for the following performance indices: ƞG (called ƞ(G) in the figures), τ, and BE.
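As an illustration of how such plots are obtained (the chapter used Minitab; the data below are hypothetical and the layout is assumed), a main effect is simply the mean of a performance index over all runs that share a given factor level:

```python
from collections import defaultdict
from statistics import mean

# Each run: (similarity index, rule, percentile) -> observed grouping measure.
# Hypothetical values, for illustration only.
runs = {
    ("SI", "CLINK", 50): 0.41, ("SI", "SLINK", 50): 0.35,
    ("N",  "CLINK", 50): 0.47, ("N",  "SLINK", 75): 0.44,
}

def main_effects(runs, factor_pos):
    levels = defaultdict(list)
    for setting, value in runs.items():
        levels[setting[factor_pos]].append(value)
    return {level: mean(vals) for level, vals in levels.items()}

print(main_effects(runs, factor_pos=1))  # main effect of the clustering rule
```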

The similarity indices perform differently in terms of ƞG, τ and BE. In particular, problem oriented (PO) indices perform better than general purpose (GP) ones. The CLINK rule and a percentile threshold value equal to 50° (or 75°) seem to be the best levels for setting the clustering algorithm. The best performing indices are Seifoddini - S (1987) and Nair and Narendran - N (1998).

Figure 8. Main effects plot for grouping measure

Figure 9. Main effects plot for grouping efficacy



Figure 10. Main effects plot for bond efficiency

Figure 11 shows that the number of exceptional elements significantly depends on the adopted threshold value of group similarity, while the adopted similarity index is not important. ƞQI, called ƞ(QI) in Figure 12, shows an anomalous trend compared with the previous graphs.

Figure 11. Main effects plot for exceptional elements


Figure 13 shows the trend of the EE for different couples of factors, and the importance of the percentile threshold value of group similarity. Similarly, Figure 14 shows the importance of the threshold value of similarity and of the CLINK rule for grouping items.


Figure 12. Main effects plot for grouping efficacy based on QI

CONCLUSION AND FURTHER RESEARCH

This chapter illustrates the CFP as supported by similarity-based manufacturing clustering, together with a hierarchical and systematic procedure for supporting managers in the configuration of cellular manufacturing systems through the application of cluster analysis and similarity indices. In particular, both general purpose and problem oriented indices are illustrated and applied. The experimental analysis conducted on a problem oriented case study from the literature represents the first basis for the identification of the best setting of the cell formation problem and for supporting decision models and tools.

Figure 13. Exceptional elements for couples of factors



Figure 14. Interaction plot for τ

For the first time, this chapter successfully applies the threshold group similarity index to a problem oriented similarity environment. The threshold value was introduced by the authors in a previous study on the evaluation of general purpose indices (Manzini et al., 2010). This chapter confirms the importance of this threshold cut value for the dendrogram when it is expressed as a percentile of the number of nodes. Further research is expected to improve the experimental analysis by including more case studies and applications. Finally, it is important to improve the critical process of part family formation and the decisions regarding the duplication of machines and resources in different manufacturing cells in order to minimize intercellular flows.

REFERENCES

Aldenderfer, M. S., & Blashfield, R. K. (1984). Cluster analysis (Sage University Paper Series on Quantitative Applications in the Social Sciences, No. 07-044). Beverly Hills, CA: Sage.


Alhourani, F., & Seifoddini, H. (2007). Machine cell formation for production management in cellular manufacturing systems. International Journal of Production Research, 45(4), 913–934. doi:10.1080/00207540600664144 Bindi, F., Manzini, R., Pareschi, A., & Regattieri, A. (2009). Similarity-based storage allocation rules in an order picking system. An application to the food service industry. International Journal of Logistics Research and Applications, 12(4), 233–247. doi:10.1080/13675560903075943 De Witte, J. (1980). The use of similarity coefficients in production flow analysis. International Journal of Production Research, 18, 503–514. doi:10.1080/00207548008919686 Gupta, T., & Seifoddini, H. (1990). Production data based similarity coefficient for machinecomponent grouping decisions in the design of a cellular manufacturing system. International Journal of Production Research, 28, 1247–1269. doi:10.1080/00207549008942791 Heragu, S. (1997). Facilities design. Boston, MA: PWS Publishing Company.


Kumar, C. S., & Chandrasekharan, M. P. (1990). Grouping efficacy a quantitative criterion for goodness of block diagonal forms of binary matrices in group technology. International Journal of Production Research, 28(2), 233–243. doi:10.1080/00207549008942706 Manzini, R., & Bindi, F. (2009). Strategic design and operational management optimization of a multi stage physical distribution system. Transportation Research Part E, Logistics and Transportation Review, 45, 915–936. doi:10.1016/j. tre.2009.04.011 Manzini, R., Bindi, F., & Pareschi, A. (2010). The threshold value of group similarity in the formation of cellular manufacturing system. International Journal of Production Research, 48(10), 3029–3060. doi:10.1080/00207540802644860 Manzini, R., Persona, A., & Regattieri, A. (2006). Framework for designing and controlling a multicellular flexible manufacturing system. International Journal of Services and Operations Management, 2, 1–21. doi:10.1504/ IJSOM.2006.009031 McAuley, J. (1972). Machine grouping for efficient production. Production Engineering, 51, 53–57. doi:10.1049/tpe.1972.0006 Mosier, C. T. (1989). An experiment investigating the application of clustering procedures and similarity coefficients to the GT machine cell formation problem. International Journal of Production Research, 27(10), 1811–1835. doi:10.1080/00207548908942656 Nair, G. J., & Narendran, T. T. (1998). CASE: A clustering algorithm for cell formation with sequence data. International Journal of Production Research, 36, 157–179. doi:10.1080/002075498193985

Papaioannou, G., & Wilson, J. M. (2010). The evolution of cell formation problem methodologies based on recent studies (1987-2008): Review and directions for future research, (vol. 206, pp. 509-521). Sarker, B. R. (2001). Measures of grouping efficiency in cellular manufacturing systems. European Journal of Operational Research, 130, 588–611. doi:10.1016/S0377-2217(99)00419-1 Seifoddini, H. (1987). Incorporation of the production volume in machine cell formation in group technology applications. Proceedings of the 9th International Conference on Production Research ICPR, (pp. 2348-2356). Seifoddini, H., & Djassemi, M. (1994). Analysis of efficiency measures for block diagonal machine-component charts. Proceedings of the 16th International Conference on Computers and Industrial Engineering, Ashikaga, Japan. Seifoddini, H., & Djassemi, M. (1996). The thresold value of a quality index for formation of cellular manufacturing systems. International Journal of Production Research, 34(12), 3401–3416. doi:10.1080/00207549608905097 Sokal, R. R., & Sneath, P. H. A. (1968). Principles of numerical taxonomy. San Francisco, CA: W. H. Freeman. Stawowy, A. (2004). Evolutionary strategy for manufacturing cell design. Omega: The International Journal of Management Science, 34, 1–18. doi:10.1016/j.omega.2004.07.016 Yin, Y., & Yasuda, K. (2006). Similarity coefficient methods applied to cell formation problem: A taxonomy and review. International Journal of Production Economics, 101, 329–352. doi:10.1016/j. ijpe.2005.01.014

This work was previously published in Operations Management Research and Cellular Manufacturing Systems: Innovative Methods and Approaches, edited by Vladimir Modrák and R. Sudhakara Pandian, pp. 140-163, copyright 2012 by Business Science Reference (an imprint of IGI Global).


Chapter 30

Performance Comparison of Cellular Manufacturing Configurations in Different Demand Profiles

Paolo Renna, University of Basilicata, Italy
Michele Ambrico, University of Basilicata, Italy

ABSTRACT

Cellular manufacturing systems (CMSs) are an effective response to an economic environment characterized by high market variability. The aim of this chapter is to compare different configurations of cellular models through the main performance measures. These configurations are fractal CMSs (defined FCMSs) and cellular systems with remainder cells (defined RCMSs), compared to the classical CMS used as a benchmark. FCMSs consist of a cellular system characterized by identical cells, each capable of producing all types of parts. RCMSs consist of a classical CMS with an additional cell (the remainder cell) that under specific conditions is able to perform all the technological operations. A simulation environment based on Rockwell ARENA® has been developed to compare the different configurations, assuming a constant demand mix and different congestion levels. The simulation results show that RCMSs can be a competitive alternative to traditional cells when opportune methodologies to control the loading of the cells are developed.

DOI: 10.4018/978-1-4666-1945-6.ch030

INTRODUCTION

Competitiveness in today's market is much more intense compared to the past decades. Considerable resources are invested in facilities planning and

re-planning in order to adapt the manufacturing systems to the market changes. A well-established manufacturing philosophy is the group technology concept. Group technology (GT) can be defined as a manufacturing philosophy identifying similar parts and grouping them together to take advantage



of their similarities in manufacturing and design (Selim et al., 1998). It is the basis of so-called cellular manufacturing systems (CMSs). In the current production scenario, demand for products is characterized by continuous fluctuations in terms of volumes, type of product (part mix) and new product introductions, and the life cycle of products has been significantly reduced. The planning horizon needs to be divided into smaller horizons (time buckets), and the length of each period is related to the characteristics of the products. These characteristics need to be considered in the design process of a manufacturing system. The introduction of cellular manufacturing systems has already brought significant improvements. They are conceived with the aim of reducing costs, such as setup costs or handling costs, and also of reducing lead time and work in process (WIP). They combine the advantages of flow shops and job shops, but a further step can be accomplished to be competitive in the market. They allow significant improvements in product quality, worker satisfaction and space utilization. Benefits and disadvantages (Irani et al., 1999) are shown in Table 1. Irani et al. documented that companies implementing cellular manufacturing have a very high probability of obtaining improvements in various areas. For each measure, Table 1 reports the percentage of cases with improvements and the average percentage improvement, as well as the percentage of cases with worsening and the average percentage worsening. Demand volatility and continuous new product introduction force cellular manufacturing systems to be reconfigured several times in order to keep a high level of performance. For the above reasons, new configurations have been proposed in the literature, such as the Virtual Cell Manufacturing System (VCMS), the Fractal Cell Manufacturing System (FCMS) and the Dynamic Cell Manufacturing System (DCMS), with the aim of keeping manufacturing systems highly flexible. The concept of the DCMS was introduced for the first time by Rheault et al. (1995). It provides a physical reconfiguration of the cells. The reconfiguration activity can be periodic or triggered by the variation of performance parameters.

Table 1. Benefits and disadvantages of CMS

Measure                       % cases with improvements   Avg % improvement   % cases with worsening   Avg % worsening
Tooling cost                  31%                         -10%                69%                      +17%
Labor cost                    91%                         -33%                9%                       +25%
Setup time                    84%                         -53%                16%                      +32%
Cycle time                    84%                         -40%                16%                      +30%
Machine utilization           53%                         +33%                47%                      -20%
Subcontracting                57%                         -50%                43%                      +10%
Product quality               90%                         +31%                10%                      -15%
Worker satisfaction           95%                         +36%                5%                       -
Space utilization             17%                         -25%                83%                      +40%
WIP inventory                 87%                         -58%                13%                      +20%
Labor turnover/absenteeism    100%                        -50%                0                        -
Variable production cost      93%                         -18%                7%                       +10%



Reconfiguring can mean duplicating machines, relocating machines between cells, removing machines, or also subcontracting some parts to other companies. These problems must be addressed by the decision maker. The concept of the VCMS requires that the machines are dedicated to a part family, but these machines are not necessarily close together as in a classical cell. One machine can belong simultaneously to different cells; hence, the sharing of machines makes the system more flexible. Moreover, the machines are not moved as in a dynamic cellular system, therefore reallocation costs are eliminated. On the other hand, the increase in the movements of parts (or batches) across machines must be considered. A further problem may be the complication in measuring the performance of the cells: monitoring stations are usually located outside the cell, but in this case the cell does not exist physically. FCMSs are based on the construction of identical cells that are not built for different families. The idea comes from Skinner (1974): to build a factory within a factory with duplication of processes. Each cell can work on all products. Working times will be greater, but these configurations are very effective if there are changes in part mix, in cases of machine breakdowns, or, for example, if there are flash orders. A further idea was mentioned by Maddisetty (2005), who referred to so-called remainder cells; we can call these systems RCMSs. In addition to the traditional cells dedicated to the product families, an additional cell may be created that operates when conditions such as machine failures or overloaded machines occur. Focusing on an advanced design, RCMSs could provide interesting results in terms of competitiveness. Our goal in this chapter is to compare the various approaches to the design of manufacturing systems, making a complete performance comparison. In particular, we aim to compare the following systems: CMSs, FCMSs and RCMSs. A simulation environment has been developed to


compare the performance measures (WIP, throughput time, tardiness, throughput and average utilization) using the classic CMS as a benchmark. The aim is to evaluate the responses of the different systems when market fluctuations occur in terms of arrival demand. The chapter is structured as follows. Section 2 provides an overview of the literature on the various manufacturing system configurations, while in section 3 the system context is formulated. Section 4 gives a brief description of the scheduling approaches. Section 5 presents the simulation environment and the case study, while section 6 discusses the simulation results. In section 7 conclusions and future developments are discussed.

BACKGROUND

Recently, several authors have investigated the configuration of manufacturing cells in order to keep a high level of performance when market conditions change. Hachicha et al. (2007) proposed a simulation based methodology which takes the stochastic aspects of the CMS into consideration. They took into account the existence of exceptional elements between the parts and the effect of the corresponding inter-cell movements. They compared two strategies: permitting intercellular transfer, and exceptional machine duplication. They used simulation (Rockwell Arena) and analyzed the following performance measures: mean transfer time, mean machining time, mean wait time and mean flow time. They assumed the demand for the parts to be fixed and known, and they did not consider machine failures or maintenance policies. A multi-objective dynamic cell formation model was presented by Bajestani et al. (2007), whose purpose was to minimize simultaneously the total cell load variation and the sum of miscellaneous costs (machine cost, inter-cell material handling cost, and machine relocation cost). Since the problem


is NP-hard, they used a scatter search approach for finding a locally Pareto-optimal frontier. Safei et al. (2007) proposed an approach based on fuzzy logic for the design of CMSs under uncertain and dynamic conditions. They started from the observation that in most research related to DCMSs the input parameters were considered deterministic and certain; therefore, they introduced fuzzy logic as a tool for expressing the uncertainty in design parameters such as part demand and available machine capacity. Ahkioon et al. (2007) investigated DCMSs focusing on routing flexibility. They studied the creation of alternate contingency process routings in addition to alternate main process routings for all part types. Contingency routings have the function of providing continuity in case of exceptional events such as machine breakdowns, but also flash orders. Furthermore, their work discussed the trade-off between the additional cost related to the formation of contingency routings and the advantages of increased flexibility. The linearized model proposed by the authors was solved with CPLEX. Aryanezhad et al. (2008) developed a new model which simultaneously embraces the dynamic cell formation and the worker assignment problems. They focused on two separate cost components: machine based costs, such as production costs, inter-cell material handling costs and machine costs, and human related costs, such as hiring costs, firing costs, training costs and wages. They compared two models: one considered only the machine costs and the other considered both machine costs and human related costs. The model was NP-hard even though the learning curve was not considered. Wang et al. (2008) proposed a nonlinear multi-objective mathematical model for the dynamic cell formation problem by weighing three conflicting objectives: machine relocation costs, utilization rate of machine capacity, and total number of inter-cell moves over the planning horizon. A scatter search approach was

developed to solve the nonlinear model. The results were compared with those obtained by CPLEX. They considered certain demand and did not consider machine breakdowns. Safei et al. (2009) proposed an integrated mathematical model of multi-period cell formation and production planning in a dynamic cellular manufacturing system (DCMS). The focus was on the effect of the trade-off between production and outsourcing costs on the reconfiguration of the cells. Balakrishnan (2005) discussed cellular manufacturing systems under conditions of changing product demand. He made a conceptual comparison with virtual cell manufacturing and discussed a case study. Kesen et al. (2008) investigated three different types of system (cellular layout, process layout and virtual cells) by using simulation. They focused on the following performance measures: mean flow time and mean tardiness. Based on these simulations, they used regression meta-models to estimate the systems' behaviours. They considered only one family-based scheduling scheme and did not consider extraordinary events such as machine failures. Vakharia et al. (1999) proposed and validated analytical approximations for comparing the performance of virtual cells and multistage flow shops. First, they used these approximations and hypothetical data to identify some key factors that influence the implementation of virtual cells in a multistage flow shop environment; then they concluded with an application of the approximations to industrial data. Kesen et al. (2009) examined the behaviours of VCMs, process layouts and cellular layouts. They addressed the VCMs by using a family-based scheduling rule. The different systems were compared by simulation. Subsequently, they developed ant colony optimization based meta-models to reflect the systems' behaviours. Kesen et al. (2010) presented a genetic algorithm based heuristic approach for job scheduling



in virtual manufacturing cells (VMCs). Cell configurations were made to optimize the scheduling objective under changing demand conditions. They considered the case with multiple jobs and different processing routes. Multiple machine types were considered, with several identical machines of each type located in different positions on the shop floor. The objective was to minimize the total travelling distance. To evaluate the effectiveness of the genetic algorithm heuristic, they compared it with a mixed integer programming solution. The results showed that the genetic algorithm was promising in finding good solutions in much shorter times. Venkatadri et al. (1997) proposed a methodology for designing job shops under the fractal layout organization as an alternative to the more traditional function and product organizations. The challenge in assigning flow to workstation replicates was that flow assignment is in itself a layout dependent decision problem. They proposed an iterative algorithm that updated layouts depending on flow assignments, and flow assignments based on layouts. Their work had the far-reaching consequence of demonstrating the validity of the fractal layout organization in manufacturing systems (FCMSs). Montreuil (1999) developed a new fractal alternative for manufacturing job shops which allocated the total number of workstations for most processes equally across several fractal cells. He introduced the fractal organization and briefly discussed the process of implementing fractal designs. He illustrated a case example and showed that the system is characterized by great flexibility. Maddisetty (2005) discussed the design of cells in a probabilistic demand environment and introduced the idea of remainder cells (RCMSs). A remainder cell is a kind of lung to cope with changes in demand. He examined the following performance measures: total WIP, average flow time and machine utilization. He proposed a comparison using three different approaches: mathematical, heuristic and simulation.


Süer et al. (2010) proposed a new layered cellular manufacturing system that forms dedicated, shared and remainder cells to deal with probabilistic demand, and compared its performance with the classical cellular manufacturing system. Simulation and statistical analysis were performed to help identify the best design within and among both the layered cellular design and the classical cellular design. They observed that the average flow time and the total WIP were not always the lowest when additional machines were used by the system, but the layered cellular system performed better when demand fluctuations were observed. There are several limitations in the existing literature. In previous research the demand of products was usually determined at the beginning of each period and was known; a change in part mix was rarely assumed. Frequently, the bottleneck station in each cell was considered fixed and independent of the type of part. Exceptional events such as machine failures and maintenance were almost never taken into account; flash orders, and similarly backorders, were almost never considered. The concept of the learning curve was rarely covered. Furthermore, researchers hardly ever focused on a wide range of performance measures. In this chapter, the objective is to evaluate the reaction of different manufacturing system configurations (CMSs, FCMSs and RCMSs) when there is a fluctuation in the arrival demand. The configurations are investigated considering the same machines for all cases; the machines are set in order to obtain each particular configuration. The analysis conducted allows the most promising configurations to be highlighted in terms of performance measures. Another objective of the chapter is to develop a simulation environment based on the Rockwell Arena® tool in order to analyse the different configurations. Simulation allows a model to be built with fewer simplifications compared to mathematical models, which require significant simplifications (linearization) in cases


of complex systems. Moreover, the dynamic model (demand not known a priori, unexpected events like machine breakdowns) cannot be obtained with mathematical models.

MANUFACTURING SYSTEM CONTEXT

The objective of this chapter is to compare the performance of different manufacturing systems. In particular, the configurations analyzed by using simulation tools based on the software Rockwell ARENA® are: CMS, FCMS and RCMS. Moreover, another configuration has been considered by changing the layout of the machines, obtaining a CMS in line. The manufacturing system consists of M general purpose machines that are used for each configuration. Three part families have been considered, with a constant mix for each part family. We introduce the following assumptions for the model:

• the demand for each part type is unknown a priori and is extracted randomly from an exponential distribution; therefore, the parameter to set is the exponential parameter;
• set-up times are not simulated: once the manufacturing cells are configured, the setup times are very low for the product family assigned to the cell;
• the due date is obtained by the processing time multiplied by an index greater than or equal to 1;
• machine breakdowns and maintenance are not considered;
• intra-cell handling times are negligible;
• it is assumed that parts move in units;
• each configuration presents the same number of machines in order to make the comparison under the same conditions.

The performance measures used to compare the manufacturing systems are the following:

• Work in Process (WIP);
• Average utilization of the manufacturing system;
• Throughput time;
• Average throughput time;
• Tardiness (total of all the parts);
• Throughput.

Figure 1 describes the parameters and the performance measures analyzed in this research.

Figure 1. Manufacturing configurations analysis

The first manufacturing system configuration considered is the classical cellular system (CMS); its scheme is shown in Figure 2. The system manufactures N product families with N cells. Each cell is specialized to perform the technological operations required by the assigned product family (setup time is not necessary). In this chapter a CMS with a different routing has also been considered, as shown in Figure 3.

Figure 2. CMS configuration

Figure 3. CMS in line configuration

The second configuration considered is the FCMS. In this case, the allocation of machines to cells is performed in order to obtain N identical cells. Each cell manufactures all the product families with higher processing times, because the machines must be able to perform all the technological operations required. The scheme of the FCMS is shown in Figure 4.

Figure 4. FCMS configuration

The third configuration considered is the RCMS. In this configuration there are N cells, one for each of the N product families. In addition, there is a further cell, called the remainder cell, where all the operations can be performed with higher processing times. It may be useful in case of machine failures, but also in case of congestion of the system. The scheme of the RCMS is shown in Figure 5.

Figure 5. RCMS configuration

Each configuration includes the same number of machines, and the time to manufacture each part is assumed to be the same, except for the fractal cells (belonging to the FCMS) and the remainder cell (belonging to the RCMS), where machines can produce all kinds of parts with a higher processing time (general purpose machine configuration). Therefore, the processing time of the i-th machine in a fractal cell (ptif) and the processing time of the i-th machine in the

remainder cell (ptir) are greater than the processing time of the i-th machine in the j-th cell of the CMS, that is, of a machine configured for the technological operations of a particular family (ptij):

over = ptif / ptij = ptir / ptij, over > 1    (1)
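For example, with over = 1.2 (the value adopted later in the simulation experiments) and a dedicated machine processing time ptij = 20 minutes, the corresponding general purpose machine takes ptif = ptir = 1.2 × 20 = 24 minutes for the same operation.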

LOADING POLICY Figure 3. CMS in line configuration

In the previous section we have discussed the different cell configurations. Each configuration needs a loading approach policy to operate. For classical CMS parts arrive in the system and each family has its own cell competence. In CMS we have provided two different layouts: one Figure 4. FCMS configuration

erations required. The scheme of FCMS is showed in Figure 4. The third configuration considered is the RCMS. In this configuration there are N cells respectively for N product families. In addition, there is a further cell called remainder cell where all operations can be performed with higher processing times. It may be useful in case of machine failures but also in case of congestion of the system. The scheme of RCMS is showed in Figure 5. Each configuration includes the same number of machines and the time to manufacture each part is assumed the same, except for fractal cells (belonging to FCMS) and the remainder cell (belonging to RCMS) where machines can produce all kinds of part with a higher processing time (general purpose machine configuration). Therefore the processing time of machine i-th in fractal cell (ptif) and processing time of machine i-th in 528

Figure 5. RCMS configuration

Performance Comparison of Cellular Manufacturing Configurations in Different Demand Profiles

with parallel machines and other with machines in line, as described above. In FCMS configuration parts arrive in the system and they are routed to cells with minor workload. The RCMS needs a specific loading policy for the use of the remainder cell. Parts arrive in the system and each cell is designed for a part family. In each cell, there is a controller that adopts the following strategy: it measures the number of parts in queue in each machine. If the measured value in cell j-th is greater than a maximum threshold of the cell (defined Smaxj) then the part is conveyed to the remainder cell. Similarly, when the measured value is minor of a minimum threshold of the cell j-th (defined Sminj) then the part is assigned to the cell designed for the part family. The logic of controller above described is showed in the flowchart of Figure 6.

SIMULATION ENVIRONMENT The manufacturing system consists of M=10 machines. All different configurations are obtained re-allocating the same number of machines available. It is considered that each machine functions

for 24 hours a day. Therefore total numbers of minute that system works is considered to be 43200 minutes per month. This is the simulation horizon considered. In order to evidence only the difference among the configurations, it is assumed that each part needs 40 minutes to complete processing. This technological time is divided by the number of machines used in the process, depending on manufacturing configuration. As above introduced it is equal for all parts except for those made in fractal cells and remainder cell where machines take more time. We assume three product families. The product mix is as follow: Product 1 (40%), Product 2 (40%) and Product 3 (20%). We have analyzed the performance of four different cellular systems changing one parameter: the average inter-arrival time. We have considered five different values of inter-arrival time that leads to different congestion levels of the manufacturing system (see Table 2). These values were selected to keep the average utilization of machines in a range that goes from 0.56 (low utilization) to 0.99 (high utilization). The demand for each part type is unknown at priori and it is extracted randomly from an

Figure 6. The logic of RCMS

529

Performance Comparison of Cellular Manufacturing Configurations in Different Demand Profiles

Table 2. Average inter-arrival times 4 4.5 5 6 7

exponential distribution with mean equal to the inter-arrival time reported in Table 2. The due date is obtained from the sum of the arrival time (tnow) and the technological working time (WT) multiplied with an index (DdateINDEX), as showed in equation 2. Ddate = tnow + (WT ⋅ DdateINDEX )

(2)

The WT is obviously equal to 40 minutes. The Ddateindex is 1.5 for parts 1 and 2, while it is 1 for part 3. The minor index of part 3 is justified by the lower demand than other part-mix, so there is no shift of the due date. However the due dates are the same for all configurations examined. Therefore not affect the comparison, but they are included in the model for completeness. Cellular systems analyzed are those already mentioned: CMS, CMS in line, RCMS and FCMS. The benchmark system is the CMS. The simulation environment has been developed by Rockwell Arena® tool. Arena is characterized by a block diagram that makes it more familiar environment simulation. Figure 7. Arrival and exit stations

530

The arrival stations of the parts and the exit station are showed in the Figure 7. In the first three boxes are showed the arrival stations where to each part is assigned a delivery time and a destination in the respective cell for processing; then the parts leave the arrival station. Exit station is equal for all types of configuration: if the delivery time has been observed then the WIP is updated and the part leaves the system. Otherwise the delay is calculated.

Cellular Manufacturing System In this case we consider three cells of production. The first two cells containing four identical machines working in pairs and in parallel. These cells are respectively for both products type 1 and type 2. The third cell contains 2 machines for products of type 3 (minor product mix). Each machine has a process time equal to 20 minutes. The scheme is showed in Figure 8. In each rectangle is indicated the working time.

Cellular Manufacturing System in Line In this case we also consider 3 cells of production. The first two cells containing 4 machines in line. Each machine has a process time equal to 10 minutes. These cells are respectively for type 1 and type 2. The third cell contains 2 machines for product type 3, each machine has a process

Performance Comparison of Cellular Manufacturing Configurations in Different Demand Profiles

Figure 8. CMS considered in simulation

Figure 9. CMS in line considered in simulation

Remainder Cellular Manufacturing System In this case there are 3 cells (one for each part type) and there is a remainder cell where is defined a loading policy based on the number of parts in queue in other cells. The scheme is showed in Figure 11. The three machines operating in cell 1 (product type 1) has a process time equal to 13,33 minutes. The same for the machines operating in cell 2 (product type 2). The two machines operating in cell 3 has a process time equal to 20 minutes. The machines assigned to the remainder cell perform the manufacturing operations with a

Figure 10. FCMS considered in simulation

time equal to 20 minutes. The scheme is showed in Figure 9.

Fractal Cellular Manufacturing System In this case there are 5 identical cells. Each cell contains 2 machines and each cell is able to work on all the product mix. The scheme is showed in Figure 10. Naturally the machines perform the manufacturing operations with a major process time (see equations 1 and 2) because they are not dedicated to a part family but they are configured for all operations. In fact the process time of each machine is equal to 20 units time increased by 20% (over=1.2).

Figure 11. RCMS considered in simulation

531

Performance Comparison of Cellular Manufacturing Configurations in Different Demand Profiles

major process time (see equations 1 and 2) because they are configured for all operations; the process time of each machine is equal to 20 units time increased by 20%(over=1.2). In this work, it has been investigated different instances of the same policy loading about the use of remainder cell. Each cell has a controller that measures the number of parts in queue in each machine. Using thresholds the parts can be conveyed to the remainder cell. In ARENA the controller is showed in Figure 12. The first “scan” controls the maximum threshold (Smaxj) and therefore assigns the part to the cell. Similarly, the second “scan” checks the minimum thresholds(Sminj). For the values of maximum (Smaxj) and minimum (Sminj) thresholds have been considered respectively six cases, equal for all three cells (see Table 3).

SIMULATION RESULTS The length of each simulation is fixed to 43200 minutes. During this period the average interarrival time and part mix are both constant. Table 4 reports the design of simulation experiments conducted for all four configurations of the manufacturing system. Combining the five inter-arrival times, four system configurations, and for the last configuration (RCMS) six cases regarding the thresholds, it has been obtained 45 experimental classes. Figure 12. Control blocks cell 1

532

For each experiment class have been conducted a number of replications able to assure a 5% confidence interval and 95% of confidence level for each performance measure. As previously described the performance measures investigated are the following: • •

Work in Process (WIP); Average utilization of the manufacturing system (av.utilization); Throughput time for each part j(thr. Time j); Average throughput time (average thr. Time); Total tardiness time of all the parts (tardiness); Throughput (thr.).

• • • •

The objective of the analysis of simulation results is the comparison between different manufacturing configurations and classical cellular configuration (CMS, used as base for percentage

Table 3. Threshold values Cases

Smax

1

7

Smin 5

2

5

3

3

3

2

4

4

1

5

3

1

6

2

1

Performance Comparison of Cellular Manufacturing Configurations in Different Demand Profiles

Table 4. Experimental classes Exp. No.

Configuration

Inter-arrival

Exp. No.

Configuration

Inter-arrival

1

CMS

4

26

RCMS(3,2)

4

2

CMS

4,5

27

RCMS(3,2)

4,5

3

CMS

5

28

RCMS(3,2)

5

4

CMS

6

29

RCMS(3,2)

6

5

CMS

7

30

RCMS(3,2)

7

6

CMS in line

4

31

RCMS(4,1)

4

7

CMS in line

4,5

32

RCMS(4,1)

4,5

8

CMS in line

5

33

RCMS(4,1)

5

9

CMS in line

6

34

RCMS(4,1)

6

10

CMS in line

7

35

RCMS(4,1)

7

11

FCMS

4

36

RCMS(3,1)

4

12

FCMS

4,5

37

RCMS(3,1)

4,5

13

FCMS

5

38

RCMS(3,1)

5

14

FCMS

6

39

RCMS(3,1)

6

15

FCMS

7

40

RCMS(3,1)

7

16

RCMS(7,5)

4

41

RCMS(2,1)

4

17

RCMS(7,5)

4,5

42

RCMS(2,1)

4,5

18

RCMS(7,5)

5

43

RCMS(2,1)

5

19

RCMS(7,5)

6

44

RCMS(2,1)

6

20

RCMS(7,5)

7

45

RCMS(2,1)

7

21

RCMS(5,3)

4

22

RCMS(5,3)

4,5

23

RCMS(5,3)

5

24

RCMS(5,3)

6

25

RCMS(5,3)

7

computation). The aim is to use the performance parameters to highlight the behaviour of the different configurations when the volume of demand changes (i.e., when the average inter-arrival time varies). Table 5 shows the average utilization of the machines in the classical CMS at the different inter-arrival times; the simulations are therefore performed for five congestion levels of the manufacturing system. It is important to emphasize that the results shown do not include machine breakdowns. Table 6 reports the first three parameters (WIP, Tardiness and Throughput) for the different manufacturing configurations. Table 6 shows the

average values over the inter-arrival times with the respective standard deviations (St. dev). The standard deviation highlights the variability of the results when the inter-arrival time changes.

CMS

Inter-arrival time

Av. utilization

4

0,99

4,5

0,88

5

0,80

6

0,66

7

0,57

533

Performance Comparison of Cellular Manufacturing Configurations in Different Demand Profiles

percentages refer to the comparison with the classical CMS. The positive percentages represent an increase of the respective factor while the negative percentages represent a decrease. Table 7 is the same for the throughput time of different parts and for the average throughout time. Tables 6 and 7 show that CMS with configuration in line has almost the same behaviour of the classical CMS except for the tardiness that increments significantly. Tables 6 and 7 also show that fractal configuration (FCMS) is the worst configuration. This is because the scheduling policy used is more simply. An opportune policy needs to be implemented for the FCMS. This is a limit of FMCS configuration, because a more complex control system has to

be designed. The standard deviation shows the variability of the performance measures related to the inter-arrival changes in fact the FCMS is the configuration with the higher dependence on the inter-arrival changes. As the reader can notice, the RCMS performance depends on the choose of the threshold values. Table 8 reports the variation of performance observed in correspondence of three values of inter-arrival times (5, 6, and 7). The percentages always refer to the comparison with the classical CMS. Among the various configurations of RCMS is showed only one (with thresholds 2, 1) with the most interesting results (see Table 8). Except for value of tardiness (when inter-arrival time is

Table 6. Simulation results WIP average CMS(in line)

Tardiness St. dev

2,15%

average

1,62%

85,97%

Throughput St. dev

179,28%

average

St. dev

0,01%

0,19%

FCMS

495,96%

699,08%

956,98%

1583,56%

-4,49%

6,74%

RCMS 7,5

62,55%

64,65%

148,50%

49,65%

-0,77%

1,66%

RCMS 5,3

76,76%

91,60%

136,93%

49,38%

-0,99%

2,10%

RCMS 3,2

107,54%

118,07%

134,58%

71,72%

18,64%

45,45%

RCMS 4,1

95,70%

133,95%

133,78%

96,78%

-1,47%

2,92%

RCMS 3,1

132,86%

170,05%

191,64%

237,27%

-1,62%

3,36%

RCMS 2,1

203,37%

265,32%

315,70%

514,08%

-2,21%

3,81%

Table 7. Simulation results Thr. Time 1 average

St. dev

Thr. Time 2 average

St. dev

Thr. Time 3 average

St. dev

Average Thr. Time average

St. dev

CMS(in line)

3,27%

1,24%

2,21%

2,68%

0,42%

0,79%

2,17%

1,58%

FCMS

551,65%

775,44%

547,81%

770,79%

352,77%

508,21%

496,34%

699,14%

RCMS 7,5

76,62%

63,94%

75,71%

63,20%

-12,66%

13,99%

53,20%

43,86%

RCMS 5,3

88,43%

87,56%

87,40%

86,09%

-7,56%

5,58%

62,97%

63,25%

RCMS 3,2

113,21%

101,38%

112,44%

100,65%

17,07%

44,23%

87,72%

78,76%

RCMS 4,1

101,34%

117,21%

100,78%

116,89%

-0,88%

16,07%

74,25%

89,56%

RCMS 3,1

139,38%

162,54%

136,99%

159,71%

12,55%

31,48%

104,75%

125,62%

RCMS 2,1

208,51%

275,98%

205,19%

272,66%

38,91%

75,03%

161,62%

218,90%

534

Performance Comparison of Cellular Manufacturing Configurations in Different Demand Profiles

Table 8. Simulation results: arrival comparison Inter-arrival time

CMS in line

FCMS

RCMS 2,1

WIP

Thr. time 1

Thr. time 2

Thr. time 3

Average Thr. Time

Tardiness

Throughput

5

3,20%

4,41%

3,21%

0,83%

3,08%

7,56%

0,17%

6

3,08%

3,72%

4,00%

0,02%

2,94%

5,88%

0,22%

7

2,56%

3,56%

3,51%

0,15%

2,77%

4,80%

-0,23%

5

65,16%

77,98%

76,58%

29,02%

64,87%

247,06%

0,08%

6

13,64%

19,60%

19,42%

-4,64%

13,72%

22,65%

-0,10%

7

13,23%

17,97%

17,96%

0,26%

13,92%

22,87%

-0,58%

5

18,39%

31,34%

30,25%

-18,17%

18,25%

63,03%

0,08%

6

7,95%

14,47%

14,27%

-11,52%

8,18%

6,19%

-0,20%

7

9,17%

13,43%

13,43%

-5,43%

9,14%

12,61%

0,12%

equal to 5), the other performance measures converge to values close to the CMS configuration, with differences of about 10%. The better performance of the RCMS is obtained with an inter-arrival time equal to 5, therefore with a medium-high average utilization of the manufacturing system (see Table 5). With high and low congestion levels, the other configurations have a very low performance level compared to the CMS. This is confirmed by Figure 13, which shows the profile of performance at different congestion levels. Figures 14 and 15 show the comparison of the performance measures. It is clear that the FCMS configuration performs worse in all cases, especially for an average inter-arrival time equal to 5; the design of this configuration needs to be rethought. For

higher inter-arrival times the differences tend to decline. The behaviour of the RCMS is more interesting, and there is more room for improvement. Observing the curve of RCMS (2,1) in Figure 13, it is interesting to note that the throughput time of product 3 performs better than in the other configurations. This is probably due to the fact that cell 3 has lower loads (since the part mix of product 3 is 20%) and obtains more synergy from the remainder cell. In that configuration, queues larger than 2 units (parts) are not tolerated; the remainder cell is therefore used frequently, and this is the key to the better behaviour of the system configuration. The results indicate that a better balance of utilizations between the dedicated cells and the remainder cell leads to an improvement in performance.

Figure 13. Performance comparison: RCMS



Figure 14. Performance comparison: interarrival time

Figure 15. Performance comparison: interarrival time

CONCLUSION AND FUTURE DEVELOPMENT

This chapter investigates several configurations of cellular manufacturing systems. A simulation environment is used to create equal operating conditions for the different cellular systems: each simulation includes the same number of machines, so the comparison between systems is normalized. Volume changes are analysed by changing the

536

inter-arrival times. It has been considered interesting to compare the performance because the economic environment is extremely turbulent. In particular our attention has focused on alternative approaches to traditional cells. A solution that looks interesting results is the remainder cellular manufacturing system (RCMS). The results of this research can be summarized as it follows:

Performance Comparison of Cellular Manufacturing Configurations in Different Demand Profiles







• the classical cellular configuration with machines placed in line (CMS in line) is the best solution under static market conditions; the results are very close to the case of machines that are not in line (CMS);
• the fractal cellular configuration (FCMS) gives bad results, as it is conceived for a static environment; a more complex logic with different loading policies should be devised;
• the cellular manufacturing system with remainder cell (RCMS) is already competitive in some cases with larger inter-arrival times;
• the best RCMS configuration is the one with the most stringent threshold values, which imply a greater use of the remainder cell.

From this it follows that the RCMS could become very competitive when a turbulent market involves a greater use of the remainder cell, and similarly in the presence of noise on the manufacturing system (such as machine breakdowns or maintenance). In the literature, it is known that the FCMS and the RCMS are not very competitive against the classical CMS; but in previous studies remainder cells were often used as support cells, with exclusive use in special circumstances. Our proposal is to adopt loading policies designed to achieve a strategic use of the remainder cells. The simple loading policies included in the simulation models show how the remainder cell can be used to keep the different performance measures under certain conditions. This work aims to demonstrate that under certain dynamic conditions the proposed configurations can be competitive with the classical CMS. Furthermore, this chapter demonstrates the strong dependence of the results on the design of the loading approaches, which deserves special attention. Future research could focus on defining complex loading policies able to maintain a high performance of the manufacturing system in different operating conditions, also taking into account the need for maintenance and possible

failures of the machines (also for those belonging to the remainder cell). These policies will certainly improve both the RCMS and the FCMS. Moreover, in the RCMS the machine loading logic has a strong influence on the performance. Under dynamic conditions with market fluctuations, these strategies using remainder cells can avoid the reconfiguration of manufacturing systems, avoiding downtimes and reducing costs. Future works could investigate systems that integrate the configurations shown in this chapter with intelligent decision-making systems able to interpret the variability of real production scenarios; it would also be interesting to analyze the economic aspects of the different manufacturing solutions and how they may influence the choices.

REFERENCES

Ahkioon, S., Bulgak, A. A., & Bektas, T. (2009). Cellular manufacturing systems design with routing flexibility, machine procurement, production planning and dynamic system reconfiguration. International Journal of Production Research, 47(6), 1573–1600. doi:10.1080/00207540701581809

Aramoon Bajestani, M., Rabbani, M., Rahimi Vahed, A. R., & Baharian Khoshkhou, G. (2009). A multi-objective scatter search for a dynamic cell formation problem. Computers & Operations Research, 36, 777–794. doi:10.1016/j.cor.2007.10.026

Aryanezhad, M. B., Deljoo, V., & Mirzapour Al-e-hashem, S. M. J. (2009). Dynamic cell formation and the worker assignment problem: A new model. International Journal of Advanced Manufacturing Technology, 41, 329–342. doi:10.1007/s00170-008-1479-4

Chen, C. H., & Balakrishnan, J. (2005). Dynamic cellular manufacturing under multiperiod planning horizons. Journal of Manufacturing Technology Management, 16(5), 516–530. doi:10.1108/17410380510600491

Performance Comparison of Cellular Manufacturing Configurations in Different Demand Profiles

Chen, C. H., & Balakrishnan, J. (2007). Multiperiod planning and uncertainty issues in cellular manufacturing: A review and future directions. European Journal of Operational Research, 177, 281–309. doi:10.1016/j.ejor.2005.08.027

Rheault, M., Drolet, J., & Abdulnour, G. (1995). Physically reconfigurable virtual cells: A dynamic model for a highly dynamic environment. Computers & Industrial Engineering, 29(1–4), 221–225. doi:10.1016/0360-8352(95)00075-C

Hachicha, W., Masmoudi, F., & Haddar, M. (2007). An improvement of a cellular manufacturing system design using simulation analysis. International Journal of Simulation Modelling, 4(6), 193–205. doi:10.2507/IJSIMM06(4)1.089

Safei, N., Saidi-Mehrabad, M., & Babakhani, M. (2007). Designing cellular manufacturing systems under dynamic and uncertain conditions. Journal of Intelligent Manufacturing, 18, 383–399. doi:10.1007/s10845-007-0029-5

Irani, S. A., Subramanian, S., & Allam, Y. S. (1999). Introduction to cellular manufacturing system. In Irani, S. A. (Ed.), Handbook of cellular manufacturing systems (pp. 29–30). John Wiley & Sons. doi:10.1002/9780470172476.ch

Safei, N., & Tavakkoli-Moghaddam, R. (2009). Integrated multi-period cell formation and subcontracting production planning in dynamic cellular manufacturing systems. International Journal of Production Economics, 120, 301–314. doi:10.1016/j.ijpe.2008.12.013

Kelton, W. D., & Sadowski, R. P. (2009). Simulation with Arena. McGraw-Hill.

Kesen, S. E., Sanchoy, K., & Gungor, Z. (2010). A genetic algorithm based heuristic for scheduling of virtual manufacturing cells (VMCs). Computers & Operations Research, 37, 1148–1156. doi:10.1016/j.cor.2009.10.006

Kesen, S. E., Toksari, M. D., Gungor, Z., & Guner, E. (2009). Analyzing the behaviors of virtual cells (VCs) and traditional manufacturing systems: Ant colony optimization (ACO)-based metamodels. Computers & Operations Research, 36(7), 2275–2285. doi:10.1016/j.cor.2008.09.002

Maddisetty, S. (2005). Design of shared cells in a probabilistic demand environment. PhD thesis, College of Engineering and Technology, Ohio University, Ohio, USA.

Montreuil, B. (1999). Fractal layout organization for job shop environments. International Journal of Production Research, 37(3), 501–521. doi:10.1080/002075499191643

Selim, M. S., Askin, R. G., & Vakharia, A. J. (1998). Cell formation in group technology: Review evaluation and directions for future research. Computers & Industrial Engineering, 34(1), 3–20. doi:10.1016/ S0360-8352(97)00147-2 Süer, G. A., Huang, J., & Maddisetty, S. (2010). Design of dedicated, shared and remainder cells in a probabilistic demand environment. International Journal of Production Research, 48(19), 5613–5646. doi:10.1080/00207540903117865 Vakharia, A. J., Moily, J., & Huang, Y. (1999). Evaluating virtual cells and multistage flow shops: An analytical approach. International Journal of Flexible Manufacturing Systems, 11, 291–314. doi:10.1023/A:1008117329327 Venkatadri, U., Rardin, R. L., & Montreuil, B. (1997). A design methodology for fractal layout organization. IIE Transactions, 29, 911–924. doi:10.1080/07408179708966411 Wang, X., Tang, J., & Yung, K. (2009). Optimization of the multi-objective dynamic cell formation problem using a scatter search approach. International Journal of Advanced Manufacturing Technology, 44, 318–329. doi:10.1007/s00170-008-1835-4

This work was previously published in Operations Management Research and Cellular Manufacturing Systems: Innovative Methods and Approaches, edited by Vladimir Modrák and R. Sudhakara Pandian, pp. 366-384, copyright 2012 by Business Science Reference (an imprint of IGI Global).


Chapter 31

Optimization and Mathematical Programming to Design and Planning Issues in Cellular Manufacturing Systems under Uncertain Situations

Vahidreza Ghezavati Islamic Azad University, Iran

Mohammad Saeed Jabal-Ameli University of Science and Technology, Iran

Mohammad Saidi-Mehrabad University of Science and Technology, Iran

Ahmad Makui University of Science and Technology, Iran

Seyed Jafar Sadjadi University of Science and Technology, Iran

ABSTRACT

In practice, demands, costs, processing times, set-up times, routings, and other inputs to classical cellular manufacturing systems (CMS) problems may be highly uncertain, which can have a major impact on the characteristics of the manufacturing system. Developing models for the cell formation (CF) problem under uncertainty is therefore a suitable area for researchers and belongs to a relatively new class of CMS problems that has not been researched well in the literature. Random parameters can be either continuous or described by discrete scenarios. If probability information is known, uncertainty is described using a (discrete or continuous) probability distribution on the parameters; otherwise, continuous parameters are normally limited to lie in some pre-determined intervals. This chapter introduces basic concepts about uncertainty themes associated with cellular manufacturing systems and briefly reviews the literature for this type of problem. The chapter also discusses the characteristics of different mathematical models in the context of cellular manufacturing.

DOI: 10.4018/978-1-4666-1945-6.ch031



INTRODUCTION

During the past few decades, various optimization techniques and mathematical programming approaches have been developed for cellular manufacturing systems under different random situations. In cellular manufacturing, once the work cells and the scheduling of parts in each cell are determined, the cycle time in a specific cell may exceed that of the other cells, creating a bottleneck in the manufacturing system. There are two approaches to decrease the cycle time in a bottleneck cell: duplicating bottleneck machines or outsourcing exceptional parts; together these are known as group scheduling (GS) in the literature. Selecting either approach to balance cycle times among all cells can change the machine layout characteristics by changing the type and number of machines; consequently, the formation of cells also changes with the scheduling decisions. Thus, scheduling is one of the operational issues that must be addressed concurrently with the design stage in an integrated problem so that the best performance of the cells can be achieved. Note that the scheduling problem includes many tactical parameters with random and uncertain characteristics. In addition, uncertainty or fluctuations in input parameters lead to fluctuations in scheduling decisions, which can erode the effect of cell formation decisions. Figure 1 indicates the transmission of uncertainty from tactical parameters to the CMS problem. Thus, in order to intensify the effectiveness of the solution, the integrated problem must be studied under uncertain conditions so that the final solution is robust and immune against fluctuations in the input parameters. In the concerned problem, the uncertain parameters can be listed as follows (among others):

• Demand
• Processing time
• Routings or machine-part matrix
• Machines' availability
• Failure rate of machines
• Capacities
• Lead times
• Set-up considerations
• Market aspects

The impact of each factor is discussed in the following sections.

Figure 1. Illustration of uncertainty transmission to the CMS decision

PROBLEM BACKGROUND

Group technology (GT) is a management theory that aims to group products with similar processes or manufacturing characteristics, or both. A cellular manufacturing system (CMS) is a manufacturing concept that groups products into part families based on their similarities in manufacturing processing. Machines are likewise grouped into machine cells based on the parts they are supposed to manufacture. The CMS framework is an important application of the group technology (GT) philosophy. The basic purpose of CM is to identify machine cells and part families concurrently, and to assign part families to machine cells so as to minimize the intercellular and intracellular movement costs of parts. Some real-world limitations in CF are:

• The available capacity of machines must not be exceeded;
• Safety and technological necessities must be met;
• The number of machines in a cell and the number of cells must not exceed an upper bound;
• Intercellular and intracellular costs of handling material between machines must be minimized;
• Machines must be utilized effectively (Heragu, 1997).

Aggregating traditional considerations with newer ones such as scheduling, stochastic approaches, processing time, variable demand, sequencing, and layout considerations can be more practical. This survey highlights studies relevant to planning CMS problems under uncertainty; however, a survey of deterministic conditions is also presented. Cellular manufacturing decisions are strategic decisions that can be affected by operational decisions such as scheduling, production planning, layout considerations, utilities, and productivity. Thus, to improve decision making related to cell formation design, it is necessary to integrate strategic and operational decisions in a single problem. Recently, researchers have made some efforts to integrate the two types of decisions, but a gap in the literature is that most studies assume deterministic situations, while in the real world most operational parameters are uncertain; integrated problems must therefore be studied further in uncertain situations.

In the literature corresponding to CMS problems, uncertainty has been considered under different circumstances. We have classified previous research into different groups, discussed next.

Group 1: Uncertainty can appear either in demand or in the product mix. In this group, there are two approaches for handling uncertainty: fuzzy theory and stochastic optimization. In some studies, stochastic demand is aggregated with tactical aspects such as production planning (Hurley and Whybark 1999), the layout problem (Song and Hitomi 1996), or dynamic and multi-period conditions (Balakrishnan and Cheng 2007). In other studies, uncertainty in product demand has been resolved by a fuzzy approach (Safaei et al. 2008).

Group 2: Researchers formulated and analyzed the CMS problem considering fuzzy coefficients in the objective function and constraints (Papaioannou and Wilson 2009).

Group 3: Processing times of products are assumed to be uncertain, where mathematical programming and fuzzy approaches are implemented to obtain results that are immune against perturbations of the uncertainty. Some studies, such as Sun and Yih (1996) and Andres et al. (2007), attempted to achieve solutions by heuristic procedures. Other studies have formulated the problem as a queuing network and analyzed it by queuing theory (Yang and Deane 1993).

Group 4: Uncertainty normally appears due to fluctuations in design aspects during the production process. Since fluctuations in design aspects are not certain events, uncertainty can be formulated by a set of future scenarios. In this way, some studies applied interval coefficients to resolve uncertainty (Shanker and Vrat 1998).

Group 5: In some explorations, uncertainty has been considered in the availability of resources for production equipment. Some works have formulated the CMS problem applying probability theory (Kuroda and Tomita 2005; Hosseini 2000). In addition, some considered multiple processing routes to be substituted once a machine encounters a failure (Siemiatkowski and Przybylski 2007; Asgharpour and Javadian 2004).

Group 6: Uncertainty has been recognized in similarity coefficients. For example, a new similarity coefficient has been introduced that applies fuzzy theory and is then transformed into a binary matrix (Ravichandran and Chandra Sekhara Rao 2001).

Group 7: The capacity level of machines is considered uncertain. Since this critical parameter has an important role in determining the bottleneck machine, it is vital to make flexible decisions under any realization of this parameter (Szwarc et al. 1997).

Group 8: Finally, uncertainty in the CMS problem has been detected in product arrival times to cells. Classical models assume that all products are available at the beginning of the production plan, while in real applications products may arrive at a cell at unknown times. In this way, researchers modeled the CMS problem as a queuing network to resolve the uncertainty (Yang and Deane 1993).

The literature under deterministic conditions can be classified as follows. There exist many studies on designing CMS in different areas, such as cell formation integrated with scheduling (Solimanpur et al. 2004; Aryanezhad and Aliabadi 2011), consideration of exceptional elements in CF (Tsai et al. 1997; Mahdavi et al. 2007), and works that apply meta-heuristic and heuristic methods to solve large-scale problems, which are more practical and appealing for real-case problems (Wu et al. 2006; Venkataramanaiah 2007).

OPTIMIZATION APPROACHES IN UNCERTAIN SITUATIONS

Rosenhead et al. (1972) divided decision environments into three groups: deterministic, risk, and uncertain. In deterministic situations, all problem parameters are taken as given. In risk problems, parameters have a probability distribution function that is known to the decision maker, while in uncertain situations there is no information about probabilities. Problems classified under risk are called stochastic, and the primary objective is to optimize the expected value of the system outcome. Uncertain problems are known as robust, and the primary objective is mainly to optimize the performance of the system under worst-case conditions. The aim of both stochastic and robust optimization methods is to find a solution with suitable performance under any realization of the uncertain parameters. Random parameters can be either continuous or described by discrete scenarios. If probability information is known, uncertainty is described by continuous or discrete distribution functions; if no information is available, parameters are assumed to lie in predefined intervals.

Scenario planning is a method by which decision makers address uncertainty by specifying a number of possible future states. In such conditions, the goal is to find solutions that perform well under all scenarios. In some cases, scenario planning replaces forecasting as a way to assess trends and potential changes in the industry environment (Mobasheri et al. 1989). Decision makers can thus develop strategic responses to a range of environmental adjustments, more adequately preparing themselves for an uncertain future. Under such conditions, scenarios are qualitative descriptions of possible future states, following from the present state with consideration of potential key industry events. In other cases, scenario planning is used as a tool for modeling and solving specific operational problems (Mulvey 1996). While the scenarios here also depict a range of future states, they do so through quantitative descriptions of the various values that the problem's input parameters may take. Scenario-based planning has two main drawbacks. The first is that identifying scenarios and assigning probabilities to them is a difficult task. The second is that the number of scenarios cannot be increased freely, due to limits on computation time, which consequently limits the future situations covered in decision making. The approach has the advantage that it captures statistical correlation between parameters (Snyder 2006).

DECISION MAKING APPROACHES IN UNCERTAIN SITUATIONS

There are different approaches that can be applied in the modeling process, depending on the problem characteristics: Stochastic Optimization (SO), Robust Optimization (RO), and Queuing Theory (QT), organized in the following decision tree:

• Stochastic Optimization
  • Discrete planning with a set of scenarios
  • Continuous optimization
  • Mean value model: the most popular objective in any SO problem is to optimize the expected value of the system outcome, for example minimizing expected cost or maximizing expected income.
  • Mean-variance model: in some studies, the variance and the expectation of system performance are considered simultaneously in the optimization problem.
• Probability approaches
  • Max-probability optimization: maximizing the probability of a random event, namely that the solution performs well under each realization of the random parameter.
  • Chance-constrained programming: a probabilistic event placed in the problem's constraint set, such as a service-level constraint.
• Queuing theory and Markov chains: a well-known approach.
• Robust Optimization

The objective in any stochastic optimization problem mainly focuses on optimizing the expected value of the system outcome, such as maximizing expected profit or minimizing total expected cost. In any stochastic program we must determine which variables belong to the first stage (design variables) and which belong to the second stage (control variables); in other words, which variables must be fixed first and which can be determined after the uncertainty is resolved. In the modeling process for the cellular manufacturing problem, the cell formation decisions are the first-stage variables, and the operational and tactical decisions are the second-stage variables. If both sets of decisions are made in a single stage, the model reduces to a deterministic problem in which the uncertain parameters are replaced by their mean values.
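To make this two-stage structure concrete, the sketch below fixes a first-stage design decision and evaluates the expected second-stage cost over discrete scenarios. It is a minimal illustration with invented numbers (the capacity variable, cost coefficients, and scenario data are all assumptions, not the chapter's data):

```python
# Hedged sketch of a two-stage stochastic decision: the first-stage (design)
# variable is fixed before uncertainty resolves; second-stage costs are then
# realized per scenario. All numbers here are hypothetical.
scenarios = [  # (probability, scenario-dependent operating burden)
    (0.3, 4.0),
    (0.5, 6.0),
    (0.2, 9.0),
]

def expected_total_cost(capacity: float) -> float:
    """First stage: pay for capacity. Second stage: expected operating cost."""
    investment = 2.0 * capacity                        # paid under every scenario
    operating = sum(p * burden / capacity for p, burden in scenarios)
    return investment + operating

# Choose the design variable minimizing expected cost over a small grid.
best_cost, best_cap = min(
    (expected_total_cost(c), c) for c in (0.5, 1.0, 1.5, 2.0, 2.5)
)
print(f"capacity={best_cap}, expected cost={best_cost:.3f}")
```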

Mean-Variance Models

The mean value models address only the expected performance of the system, without reflecting fluctuations in performance or the decision maker's risk aversion. However, a portion of the literature incorporates the company's level of risk aversion into the decision-making process, classically by applying a mean-variance objective function:

Min E(Cost) + λ·Var(Cost)
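A small numeric sketch of this objective (with invented scenario costs, probabilities, and risk-aversion weights) shows how increasing λ shifts the preference toward lower-variance outcomes:

```python
# Evaluate E(Cost) + lambda * Var(Cost) over discrete scenarios; the costs,
# probabilities, and risk-aversion weights below are purely illustrative.
def mean_variance(costs, probs, lam):
    mean = sum(p * c for p, c in zip(probs, costs))
    var = sum(p * (c - mean) ** 2 for p, c in zip(probs, costs))
    return mean + lam * var

costs, probs = [100.0, 140.0, 210.0], [0.5, 0.3, 0.2]
for lam in (0.0, 0.01, 0.05):
    print(f"lambda={lam}: objective={mean_variance(costs, probs, lam):.2f}")
```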



Probabilistic Approaches

While the mean-variance models consider only the expected value or variance of the stochastic objective function, there is an extensive portion of the literature that considers probabilistic information about the performance of the system: for example, maximizing the probability that the performance is good, or minimizing the probability that it is bad, under suitable and predefined definitions of "good" and "bad." We introduce two such approaches: (1) max-probability problems; (2) chance-constrained programming.

Queuing Theory for the CMS Problem

Queuing theory can be applied to any manufacturing or service system, including cellular manufacturing systems; for example, in a machine shop, jobs wait to be machined (Heragu, 1997b). In a queuing system, customers arrive by some arrival process and wait in a queue for the next available server. In the manufacturing framework, the customers can be regarded as parts, and the servers may be machines or work cells. The input process describes how parts arrive at a queue in a cell; an arrival process is commonly specified by the probability distribution of the number of arrivals in any time interval. The service process is usually described by a probability distribution as well: the service rate is the number of parts served per unit time, and the arrival rate is the number of parts arriving per unit time. Thus, measures of a queuing system, such as maximizing the probability that each server is busy (the utilization factor) and minimizing the waiting time in queues (which minimizes the work in process in the cells), can be optimized, and the cells can thereby be formed optimally.
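As a small illustration of these measures, the snippet below computes the textbook M/M/1 steady-state quantities mentioned above for a single machine; the arrival and service rates are assumed values, not data from the chapter:

```python
# Standard M/M/1 steady-state measures (textbook formulas), sketching how a
# machine in a cell could be screened: utilization, expected queue length,
# and expected waiting time. Rates are illustrative.
def mm1_metrics(lam: float, mu: float):
    assert lam < mu, "steady state requires rho < 1"
    rho = lam / mu                  # utilization (probability machine is busy)
    l_q = rho**2 / (1 - rho)        # expected number of parts waiting
    w_q = l_q / lam                 # expected waiting time in queue (Little's law)
    return rho, l_q, w_q

rho, l_q, w_q = mm1_metrics(lam=8.0, mu=10.0)   # parts per hour
print(f"rho={rho:.2f}, Lq={l_q:.2f} parts, Wq={w_q:.3f} h")
```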

Robust Optimization

When there is no probability information about the uncertain parameters, the expected cost and the other objectives discussed in the previous section are inappropriate. Many measures of robustness have been introduced for this condition. The two most common are minimax cost and minimax regret, which are directly related to one another. Just as in the stochastic optimization case, uncertain parameters in robust optimization problems may be considered either discrete or continuous. Discrete parameters are formulated using scenario-based planning. Continuous parameters are normally assumed to lie in some predefined interval, because it is often impossible to consider a "worst-case scenario" when parameter values are unbounded; this type of uncertainty is described as "interval uncertainty." The two most common robustness measures consider the regret of a solution, which is the difference (absolute or percentage) between the cost of a solution in a given scenario and the cost of the optimal solution for that scenario. Regret is sometimes described as opportunity loss: the difference between the quality of a given strategy and the quality of the strategy that would have been chosen had one known what the future held (Snyder 2006); a small numeric sketch of the minimax-regret rule follows the list below.

As already described, the performance of a cellular manufacturing system is heavily influenced by tactical and operational decisions such as scheduling, production planning, and layout. Notably, these tactical and operational decisions depend on many uncertainties affecting the system; as a result, they suffer from uncertainty, which is in turn transferred to the cell formation decisions. It is therefore essential for researchers to recognize the different types of uncertainty in the problem and to make decisions with regard to their impact. The most important uncertain parameters in the manufacturing cell formation problem are:

• Demand
• Processing time
• Routings or machine-part matrix
• Machines' failure rate
• Capacities
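As promised above, here is the minimax-regret rule on a tiny invented example; the configuration names and the cost table are hypothetical, chosen only to show the mechanics:

```python
# Sketch of the minimax-regret rule: rows are candidate cell configurations,
# columns are scenarios, entries are costs (all hypothetical).
# Regret = cost in a scenario minus the best achievable cost in that scenario.
costs = {
    "config_A": [10.0, 18.0, 25.0],
    "config_B": [14.0, 15.0, 20.0],
    "config_C": [12.0, 21.0, 17.0],
}
n_scen = 3
best_per_scenario = [min(costs[c][s] for c in costs) for s in range(n_scen)]
max_regret = {
    c: max(costs[c][s] - best_per_scenario[s] for s in range(n_scen))
    for c in costs
}
robust_choice = min(max_regret, key=max_regret.get)   # minimax regret
print(max_regret, "->", robust_choice)
```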

One of the factors causing uncertainty in the problem is product design changes during the course of production. When the product design changes, many features of the product are altered. Design changes can occur for a variety of reasons, such as changes in customer expectations, short product life cycles, and competing products entering the market. Under such circumstances, many characteristics of the product, such as demand and processing time, undergo change. Note that the causes of these changes are not certain future events, and thus they have to be predicted as discrete scenarios. In such cases, the decision space of the problem is discrete and can be optimized by discrete optimization.

As discussed earlier, one of the product features that can change due to changes in product design is the product routing. The sequence of machines that a product has to visit may change, and therefore the part-machine incidence may change. In such cases, the values in the part-machine matrix, unlike the zero-one values of classical models, can be probabilistic values between zero and one; discrete optimization can be applied to the formulation of such problems.

Another uncertain factor is the rate of access to machines, driven by machine failures. Since failures and machine downtime are not certain events, machine accessibility at the time of forming manufacturing cells is also uncertain for the decision maker. Another uncertain parameter that can affect the formation of work cells is capacity. This includes different items: the processing capacity of machines for parts as well as the physical capacities of the manufacturing framework. Such variations must be predicted at the beginning of the planning horizon. A summary of the above discussion can be found in Table 1.

MATHEMATICAL MODELLING

In this section, different mathematical models with different optimization approaches are discussed: two new models and one published model. The selected approaches are stochastic optimization and queuing theory.

Table 1. Summary of uncertainty developments in the CMS problem

No. | Uncertain parameter    | Optimization approach | Decision space
----|------------------------|-----------------------|----------------------
1   | Demand                 | Stochastic            | Continuous & Discrete
2   | Processing time        | Stochastic            | Continuous & Discrete
3   | Processing time        | Robust                | Continuous & Discrete
4   | Processing time        | Queuing Theory        | Continuous
5   | Routing                | Stochastic            | Discrete
6   | Routing                | Queuing Theory        | Discrete
7   | Capacity               | Stochastic            | Discrete
8   | Machines' availability | Queuing Theory        | Continuous & Discrete
9   | Machines' availability | Stochastic            | Continuous & Discrete
10  | Lead times             | Stochastic & Robust   | Continuous & Discrete


Model 1

In this section, a bi-objective mathematical model for forming manufacturing cells is presented, where the uncertainty appears in the part-machine matrix. As discussed earlier, changes in the design characteristics of products subject several factors to change, such as the processing routings of parts. Thus, based on scenario planning, different routing processes can be forecast for a part under uncertainty, so that each part can have a different routing process in each scenario. Therefore, in order to design the cellular configuration efficiently, all planning conditions must be considered. In the current problem, the uncertain factor is the part-machine matrix. In classical models, only zero-one elements are used in the part-machine matrix, while in the presented problem each element can be a continuous value between zero and one; each entry denotes the probability that part i visits machine j with regard to all scenarios. For example, if there are two scenarios, where the probability of the first is 0.4 and of the second is 0.6, we have:

p1 = 0.4 ⇒ routing in scenario 1 for part 1: Machine 1 → Machine 2 → Machine 3 → Finish
p2 = 0.6 ⇒ routing in scenario 2 for part 1: Machine 1 → Machine 4 → Machine 3 → Finish

            M.1   M.2   M.3   M.4
a_[1j] = [   1    0.4    1    0.6 ]

where element [ij] indicates the probability that part i is processed on machine j. Since machines 1 and 3 appear in the processing routing in both scenarios, part 1 visits them with certainty (probability 1). Based on the first scenario, the part visits machine 2 with probability 0.4, and based on the second scenario it visits machine 4 with probability 0.6. As can be seen, in the introduced part-machine matrix

each element can take a value between zero and one, according to the probabilities of occurrence of the scenarios.

In the mathematical model presented in this section, the first objective function minimizes the cost associated with underutilization in the manufacturing system, and the second objective function optimizes a random event in the manufacturing system, unlike classical models, which optimize only deterministic quantities. As discussed in the definitions of a cellular manufacturing system, one of the most important objectives is to minimize the number of intercellular movements. In this problem, since the processing route of each part is uncertain, the number of intercellular movements is uncertain too. The random event considered for optimization is "minimizing the probability that the number of intercellular movements exceeds the upper bound limitation." To formulate this objective, the following notation is defined.

Parameters:

a_ijs = 1 if part i needs to be processed on machine j in scenario s; 0 otherwise
p_s: probability of occurrence of scenario s
N: maximum number of intercellular movements allowed in each scenario

Decision variables:

x_ik = 1 if part i is assigned to cell k; 0 otherwise
y_jk = 1 if machine j is assigned to cell k; 0 otherwise
n_s: number of intercellular movements in scenario s
e_s = 1 if the number of intercellular movements in scenario s exceeds the upper bound N (that is, e_s = 1 if n_s ≥ N and e_s = 0 if n_s < N)
z_s: auxiliary integer variable for each scenario

To minimize the underutilization cost, the first objective function is defined as:

Min Z_1 = Σ_s Σ_i Σ_j p_s × (1 − a_ijs) × x_ik × y_jk    (1)

Based on the above definitions, the random event of interest for the second objective is

P(number of intercellular movements in a scenario ≥ N),

which must be optimized by minimizing its probability of occurrence, yielding maximum utility for the decision maker in the final solution. Since the s scenarios in the proposed problem behave as independent random events, the probability of their union equals the sum of their individual probabilities; that is, for n independent random events s_1, s_2, …, s_n:

P(s_1 ∪ s_2 ∪ … ∪ s_n) = P(s_1) + P(s_2) + … + P(s_n)

In other words, the above probability transforms into the following function:

Min Z_2 = Σ_s e_s × p_s    (2)

As a result, if in scenario s the number of intercellular movements exceeds the upper bound limitation, this event occurs with probability p_s, and the sum of the probabilities of the scenarios that violate the intercellular movement restriction gives the overall probability for the problem. In this model, the objective functions and the following constraints are in effect:

Min Z_1 = Σ_s Σ_i Σ_j p_s × (1 − a_ijs) × x_ik × y_jk
Min Z_2 = Σ_s e_s × p_s

Subject to:

Σ_k x_ik = 1    ∀i    (3)
Σ_k y_jk = 1    ∀j    (4)
n_s − Σ_i Σ_j Σ_k a_ijs × x_ik × (1 − y_jk) = 0    ∀s    (5)
z_s − ⌊n_s / N⌋ = 0    ∀s    (6)
z_s ≤ M × e_s    ∀s    (7)
x_ik, y_jk, e_s ∈ {0, 1};  z_s integer ≥ 0;  n_s ≥ 0

The first objective minimizes the total expected underutilization cost, incurred when a part is placed in the same cell as a machine on which it does not need to be processed. The second objective minimizes the probability that the number of intercellular movements exceeds the maximum allowed. Constraint set (3) states that each part must be assigned to a single cell, and constraint set (4) states that each machine can be assigned to only one cell. Constraint set (5) computes the total number of intercellular movements in each scenario. In constraint set (6), the auxiliary variable z_s is zero if the number of intercellular movements in scenario s is below the maximum limit, and an integer greater than or equal to 1 otherwise. Constraint set (7), with M a sufficiently large constant, guarantees that e_s = 1 if n_s ≥ N; otherwise e_s = 0.
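To make the two objectives tangible, the following sketch evaluates Z_1 and Z_2 for one fixed candidate assignment on invented data (two parts, four machines, two cells, two scenarios), following Equations (1), (2), and (5); all matrices and the bound N are assumptions for illustration:

```python
# Evaluate Model 1's objectives for a fixed candidate assignment
# (all data hypothetical). Z1 = expected underutilization cost, Eq. (1);
# n[s] = intercellular moves per scenario, Eq. (5); Z2 sums probabilities
# of scenarios whose moves reach the bound N, Eq. (2).
p = [0.4, 0.6]                          # scenario probabilities
# a[s][i][j] = 1 if part i needs machine j in scenario s
a = [
    [[1, 1, 1, 0], [1, 0, 1, 0]],
    [[1, 0, 1, 1], [1, 0, 1, 0]],
]
x = [[1, 0], [1, 0]]                    # part i -> cell k
y = [[1, 0], [1, 0], [0, 1], [0, 1]]    # machine j -> cell k
N = 3                                   # allowed intercellular moves

parts, machines, cells, scens = range(2), range(4), range(2), range(2)
z1 = sum(
    p[s] * (1 - a[s][i][j]) * x[i][k] * y[j][k]
    for s in scens for i in parts for j in machines for k in cells
)
n = [
    sum(a[s][i][j] * x[i][k] * (1 - y[j][k])
        for i in parts for j in machines for k in cells)
    for s in scens
]
z2 = sum(p[s] for s in scens if n[s] >= N)
print(f"Z1={z1:.2f}, moves per scenario={n}, Z2={z2:.2f}")
```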

Model 2: Applying Queuing Theory to the CMS Problem

In this section, we formulate a CMS problem as a queuing system, assuming a birth-death process with constant arrival (birth) and service-completion (death) rates. The role of the birth-death process in automated manufacturing systems is described in detail in Viswanadham and Narahari (1992). Specifically, let λ and μ be the arrival and service rates of parts, respectively, per unit time. If the arrival rate is greater than the service rate, the queue grows without bound. The ratio of λ to μ is called the utilization factor, or the probability that a machine is busy, and is defined as ρ = λ/μ; therefore, for a system in steady state this ratio must be less than one. In this research, we assume an M/M/1 queue for each machine in the CMS, where each part arrives at the cells with rate λ_i and parts are served by the machines. Since different parts (different customers) are operated on each machine, each with a different arrival rate, ρ is computed for each machine (server) using the following property. Figure 2 illustrates the modeling of a cellular manufacturing system by the queuing theory approach.

Figure 2. A CMS problem and queuing theory framework (Ghezavati and Saidi-Mehrabad 2011)



Property 1. The minimum of independent exponential random variables is also exponential. Let F_1, F_2, …, F_n be independent exponential random variables with parameters λ_1, λ_2, …, λ_n, and let F_min = min{F_1, F_2, …, F_n}. Then for any t ≥ 0:

P(F_min > t) = P(F_1 > t) × P(F_2 > t) × … × P(F_n > t) = e^(−λ_1 t) · e^(−λ_2 t) ⋯ e^(−λ_n t) = e^(−(λ_1 + λ_2 + … + λ_n) t)
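A quick Monte Carlo check of Property 1 (the rates and t below are arbitrary choices, not values from the chapter) compares the simulated tail probability of the minimum with the closed form above:

```python
# Monte Carlo sanity check of Property 1: the minimum of independent
# exponentials is exponential with the summed rate.
import math
import random

rates, t, trials = [0.5, 1.0, 1.5], 0.8, 200_000
hits = sum(
    min(random.expovariate(r) for r in rates) > t for _ in range(trials)
)
print(f"simulated   P(Fmin > t) = {hits / trials:.4f}")
print(f"theoretical e^(-sum*t)  = {math.exp(-sum(rates) * t):.4f}")
```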

An interesting implication of this property for inter-arrival times is discussed in Hillier and Lieberman (1995). Suppose n types of customers, where the ith type has an exponential inter-arrival time distribution with parameter λ_i, arrive at a queuing system, and assume an arrival has just taken place. By the memoryless property of the exponential distribution, the time remaining until the next arrival is also exponential. Using the property above, the inter-arrival time of the entire queuing system, or the effective arrival rate (the minimum among all inter-arrival times), has an exponential distribution with parameter:

λ_eff = Σ_{i=1..N} λ_i

Hence, the utilization factor, or the probability that machine j is busy, is the effective arrival rate divided by the service rate:

ρ_j = λ_eff / μ_j = (Σ_{i=1..N} λ_i) / μ_j    (8)

Chance Constrained Programming

Since both the arrival times and the service times are uncertain, the amount of time each customer spends at a server is uncertain too. In order to prevent long waiting times for each customer, a chance constraint must be included in the formulation. Note that the distribution function of the total time a customer spends in an M/M/1 system is:

P(W_s ≥ t) = e^(−μ(1 − ρ)t)    (9)

Proof: Assume there are N customers in the system when a new customer arrives. By conditional probability:

P(W_s ≥ t) = Σ_{n=0..∞} P(W_s ≥ t | N = n) × P(N = n)    (10)

The total time the new customer has to wait in the queue is:

W_q = F_1 + F_2 + … + F_n    (11)

where F_i denotes the service time of customer i, so:

W_s = W_q + F_{n+1}    (12)

where F_{n+1} denotes the service time of the newly arrived customer. The sum of n + 1 independent exponential random variables with rate μ is an Erlang random variable with parameters n + 1 and μ, so:

P(W_s ≥ t | N = n) = P(Σ_{i=1..n+1} F_i > t) = ∫_t^∞ μ e^(−μy) (μy)^n / n! dy    (13)

Note that the probability of there being n customers in an M/M/1 system is:

p_n = ρ^n (1 − ρ),  where ρ = λ / μ    (14)

Based on Equations (13) and (14), Equation (10) can be computed as:

P(W_s ≥ t) = Σ_{n=0..∞} ρ^n (1 − ρ) ∫_t^∞ μ e^(−μy) (μy)^n / n! dy    (15)
           = μ(1 − ρ) ∫_t^∞ e^(−μy) Σ_{n=0..∞} ρ^n (μy)^n / n! dy    (16)

Also, from the exponential series, we have:

Σ_{n=0..∞} ρ^n (μy)^n / n! = e^(ρμy) = e^(λy)    (17)

Substituting Equation (17) into Equation (16) proves Equation (9). It follows that W_s has an exponential distribution with parameter μ − λ. In order to satisfy the service level, this probability must be at most α, so the chance constraint is P(W_s ≥ t) ≤ α. To linearize this nonlinear constraint, the following steps are performed:

P(W_s ≥ t) ≤ α    (18)
⇒ e^(−μ(1 − ρ)t) ≤ α    (19)
⇒ −μ(1 − ρ)t ≤ Ln(α)    (20)

The resulting constraint states that a customer remains in the system longer than the critical time t with probability at most α.

Property 2. If n types of customers with different arrival rates λ_i have to visit a server to receive service, then the probability that a random customer visiting the server is of type i is:

p_i = λ_i / Σ_j λ_j    (21)
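The linearized test (20) can be checked numerically; the rates, critical time, and service level below are assumptions chosen only for illustration:

```python
# Illustrative check of the chance-constraint linearization (18)-(20):
# P(Ws >= t) = exp(-mu*(1-rho)*t) <= alpha  iff  -mu*(1-rho)*t <= ln(alpha).
import math

mu, lam, t, alpha = 12.0, 9.0, 0.5, 0.05   # hypothetical rates and limits
rho = lam / mu
tail_prob = math.exp(-mu * (1 - rho) * t)
linear_test = -mu * (1 - rho) * t <= math.log(alpha)
print(f"P(Ws >= t) = {tail_prob:.4f}, satisfied: {tail_prob <= alpha}, "
      f"linearized test agrees: {linear_test == (tail_prob <= alpha)}")
```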

In the concerned model, the characteristics of a Jackson service network are applied. In a Jackson network, each customer has to visit multiple servers in order to complete its service stages; for example, each part visits several machines to complete its operation sequence. In such a network, the input rate of the machine performing a part's first operation equals the arrival rate of the part to the system, while the input rate of the machine performing the second operation equals the output rate of the previous server (machine). Similarly, the input rate of the machine performing the third operation equals the exit rate of the second machine, and so on for the remaining machines. In a cellular manufacturing problem formulated as a queuing system, each part visits machines, possibly in multiple cells, according to its routing in order to receive service. For each machine, the effective input rate is made up of two elements: the first is the sum of the arrival rates of parts that visit the machine as their first operation; the second is the sum of the input rates of parts that visit the machine at their second or later operations, which equals the output rate of the preceding machine. Figure 3 illustrates the difference between the arrival rates at the machines for a specific part. In this model, this procedure is applied to compute the effective input rate of each machine. A part-machine matrix in which the operation sequences of the parts are determined is applied here; this helps us formulate the problem as a Jackson network. Each element of this matrix is defined as follows:


Figure 3. Arrival rate for part 1 into the different machines based on the routing

a_ik = j if the kth operation of part i is completed by machine j; 0 otherwise
b_ij = k if part i visits machine j to complete its kth operation; 0 otherwise
z_ij = 1 if the operation on machine j is the first operation of part i; 0 otherwise
c_ij = 1 if part i needs to be processed on machine j; 0 otherwise

Other parameters are defined as follows:

λ_i: arrival rate of part i to the manufacturing system
μ_j: service rate of machine j (1/μ_j denotes the average operation time on machine j)
p_ij: probability that a random part of type i leaves machine j
β: penalty rate applied to the arrival process if an intercellular movement occurs. It is assumed that if an operation of a part has to be transferred to another cell (an intercellular movement), the arrival rate is multiplied by β, which accounts for the transfer time as well as the waiting time between cells.
λ_j^eff: effective arrival rate for machine j

Based on the above definitions, λ_j^eff is computed by the following equation:

λ_j^eff = Σ_{i=1..m} (z_ij × λ_i) + Σ_{i=1..m} (1 − z_ij) × c_ij × λ^eff_{a(i, b_ij − 1)} × p_{i, a(i, b_ij − 1)}

In the above equation, two fractions are considered in order to compute the effective input rate for each machine: the first is the sum of the arrival rates of parts that visit machine j as their first operation; the second is the sum of the input rates of parts that visit the machine at their second or later operations, which equals the output rate of the previous machine. By the defined parameters, the operation completed by machine j is operation b_ij, so the previous operation is b_ij − 1. Finally, according to the definition of a_ik (the machine that completes the kth operation of part i), the machine completing the previous operation of part i is a(i, b_ij − 1). The second term of the above equation therefore computes the effective arrival rate for parts visiting machine j after their first operation as the effective arrival rate of the machine visited before machine j, multiplied by the probability that part i leaves that previous machine. For example, assume customers arrive at a book store according to a Poisson process with rate 10 per hour, where 60 percent are men and 40 percent are women. Then the number of men arriving at the store is Poisson with rate 10 × 0.6 per hour, and the number of women is Poisson with rate 10 × 0.4 per hour. Note that if an operation of part i on machine j requires an intercellular movement, machine j is penalized by increasing the arrival rate of part i: the rate is multiplied by β. Finally, the model must determine whether each operation needs an intercellular movement or not; operation j of part i requires an intercellular movement when machine j and part i are not located in the same cell. Based on this description, λ_j^eff is computed as shown in Equation (22):

λ_j^eff = Σ_{i=1..m} [ (x_ik × y_jk) × λ_i + x_ik × (1 − y_jk) × λ_i × β ] × z_ij
        + Σ_{i=1..m} p_{i, a(i, b_ij − 1)} × [ (x_ik × y_jk) × λ^eff_{a(i, b_ij − 1)} + x_ik × (1 − y_jk) × λ^eff_{a(i, b_ij − 1)} × β ] × c_ij × (1 − z_ij)    (22)
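The bookkeeping behind Equation (22) can be sketched in a few lines. The routings, rates, and cell assignments below are invented, and the leave probabilities p_ij are ignored for brevity, so this is a simplification of the full equation rather than an implementation of it:

```python
# Simplified sketch of effective-arrival-rate propagation along part
# routings: a downstream machine inherits the (possibly penalized) rate of
# its predecessor, and intercellular moves multiply the rate by beta.
routings = {"part1": [0, 1, 2], "part2": [0, 2, 1]}   # machine sequences
arrival = {"part1": 4.0, "part2": 3.0}                # external arrival rates
cell_of_machine = {0: 0, 1: 0, 2: 1}
cell_of_part = {"part1": 0, "part2": 1}
beta = 1.2                                            # intercellular penalty

eff = {m: 0.0 for m in cell_of_machine}
for part, seq in routings.items():
    rate = arrival[part]
    for machine in seq:
        if cell_of_machine[machine] != cell_of_part[part]:
            rate *= beta              # penalize the intercellular movement
        eff[machine] += rate          # rate entering this machine
print(eff)
```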

Mathematical Model

In this section, a mathematical model that optimizes cell formation decisions based on queuing theory is proposed. The objective function minimizes the total cost, including the underutilization cost. A chance constraint is also considered in order to prevent excessive waiting times of parts in the queue in front of each machine. As discussed, treating each machine as an M/M/1 system, the chance constraint derived in (18)-(20) enforces the required service level.

Min Z = Σ_i Σ_j Σ_k (1 − a_ij) × x_ik × y_jk    (23)

Subject to:

λ_j^eff as defined by Equation (22)    ∀j    (24)
ρ_j − λ_j^eff / μ_j = 0    ∀j    (25)
Σ_k x_ik = 1    ∀i    (26)
Σ_k y_jk = 1    ∀j    (27)
−μ_j × (1 − ρ_j) t ≤ Ln(α)    ∀j    (28)
ρ_j ≤ 1    ∀j    (29)
p_ij − (λ_i × c_ij) / (Σ_{r=1..m} λ_r × c_rj) = 0    ∀i, j    (30)
x_ik, y_jk ∈ {0, 1};  ρ_j, p_ij ≥ 0

Constraints (24) and (25) compute the effective arrival rate and the utilization factor for each machine, respectively. Constraint sets (26) and (27) assign each part and each machine to exactly one cell. Constraint set (28) guarantees satisfaction of the chance constraint for each machine: the probability that a part has to wait more than the critical time t is at most α. Constraint set (29) ensures that the utilization factor of each machine is less than one. Constraint set (30) determines the probability that a random part leaving machine j is of type i.

Model 3

Recently, Ghezavati and Saidi-Mehrabad (2010) proposed a stochastic cellular manufacturing problem where uncertainty is captured by discrete fluctuations in the processing times of parts on machines. The aim of their model is to optimize the scheduling cost (expected maximum tardiness cost) plus the cell formation costs concurrently. The mathematical model is reproduced in this part, and interested readers are referred to the paper for more details.

Parameters:

a_ij = 1 if part i is required to be processed on machine j; 0 otherwise
c_i: penalty cost of subcontracting part i
u_ij: cost of part i not utilizing machine j
M_max: maximum number of machines permitted in a cell
C_max: maximum number of cells permitted
p_s: probability that scenario s occurs
t_ijs: processing time of part i on machine j in scenario s
DD_i: due date of part i
pc: penalty cost per unit time of delay

Decision variables:

x_ik = 1 if part i is processed in cell k; 0 otherwise
y_jk = 1 if machine j is assigned to cell k; 0 otherwise
Z_is[r] = 1 if part i is assigned to sequence position [r] in scenario s; 0 otherwise

F_[r]ks: time at which processing of the part in sequence position [r] ends in cell k under scenario s
FD_[r]ks: due date of the part in sequence position [r] in cell k under scenario s
L_[r]ks: tardiness of the part in sequence position [r] in cell k under scenario s
ML_s: maximum tardiness occurring in scenario s
D_iks: total processing time that part i needs in cell k under scenario s
T_[r]ks: total processing time of the part in sequence position [r] assigned to cell k under scenario s

CF decisions are scenario-independent: they must be made before any scenario occurs, they are based on the parts' similarities in processing, and they are independent of the processing-time quantities. Scheduling decisions are scenario-dependent; thus the Z, D, T, FD, L, ML, and F variables are indexed by scenario, since they are made after the scenario is realized and the processing times are observed.

Mathematical Model (Ghezavati and Saidi-Mehrabad, 2010)

Minimize Z = Σ_s pc × p_s × ML_s + Σ_i Σ_j Σ_k c_i × a_ij × x_ik × (1 − y_jk) + Σ_i Σ_j Σ_k u_ij × (1 − a_ij) × x_ik × y_jk    (31)

Subject to:

Σ_k x_ik = 1    ∀i    (32)
Σ_k y_jk = 1    ∀j    (33)
Σ_r Z_is[r] = 1    ∀i, s    (34)
Z_is[r+1] ≤ Σ_i x_ik × Z_is[r]    ∀k, s, r    (35)
D_iks = Σ_j a_ij × t_ijs × x_ik × y_jk    ∀i, k, s    (36)
Σ_i x_ik × Z_is[r] ≤ 1    ∀r, s, k    (37)
T_[r]ks = Σ_i Z_is[r] × D_iks    ∀k, s, r    (38)
F_[r]ks = Σ_{α=1..[r]} T_[α]ks    ∀k, s, r    (39)
FD_[r]ks = Σ_i x_ik × Z_is[r] × DD_i    ∀k, s, r    (40)
L_[r]ks = max{0, F_[r]ks − FD_[r]ks}    ∀k, s, r    (41)
ML_s = max{L_[r]ks : k = 1, …, C and [r] = 1, …, P}    ∀s    (42)
Σ_j y_jk ≤ M_max    ∀k    (43)
x_ik, y_jk, Z_is[r] ∈ {0, 1}    (44)
D_iks, T_[r]ks, F_[r]ks, FD_[r]ks ≥ 0    (45)

Constraint sets (32), (33), and (43) are the cell formation constraints, while constraint sets (34) through (42) perform the scheduling computations and logical restrictions.
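A compact sketch of the scenario-dependent scheduling arithmetic behind constraints (38)-(42), on invented data for a single cell with a fixed sequence, computes the maximum tardiness ML_s and the expected penalty term of objective (31):

```python
# Scenario-dependent tardiness bookkeeping (hypothetical numbers): given
# per-scenario processing times and a fixed sequence in one cell, compute
# completion times, tardiness, and ML_s, then the expected penalty.
sequence = ["p1", "p2", "p3"]                 # fixed order in the cell
due = {"p1": 5.0, "p2": 7.0, "p3": 9.0}
proc = {                                      # processing time per scenario
    "s1": {"p1": 3.0, "p2": 4.0, "p3": 2.0},
    "s2": {"p1": 5.0, "p2": 3.0, "p3": 4.0},
}
probs = {"s1": 0.6, "s2": 0.4}
pc = 10.0                                     # tardiness penalty per unit time

expected_penalty = 0.0
for s, times in proc.items():
    finish, ml_s = 0.0, 0.0
    for part in sequence:
        finish += times[part]                               # F_[r]ks
        ml_s = max(ml_s, max(0.0, finish - due[part]))      # L_[r]ks, ML_s
    expected_penalty += pc * probs[s] * ml_s
print(f"expected max-tardiness penalty = {expected_penalty:.2f}")
```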

Linearization Approaches

In the above formulation, binary and continuous variables are multiplied by each other, so nonlinear terms appear in the formulation process. Two common types of nonlinear terms are:

Type 1: Pure 0-1 polynomial terms, in which n binary variables are multiplied together, such as Z = x_1 × x_2 × … × x_n.

Type 2: Mixed 0-1 polynomial terms, in which n binary variables are multiplied together and the product is multiplied by a continuous variable, such as Z = x_1 × x_2 × … × x_n × y.

For linearizing Type 1, the following auxiliary constraints can be introduced:

Z ≤ x_i    i = 1, 2, …, n
Z ≥ Σ_{i=1..n} x_i − (n − 1)

For linearizing Type 2 in a minimization problem, the following auxiliary constraints are applied:

P1 (nonlinear problem):
Min Z = x_1 × x_2 × … × x_n × y
s.t. L(X, Y)

P2 (linear form):
Min Z
s.t.
Z ≥ y − U × (n − Σ_{i=1..n} x_i)
Z ≥ 0
L(X, Y)

where U is an upper bound on the continuous variable y, and Z therefore becomes a continuous variable (Ghezavati and Saidi-Mehrabad 2011); a small verification sketch follows.
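Both linearizations can be verified by brute force over all binary combinations; the sketch below assumes a value for the upper bound U and checks that the auxiliary constraints reproduce the original products:

```python
# Brute-force verification of the two linearizations for n = 3 binaries
# (a sketch; U is an assumed upper bound on y). Type 1: Z <= x_i and
# Z >= sum(x) - (n - 1) must pin Z to x1*x2*x3.
from itertools import product

n = 3
for xs in product((0, 1), repeat=n):
    z_upper = min(xs)                       # from Z <= x_i for every i
    z_lower = max(0, sum(xs) - (n - 1))     # from Z >= sum(x) - (n - 1)
    assert z_upper == z_lower == (xs[0] * xs[1] * xs[2])

# Type 2: in a minimization, Z >= y - U*(n - sum(x)) together with Z >= 0
# makes the smallest feasible Z equal y when all x_i = 1, and 0 otherwise.
U, y = 100.0, 7.5
for xs in product((0, 1), repeat=n):
    z_min = max(0.0, y - U * (n - sum(xs)))
    assert z_min == (y if all(xs) else 0.0)
print("both linearizations verified on all binary combinations")
```

CONCLUSION

In summary, this chapter established the basic principles of uncertainty in cellular manufacturing systems. Since the CMS problem is affected by tactical decisions such as scheduling, production planning, layout considerations, utilization aspects, and many other factors, each CMS problem must be aggregated with tactical decisions in order to achieve maximum efficiency. Tactical decisions involve many uncertain parameters, and since strategic decisions are influenced by tactical decisions, CMS decisions are also mixed with uncertainty. Several popular approaches can analyze uncertain problems: stochastic optimization, discrete planning with a set of scenarios, continuous optimization, mean value models, mean-variance models, max-probability optimization, chance-constrained programming, queuing theory and Markov chains, and robust optimization. This chapter has proposed two sample mathematical models and one published model (Ghezavati and Saidi-Mehrabad, 2010). It was assumed that the processing routings, the inter-arrival and service times, and the processing times are uncertain; stochastic optimization and queuing theory were used to resolve uncertainty in the formulation process. A complete survey of meta-heuristic methods for solving CMS problems can be found in Ghosh et al. (2011). For future directions, the following developments are suggested for researchers and readers: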

• Uncertain processing times optimized by a robust approach in continuous or discrete space;
• Uncertain capacities optimized by a stochastic or robust approach in discrete space;
• Uncertain machine availability optimized by stochastic or queuing theory approaches in continuous or discrete space;
• Aggregating the CMS problem with logistics considerations in uncertain environments;
• Aggregating the CMS problem with production planning aspects in uncertain environments;
• Aggregating the CMS problem with layout considerations in uncertain environments;
• Aggregating the CMS problem with scheduling concerns in uncertain environments.

REFERENCES

Andres, C., Lozano, S., & Adenso-Diaz, B. (2007). Disassembly sequence planning in a disassembly cell. Robotics and Computer-Integrated Manufacturing, 23(6), 690–695. doi:10.1016/j.rcim.2007.02.012

Aryanezhad, M. B., & Aliabadi, J. (2011). A new approach for cell formation and scheduling with assembly operations and product structure. International Journal of Industrial Engineering, 2, 533–546. doi:10.5267/j.ijiec.2010.06.002

Asgharpour, M. J., & Javadian, N. (2004). Solving a stochastic cellular manufacturing model using genetic algorithm. International Journal of Engineering, Transactions A: Basics, 17(2), 145–156.

Balakrishnan, J., & Cheng, C. H. (2007). Dynamic cellular manufacturing under multi-period planning horizon. European Journal of Operational Research, 177(1), 281–309. doi:10.1016/j.ejor.2005.08.027

Ghezavati, V. R., & Saidi-Mehrabad, M. (2010). Designing integrated cellular manufacturing systems with scheduling considering stochastic processing time. International Journal of Advanced Manufacturing Technology, 48(5-8), 701–717. doi:10.1007/s00170-009-2322-2

Ghezavati, V. R., & Saidi-Mehrabad, M. (2011). An efficient hybrid self-learning method for stochastic cellular manufacturing problem: A queuing-based analysis. Expert Systems with Applications, 38, 1326–1335. doi:10.1016/j.eswa.2010.07.012

Ghezavati, V. R., & Saidi-Mehrabad, M. (2011). An efficient linearization technique for mixed 0-1 polynomial problems. Journal of Computational and Applied Mathematics, 235(6), 1730–1738. doi:10.1016/j.cam.2010.08.009

Ghosh, T., Sengupta, S., Chattopadhyay, M., & Dan, P. K. (2011). Meta-heuristics in cellular manufacturing: A state-of-the-art review. International Journal of Industrial Engineering Computations, 2(1), 87–122. doi:10.5267/j.ijiec.2010.04.005

Heragu, S. (1997a). Facilities design (p. 316). Boston, MA: PWS Publishing Company.

Heragu, S. (1997b). Facilities design (p. 345). Boston, MA: PWS Publishing Company.

Hillier, F. S., & Lieberman, G. J. (1995). Introduction to operations research (6th ed.). New York, NY: McGraw-Hill.

Hosseini, M. M. (2000). An inspection model with minimal and major maintenance for a system with deterioration and Poisson failures. IEEE Transactions on Reliability, 49(1), 88–98. doi:10.1109/24.855541

Hurley, S. F., & Clay Whybark, D. (1999). Inventory and capacity trade-off in a manufacturing cell. International Journal of Production Economics, 59(1), 203–212. doi:10.1016/S0925-5273(98)00101-7

Kuroda, M., & Tomita, T. (2005). Robust design of a cellular-line production system with unreliable facilities. Computers & Industrial Engineering, 48(3), 537–551. doi:10.1016/j.cie.2004.03.004

Mahdavi, I., Javadi, B., Fallah-Alipour, K., & Slomp, J. (2007). Designing a new mathematical model for cellular manufacturing system based on cell utilization. Applied Mathematics and Computation, 190, 662–670. doi:10.1016/j.amc.2007.01.060

Mobasheri, F., Orren, L. H., & Sioshansi, F. P. (1989). Scenario planning at Southern California Edison. Interfaces, 19(5), 31–44. doi:10.1287/inte.19.5.31

Mulvey, J. M. (1996). Generating scenarios for the Towers Perrin investment system. Interfaces, 26(2), 1–15. doi:10.1287/inte.26.2.1

Papaioannou, G., & Wilson, J. M. (2009). Fuzzy extensions to integer programming of cell formation problem in machine scheduling. Annals of Operations Research, 166(1), 1–19. doi:10.1007/s10479-008-0423-1

Ravichandran, K. S., & Chandra Sekhara Rao, K. (2001). A new approach to fuzzy part family formation in cellular manufacturing system. International Journal of Advanced Manufacturing Technology, 18(8), 591–597. doi:10.1007/s001700170036

Rosenhead, J., Elton, M., & Gupta, S. K. (1972). Robustness and optimality as criteria for strategic decisions. Operational Research Quarterly, 23(4), 413–431.

Safaei, N., Saidi-Mehrabad, M., Tavakkoli-Moghaddam, R., & Sassani, F. (2008). A fuzzy programming approach for cell formation problem with dynamic & uncertain conditions. Fuzzy Sets and Systems, 159(2), 215–236. doi:10.1016/j.fss.2007.06.014

Shanker, R., & Vrat, P. (1998). Post design modeling for cellular manufacturing system with cost uncertainty. International Journal of Production Economics, 55(1), 97–109. doi:10.1016/S0925-5273(98)00043-7

Siemiatkowski, M., & Przybylski, W. (2007). Modeling and simulation analysis of process alternative in cellular manufacturing of axially symmetric parts. International Journal of Advanced Manufacturing Technology, 32(5-6), 516–530. doi:10.1007/s00170-005-0366-5

Snyder, L. V. (2006). Facility location under uncertainty: A review. IIE Transactions, 38, 537–554. doi:10.1080/07408170500216480

Solimanpur, M., Vrat, P., & Shankar, R. (2004). A heuristic to optimize makespan of cell scheduling problem. International Journal of Production Economics, 88, 231–241. doi:10.1016/S0925-5273(03)00196-8

Song, S.-J., & Hitomi, K. (1996). Determining the planning horizon and group part family for flexible cellular manufacturing. Production Planning and Control, 7(6), 585–593. doi:10.1080/09537289608930392

Sun, Y.-L., & Yih, Y. (1996). An intelligent controller for manufacturing cell. International Journal of Production Research, 34(8), 2353–2373. doi:10.1080/00207549608905029

Szwarc, D., Rajamani, D., & Bector, C. R. (1997). Cell formation considering fuzzy demand and machine capacity. International Journal of Advanced Manufacturing Technology, 13(2), 134–147. doi:10.1007/BF01225760

Tsai, C. C., Chu, C. H., & Barta, T. (1997). Analysis and modeling of a manufacturing cell formation problem with fuzzy integer programming. IIE Transactions, 29(7), 533–547. doi:10.1080/07408179708966364

Venkataramanaiah, S. (2007). Scheduling in cellular manufacturing systems: A heuristic approach. International Journal of Production Research, 1, 1–21.

Viswanadham, N., & Narahari, Y. (1992). Performance modeling of automated manufacturing systems. Englewood Cliffs, NJ: Prentice Hall.

Wu, X. D., Chu, C. H., Wang, Y. F., & Yan, W. L. (2006). Concurrent design of cellular manufacturing systems: A genetic algorithm approach. International Journal of Production Research, 44(6), 1217–1241. doi:10.1080/00207540500338252

Yang, J., & Deane, R. H. (1993). Setup time reduction and competitive advantage in a closed manufacturing cell. European Journal of Operational Research, 69(3), 413–423. doi:10.1016/0377-2217(93)90025-I

This work was previously published in Operations Management Research and Cellular Manufacturing Systems: Innovative Methods and Approaches, edited by Vladimir Modrák and R. Sudhakara Pandian, pp. 298-316, copyright 2012 by Business Science Reference (an imprint of IGI Global).



Chapter 32

Multi-Modal Assembly-Support System for Cellular Manufacturing

Feng Duan Nankai University, China
Jeffrey Too Chuan Tan The University of Tokyo, Japan
Ryu Kato The University of Electro-Communications, Japan
Chi Zhu Maebashi Institute of Technology, Japan
Tamio Arai The University of Tokyo, Japan

ABSTRACT

Cellular manufacturing meets diversified production and quantity requirements flexibly; however, its efficiency depends mainly on the operators' working performance. In order to improve this efficiency, an effective assembly-support system should be developed to assist operators during the assembly process. In this chapter, a multi-modal assembly-support system (MASS) is proposed, which aims to support operators in both the information and physical aspects. To protect operators in the MASS system, five main safety designs, at both the hardware and control levels, are also discussed. With the information and physical support of the MASS system, the assembly complexity and the burden on the assembly operators are reduced. To evaluate the effect of MASS, a group of operators was asked to execute a cable harness task. From the experimental results, it can be concluded that by using this system the operators' assembly performance is improved and their mental workload is reduced; consequently, the efficiency of cellular manufacturing is improved.

DOI: 10.4018/978-1-4666-1945-6.ch032



INTRODUCTION Traditionally, when the mass production was major in industry production, various assembly systems had been designed as automated manufacturing lines, which are aimed to produce a single specific product without much flexibility. Nowadays, the tastes of consumers change from time to time; therefore, traditional automated manufacturing lines cannot meet the flexibility and efficiency at the same time. To solve this problem, cellular manufacturing system, also called cell production system, has been introduced. In this system, an operator manually assembles each product from start to finish (Isa & Tsuru, 2002; Wemmerlov & Johnson, 1997). The operator enables a cellular manufacturing system to meet the diversified production and quantity requirements flexibly. However, due to the negative growth of the population in Japan, it will become difficult to maintain the cellular manufacturing system with enough skilled operators in the near future. How to improve the assembly performance of the operators and how to reduce their assembly burden are two important factors, which limit the efficiency of the cellular manufacturing system. Without an effective supporting system, it is difficult to maintain the cellular manufacturing system in Japan. Taking the advantages of the operators and robots, but avoiding their disadvantages at the same time, a new cellular manufacturing system was proposed, namely, the human-robot collaboration assembly system (Duan, 2008). In this human-robot collaboration assembly system, the operators are only required to execute the complicated and flexible assembly tasks that need human assembly skills; while the robots are employed to execute the monotonous and repeated tasks, such as the repetitions of parts feeding during assembly process (Arai, 2009). To make this system has the applicability to assemble a variety of products in different manufacturing circumstances, the following assembly sequence is assumed: each assembly part is collected from


the tray shelf by manipulators; all the parts are automatically fed to the operator on a tray as a kit of parts; the operator grasps each part and assembles it to form a final product; the assembled product is transferred out to the next station, and so on. In the following, a multi-modal assembly-support system (MASS) is introduced, which aims to support an assembly operator in a cellular manufacturing system from both the information side and the physical side while satisfying actual manufacturing requirements. The MASS system utilizes robots to support the operator and several information devices to monitor and guide the operator during the assembly process. Since it is a human-robot collaboration assembly system, a safety strategy must be designed to protect the operator with a reasonable cost-benefit balance in the real production line. The remainder of the chapter is organized as follows: First, the background information and related studies are introduced. Then, the entire MASS system and its subsystems are briefly described. After that, the two manipulators and the mobile base used to feed assembly parts to the operator are introduced in the physical support part. The assembly information support part contains a discussion of a multimedia-based assembly table and the corresponding devices. Safety standards and safety design are presented in the safety strategy part. Taking a cable harness task as an example, the effect of the MASS system is then evaluated. Finally, the conclusion and future work are given.

PREVIOUS RELATED STUDIES

To improve the efficiency of the cellular manufacturing system, various cellular manufacturing systems have been designed to improve the assembly performance of the operators and reduce their assembly burden.


Seki (2003) invented a production cell called the “Digital Yatai,” which monitors the assembly progress and presents information about the next assembly process. Using a semi-transparent head-mounted display, Reinhart (2003) developed an augmented reality (AR) system to supply information to the operator. These studies support the operator from the information aspect. To reduce the operator's physical burden and improve the assembly precision, Hayakawa (1998) employed a manipulator to grasp the assembly parts during the assembly process. This improved the assembly cell in the physical support aspect. Sugi (2005) aimed to support the operators from both the information side and the physical side, and developed an attentive workbench (AWB) system. In this system, a projector was employed to provide assembly information to the operator; a camera was used to detect the direction of the operator's pointing finger; and several self-moving trays were used to deliver parts to the operator. Although the AWB achieved its goal of supporting operators in both the information and physical aspects, the direct supporting devices are just a projector and several self-moving trays, which are general-purpose instruments that cannot meet actual manufacturing requirements. In the coming aging society, it will be impossible to maintain working efficiency if everything is done manually by the operator in the current cellular manufacturing system. In order to increase working efficiency, many researchers have used robot technologies to provide support to the operator (Kosuge, 1994; Bauer, 2008; Oborski, 2004). According to these studies, human-robot collaboration has the potential to improve the operator's working efficiency. However, before implementing this proposal, the most fundamental issue is the safety strategy, which must allow the operators and the robots to execute collaborative work in close proximity. Human-robot collaboration has been studied in many aspects but has not been utilized in real manufacturing systems. This is mainly because

safety codes on industrial robots (ISO 12100; ISO 10218-1, 2006) prohibit the coexistence of an operator in the same space as a robot. According to the current industrial standards and regulations, in a human-robot collaboration system a physical barrier must be installed to separate the operator and the assisting robot. Under this condition, the greatest limitation is that close-range assisted collaboration is impossible. Based on the definition of Helms (2002), there are four types of human-robot collaboration: independent operation, synchronized cooperation, simultaneous cooperation, and assisted cooperation. Assisted cooperation is the closest type of collaboration, in which the same workpiece is processed by the operator and the robot together. In this kind of human-robot collaboration, the operator works close to the working envelope of the assisting robot without physical separation, so that both of them can work on the same workpiece in the same process. The most distinctive concept of this study is that the assisting robot is active and able to work independently as a robot manipulator. The advantage of this collaboration is that it provides human-like assistance to the operator, similar to the cooperation between two operators. This kind of assistance can improve working efficiency by automating a portion of the work, enabling the operator to focus only on the portion of the work that requires human skill and flexibility. However, since an active robot is involved, this kind of collaboration is extremely dangerous and any mistake can be fatal (Beauchamp & Stobbe, 1995). The challenge of this research work is to design an effective assembly-support system that can support the operator in both the physical and information aspects. During the assembly process, employing an assisting robot is an effective method to reduce the operator's assembly burden while improving working efficiency. This raises the safety issue in this kind of close-range active human-robot collaboration. However, there



are no industrial safety standards and regulations that govern it. Besides the design of the assembly-support system, the scope of this work also covers both a safety design study and the development of a prototype production cell in cellular manufacturing.

MULTI-MODAL ASSEMBLY-SUPPORT SYSTEM

Structure of the Entire System

Following the fundamental idea that sharing the assembly tasks between robots and operators can maximize their respective advantages, the MASS system was designed; its subsystems are shown in Figure 1 as a structure view and in Figure 2 as a system configuration.

Figure 1. Structure of the entire MASS system

The entire MASS system is divided into a physical support part and an assembly information support part, as shown in Figure 1.

1. Physical Supporting Part: The physical supporting part is aimed at supporting operators in the physical aspect. It is composed of two manipulators with six degrees of freedom and a mobile base, which have two functions: one is to deliver assembly parts from a tray shelf to an assembly table; the other is to grasp the assembly parts and prevent any wobbling during the assembly process.

2. Information Supporting Part: The assembly information supporting part is designed to aid operators in the assembly information aspect. An LCD TV, a speaker, and a laser pointer are employed to provide assembly information to guide the operator.

3. Safety Control Part: To guarantee the operator's safety during the assembly process, vital sensors are used to monitor the operator's physical condition, and a series of safety strategies is used to protect the operator from injury by the manipulators. This part controls the collaboration between a robot and an operator (see also Figure 2).

Figure 2. Configuration of MASS system

In the developed MASS system, there are two stations connected through an intelligent part tray, as shown in Figure 2; on this tray, all the necessary parts are fed into the assembly station, and the assembled products are transferred out of the assembly station for shipment.

1. Part Feeding Station: Only robots work here. This station is mainly in charge of part handling, such as bin picking, part feeding, kitting, and part transferring.

2. Assembly Station: An operator executes the assembly tasks with the aid of the robots. Supporting information from the MASS system is provided to increase the operator's assembly efficiency.

Figure 1 illustrates the setup of the MASS system, in which an operator assembles a product on the workbench in the area of the assembly station. The operator is supported with assembly information and with physical holding of parts for assembly. In this study, the sample product to assemble is a cable harness with several connectors and fastener plates. Even experienced operators may spend about 15 minutes finishing this assembly task.

Simulator of the Entire System

To reduce the design period, a simulator of the entire system was developed in this study based on ROBOGUIDE (FANUC ROBOGUIDE) and OpenGL (Neider, 1993), as shown in Figure 3. This simulator can not only reproduce the actual motion of the manipulators but also predict collisions in the workspace.



Figure 3. Simulator of the entire MASS system

Since the MASS system is a human-robot cooperative assembly system, for the operator's safety the distance between the manipulators and the operator should be optimized to prevent collisions between them. Furthermore, the moving trajectories of the manipulators should also be optimized to prevent collisions between the manipulators themselves. To shorten the development period, all of these optimization assignments are first done in the simulator and then evaluated in the actual MASS system. With the aid of this simulator, the distance between the manipulators and the operator can be adjusted easily, and the moving trajectories of the manipulators' end points can be reproduced conveniently during the manipulators' motion. Therefore, based on the simulation results, the actual system could be constructed conveniently.
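As a minimal illustration of the kind of clearance test such a simulator must perform, the following C++ sketch checks the distance between two sphere-approximated bodies, e.g. a manipulator end point and the operator; this is a deliberately simplified stand-in, not ROBOGUIDE's actual collision engine.

```cpp
#include <cmath>

// Simple clearance test between two sphere-approximated bodies, e.g. a
// manipulator end point and the operator. Intentionally simplified; a real
// simulator checks full link geometry along the planned trajectory.
struct Sphere { double x, y, z, radius; };

bool collides(const Sphere& a, const Sphere& b, double safetyMargin) {
    const double dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    const double dist = std::sqrt(dx * dx + dy * dy + dz * dz);
    return dist < a.radius + b.radius + safetyMargin;
}
```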


Physical Support

To increase the physical support provided by the MASS system, two manipulators with six degrees of freedom are installed on a mobile base and used to deliver assembly parts to the operator, as shown in Figures 1-4. A CCD camera with an LED light is attached to each manipulator for recognition of the picking target in a scrambled part bin. The manipulators are utilized in the part feeding station to:

1. Draw a part bin from the part shelves;
2. Pick parts from the bin one by one;
3. Kit parts onto a tray;
4. Visually check the parts in a tray.

The parts are efficiently fed by the manipulators, because one manipulator hangs up a bin while the other grasps a part out of it, as an operator would.


Figure 4. Assembly operations with the aid of manipulators

Since the bin-picking system can work 24 hours a day, it enables high productivity. The base carries a few trays and moves to the assembly station, where it docks at the electric charging connector. In the assembly station, the operator continuously assembles parts one by one as they are transferred by one of the mobile twin manipulators. To increase the precision of assembly and reduce the operator's burden, one manipulator can grasp an assembly part to prevent wobbling during assembly, and the operator executes the assembly task with the manipulator's assistance, as shown in Figure 4. Obviously, the assisting manipulators move near the operator during the assembly process. To achieve this collaboration, the manipulators have to penetrate the operator's area. Since such penetration is prohibited by the regulations on industrial robots (ISO 12100), a new countermeasure must be developed. After finishing an assembly step, the operator pushes a footswitch to send a control command to the manipulators; the manipulators then provide the next assembly part to the operator, and the assembly information for the next step is given. Without this control command, the manipulators cannot move to the next step. Furthermore, the operator can stop the manipulators with an emergency button

when an accident occurs. These strategies enable the manipulators to support human operators in the physical aspect effectively and safely.
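As a rough illustration of this command-gated working sequence, the following C++ sketch models a step sequencer in which the manipulators advance only on the operator's footswitch confirmation, and an emergency button latches the system into a stopped state; all class and function names are hypothetical, not the actual MASS controller code.

```cpp
#include <atomic>
#include <chrono>
#include <cstdio>
#include <thread>

// Sketch of the footswitch-gated working sequence described above.
// Names and structure are illustrative assumptions only.
class StepSequencer {
public:
    void onFootswitch()    { stepRequested_ = true; }  // footswitch pressed
    void onEmergencyStop() { stopped_ = true; }        // emergency button
    void onReset()         { stopped_ = false; }       // reset button

    // The manipulators advance one assembly step only after the operator's
    // explicit confirmation; without the command they hold position.
    void run(int totalSteps) {
        int step = 0;
        while (step < totalSteps) {
            if (!stopped_ && stepRequested_.exchange(false)) {
                feedPartForStep(step);         // manipulator feeds next part
                showInstructionForStep(step);  // LCD/voice guidance switches
                ++step;
            }
            std::this_thread::sleep_for(std::chrono::milliseconds(10));
        }
    }

private:
    void feedPartForStep(int s)        { std::printf("feed part for step %d\n", s); }
    void showInstructionForStep(int s) { std::printf("show instructions for step %d\n", s); }

    std::atomic<bool> stepRequested_{false};
    std::atomic<bool> stopped_{false};
};
```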

Assembly Information Support

Previous studies, such as the Digital Yatai (Seki, 2003), have already shown that providing assembly information to the operator during the assembly process can not only improve assembly efficiency but also reduce assembly errors. Taking advantage of these previous studies, and also considering the characteristics of human cognition, an assembly information supporting system is designed to guide operators by indicating the next assembly sequence and/or an appropriate way of operation. The developed system has three major advantages:

1. Each assembly sequence is instructed step by step;
2. Considering the characteristics of human cognition, the assembly information can be provided in formats easily understandable for humans, including text, voice, movie, animation, and flashing emphasis marks;
3. The assembly information can be selected and provided to the operator according to his assembly skill level.

The total software system of the MASS system, shown in Figure 5, has been developed. It consists of three subsystems:

1. Multi-modal Assembly Skill TransfER (MASTER);
2. Multi-modal Assembly Information SupportER (MAISER);
3. Multi-modal Assembly FOSTER (MAFOSTER).

Figure 5. Software system of MASS system

MASS is designed to extract skill information from skilled operators by MASTER and to transfer it to novice operators by MAISER, as illustrated in Figure 5. Here, a human assembly skill model was proposed (Duan, 2009), which extracts and transfers human assembly skills as a cognition skill part and a motor skill part. For the cognition skill part, MASTER uses questionnaires to obtain the differences in cognition skills between skilled and novice operators. For the motor skill part, MASTER mainly utilizes a motion capture system to obtain the differences in motor skills between skilled and novice operators, especially in the assembly pose and assembly motion aspects (Duan, Tan, Kato, & Arai, 2009). MAISER provides understandable instructions to novice operators by displaying multi-modal information about assembly operations; it takes the role of the instruction phase. MAFOSTER controls the interface devices to organize a comfortable environment for the operator to execute the assembly task, as a foster does. MASTER works mainly off-line in a data-preparation phase, while the state of the operator is watched on-line to avoid bad motions and dangerous states (Duan, Tan, Kato, & Arai, 2009). The interface devices are installed as shown in Figures 1 and 4:

1. LCD TV: The horizontal assembly table with a built-in 37-inch LCD TV, shown in Figure 4, may be the first such application for assembly. Since it enables the operator to read the instructions without shifting his or her gaze in a different direction, assembly errors can be decreased. The entire assembly scheme is divided into several simple assembly steps, and the corresponding assembly information is written in PowerPoint slides (Zhang, 2008). During the assembly process, these PowerPoint slides are output to the LCD TV and switched with a footswitch.

2. Laser Pointer: Showing the assembly position to the operator is an effective way to reduce assembly mistakes. To this end, a laser pointer fixed in the environment is projected onto the task to indicate the accurate position of assembly, as shown in the left photo of Figure 4. The position can be changed by the motion of the manipulator. The operator can insert a wire into the instructed assembly position with the aid of the laser spot.

3. Audio Speakers: To help the operator easily understand the assembly information, a speaker and a wireless Bluetooth earphone are used to assist the operator with voice information.

4. Footswitch: During the assembly process, it is difficult for the operator to switch the PowerPoint slides with his hands. Therefore, footswitches are used, as shown in Figure 1. There are two kinds: footswitch A has three buttons, and footswitch B has one button. By stepping on the different buttons of footswitch A, the operator can move the PowerPoint slides forward or backward. By stepping on the button of footswitch B, the operator commands the manipulators to supply the necessary assembly parts, or to change the position and orientation of the assembly part during the assembly process.

5. Assembly Information: The assembly support information is provided to the operators to improve productivity by means of a good understanding of the assembly tasks and of skill transfer with audio-visual aids. As the software structure for the assembly task description is not discussed in this study, please refer to our papers (Duan, 2008; Tan, 2008). Applying Hierarchical Task Analysis (HTA), one assembly task is divided into several simpler assembly steps, whose corresponding information is stored in multimedia. Then the appropriate level of information is displayed on the LCD panel, as shown in Figure 6. In each PowerPoint slide, the assembly parts and assembly tools are illustrated with pictures. The assembly positions are noted with color marks. Following the assembly flow chart, videos showing the assembly motions of experienced operators appear to guide the novices in executing the assembly tasks. To facilitate the operator's understanding of the assembly process, the colors of the words in the slides are the same as the actual colors of the assembly parts; for example, there are a “blue cable” and a “grey cable” in Figure 6. In each slide, several design principles of data presentation are applied, such as the multimedia principle, the coherence principle, and the spatial contiguity principle (Mayer, 2001). In Figure 6, three types of information are displayed: (a) text instruction, (b) pictorial information, and (c) movie; the sequence of assembly is also illustrated. During the assembly process, the PowerPoint slides are output to the LCD TV and switched by the operator's foot.

Figure 6. Multimedia based assembly supporting information

6. Assembly Information Database: In this multimedia-based assembly supporting system, the assembly information is classified into paper, audio, and video files. The assembly guidance is concisely written in the paper files. Guidance for each assembly step is recorded in the audio files. After the standard motions of experienced operators are recorded and analyzed into primitive assembly motions, they are saved into video files. Tan (2008) set up an assembly information database to preserve all of these assembly information files and provide them to the operator depending on the situation. This database contains training data and assembly data: training data are designed for novices, and their assembly information files contain assembly details; assembly data are used to assist experienced operators by indicating the assembly sequence but not the assembly details. As a consequence, this system may help both novice and experienced operators to enter the workforce. All the operators who used the assembly table with the LCD gave the positive evaluation that the instructions on the LCD could be read easily and understood smoothly.
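As an illustration of how such a skill-dependent lookup might work, the following C++ sketch returns detailed training media for novices and only the sequence cue for experts; the types and file layout are assumptions for illustration, since the chapter does not specify the database schema.

```cpp
#include <string>
#include <vector>

// Skill-dependent instruction lookup, sketched after the database described
// above. The schema and file naming are assumptions for illustration only.
enum class SkillLevel { Novice, Expert };

struct InstructionSet {
    std::string textFile;                // concise written guidance
    std::string audioFile;               // per-step voice guidance
    std::vector<std::string> videoFiles; // recorded expert motions (novices only)
};

InstructionSet selectInstructions(int step, SkillLevel level) {
    InstructionSet s;
    s.textFile  = "step" + std::to_string(step) + ".txt";
    s.audioFile = "step" + std::to_string(step) + ".wav";
    if (level == SkillLevel::Novice) {
        // Training data: full assembly details, including motion videos.
        s.videoFiles.push_back("step" + std::to_string(step) + "_motion.mp4");
    }
    // Experts receive only the sequence indication (text/audio), no details.
    return s;
}
```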

Safety Strategy

The MASS system is a kind of human-robot cooperation system. Although employing assisting robots to support the operator can increase assembly efficiency and reduce the assembly burden, this collaboration can be extremely dangerous because an active robot is involved and any mistake can be fatal. To protect the operator during the assembly process, several safety designs covering both hardware and software are proposed and developed in this manufacturing system to achieve good robot-human collaboration. The fundamental concepts are:

1. Risk assessment by ISO regulation;
2. Area division by safety light curtains, as illustrated in Figure 7;
3. Speed/force limiter by servo controller;
4. Collision protectors by physical devices;
5. Collision detector by IP cameras;
6. Inherent safety theory.

Risk Assessment by ISO Regulation

Since there are no direct industrial safety standards and regulations that govern this type of close-range active human-robot collaboration, the safety design in this work is formulated by collective reference to related safety standards and regulations, first verifying the component systems' safety (non-collaboration safety) and then assessing the system safety as a whole (collaboration safety). Table 1 summarizes the industrial safety standards and regulations referred to in the mobile robot manipulator system development and in the total system development. This chapter mainly focuses on the discussion of human-robot collaboration safety; therefore, the non-collaboration safety of the component systems is omitted. However, it is important to bear in mind that the following safety designs for collaboration are built in accordance with the referred standards and regulations at the component level.


Table 1. Related safety standards and regulations

Related to mobile robot manipulators system development:
IEC 60364-4-41 (JIS C0364-4-41): Low-voltage electrical installations – Part 4-41: Protection for safety – Protection against electric shock
IEC 60364-7-717: Electrical installations of buildings – Part 7-717: Requirements for special installations or locations – Mobile or transportable units
IEC 61140 (JIS C0365): Protection against electric shock – Common aspects for installation and equipment
BS EN 1175-1: Safety of industrial trucks – Electrical requirements – Part 1: General requirements for battery powered trucks
ISO 10218-1 (JIS B8433-1): Robots for industrial environments – Safety requirements – Part 1: Robot

Related to total system development:
ISO 12100-1 (JIS B9700-1): Safety of machinery – Basic concepts, general principles for design – Part 1: Basic terminology, methodology
ISO 12100-2 (JIS B9700-2): Safety of machinery – Basic concepts, general principles for design – Part 2: Technical principles
ISO 14121-1 (JIS B9702): Safety of machinery – Risk assessment – Part 1: Principles
ISO 14121-2: Safety of machinery – Risk assessment – Part 2: Practical guidance and examples of methods
ISO 13849-1 (JIS B9705-1): Safety of machinery – Safety-related parts of control systems – Part 1: General principles for design
BS EN 954-1: Safety of machinery – Safety related parts of control systems – General principles for design
ANSI/RIA R15.06: Industrial Robots and Robot Systems – Safety Requirements
ISO 13852 (JIS B9707): Safety of machinery – Safety distances to prevent danger zones being reached by the upper limbs
ISO 14119 (JIS B9710): Safety of machinery – Interlocking devices associated with guards – Principles for design and selection
ISO 13854 (JIS B9711): Safety of machinery – Minimum gaps to avoid crushing of parts of the human body
ISO 14118 (JIS B9714): Safety of machinery – Prevention of unexpected start-up
ISO 13855 (JIS B9715): Safety of machinery – Positioning of protective equipment with respect to the approach speeds of parts of the human body
ISO 14120 (JIS B9716): Safety of machinery – Guards – General requirements for the design and construction of fixed and movable guards

The EU standard permits the collaboration of robots with the operator when the total output of the robots is less than 150 N at the tip of the end-effector. The Japanese standard requires that each actuator have a power of less than 80 W. The collaboration safety design is presented as hardware design and control design in the following.

Area Division by Safety Light Curtains

The software systems in the robot controller and other computers are prepared as Dual Check Safety (DCS), which checks the speed and position data of the motors with two independent CPUs in the robot controller. In the risk assessment, we listed 168 risks and took a countermeasure for each of them so as to satisfy the required performance level.



Figure 7. Three robot working zones for safety

Whatever the definition of industrial robots, it is strongly prohibited for robots to share the same space with the operator; thus a cage is required to separate the operator from the robots. For the area division, the whole cell in Figure 7 is divided into a human area (H), a robot area (R), and a buffer area (B) by safety fences, photoelectric sensors, and light curtains, in order to obtain safe working areas and to monitor border crossing. The robots are allowed to operate at high speed in area R but only at low speed in area B. In area H, strong restrictions are applied to robot motions. When the manipulators move too close to the operator and cross light curtain 2, the power of the manipulators is cut off by the light curtain. Consequently, the manipulators stop.

Speed/Force Limiter by Servo Controller

As shown in Figure 8, the speed of the mobile manipulators is limited by the servo controller, and the force/torque at the end-effector is also limited by software. The controller also has an abnormal-force limiter in case of an unexpected collision of the manipulator with the environment.


Based on the recommendations from the safety standards and the risk assessment, during the collaboration process the speed of the mobile manipulators is limited to below 150 mm/s, and the working area of the robot is restricted to the pink region in Figure 8. The minimum distance between the robot gripper and the surface of the workbench is 120 mm, according to ISO 13854.
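The following C++ fragment sketches how such zone-dependent speed limiting might look in a supervisory layer. The 150 mm/s figure is the collaboration limit quoted above; the zone logic, the other numeric limits, and all names are illustrative assumptions, not the actual FANUC Dual Check Safety implementation.

```cpp
#include <algorithm>

// Working zones from Figure 7: robot area (R), buffer area (B), human area (H).
enum class Zone { Robot, Buffer, Human };

// Speed limits in mm/s. 150 mm/s is the collaboration limit quoted above;
// the other values are illustrative placeholders.
double speedLimitFor(Zone z, bool lightCurtain2Crossed) {
    if (lightCurtain2Crossed) return 0.0;   // power cut: manipulator stops
    switch (z) {
        case Zone::Robot:  return 2000.0;   // high-speed motion allowed
        case Zone::Buffer: return 250.0;    // low-speed movement only
        case Zone::Human:  return 150.0;    // collaboration limit near operator
    }
    return 0.0;
}

// Clamp a commanded Cartesian speed to the active limit.
double clampCommandedSpeed(double commanded, Zone z, bool curtainCrossed) {
    return std::min(commanded, speedLimitFor(z, curtainCrossed));
}
```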

Collision Protectors by Physical Devices

During the assembly process, several physical collision protectors have been designed for accident avoidance and the protection of the operator.

1. Mobile Base: To prevent the operator from being hurt by the manipulators, the localization accuracy of the mobile base must be maintained. With a vision system that detects marks on the floor, the system has a localization accuracy of 5 mm and 0.1°. The base is equipped with a bumper switch for object collision detection and a wheel guard to prevent foreign objects from being tangled in the wheels, as illustrated in Figure 8.

Figure 8. Robot speed, force, and area restrictions

2. Footswitch: In the MASS system, the twin mobile manipulators assist the operator in executing the assembly tasks. Without a safety strategy, the operator could be injured by the manipulators. An effective working sequence is one way to reduce the probability of a collision between the operator and a manipulator: the manipulators are prevented from moving in the direction of the operator while he performs an assembly task. To realize the proposed working sequence, a footswitch is used to control the manipulators, as illustrated in Figure 9. When the operator finishes an assembly step, he steps on the footswitch, which signals the manipulators to provide the assembly parts for the next step.

3. Emergency Button: When an accident occurs, the operator can push the emergency button on the right-hand side of the assembly workbench to stop the entire system, as shown in Figure 9. After the problem has been solved, the operator pushes the reset button to restart the assembly process.

4. Safe Bar: In addition, a steel safe bar is installed in front of the assembly workbench (see Figure 9). If the other strategies fail to stop the manipulator from colliding with the operator, this safe bar protects the operator.

Collision Detector by IP Cameras

The developed system uses robots with higher capability than both the EU and Japanese standards allow. Even though various countermeasures are introduced, the risk assessment shows residual risks. As an intelligent compensation for safety, two IP cameras are utilized to monitor the operator's safety (see Figure 10); that is, the cameras track the color marks on the head and shoulders of the operator to measure body posture and position and so estimate the operator's condition (Duan, 2009). The vision monitoring system has a positioning accuracy of 30 mm and a processing delay of 0.6 s.



Figure 9. Collision protectors by physical devices

Figure 10. Operator safety monitoring system

Inherent Safety Theory

Although several safety strategies are adopted, there is no guarantee that a collision between a manipulator and the operator will never occur. Therefore, the manipulators are modified according to inherent safety theory (Ikeda & Saito, 2005) to reduce injury to the operator: the sharp edges of the manipulators are softened into obtuse-angled brims, and the force and speed of the manipulators are reduced as much as possible while still meeting the assembly requirements. In addition, the overall mobile robot manipulator system is built with a low center of gravity to prevent tipping.

Evaluation of MASS System

To evaluate the effect of the MASS system, a group of operators was required to execute a cable harness assembly task, as illustrated in Figure 11. In this task, the operators must insert the correct cables into the corresponding holes in the connector. After that, following the cable routes, the operators must fix the cables to the jigs on the assembly table. The operators executed the cable harness task in two cases:


(1) all of the assembly information, including the cable type, the position of the hole in the connector, and the assembly step, was provided only by the assembly manual (Exp I); (2) the operators executed the cable harness task with the support of MASS (Exp II). Two parameters were measured in the experiments: assembly time and assembly errors. The assembly time is compared between the conventional manual assembly setup (Exp I) and the new setup (Exp II). Five novice operators and five experts each performed three assembly trials in both setups. Figure 12 shows that the overall performance is better (shorter assembly time) in the new setup (Exp II). Novices and experts show almost the same assembly time from the first trial in the new setup, as the dotted lines indicate. This means that the assembly can be executed in the minimum time even by unskilled operators. Compared to the assembly time of the conventional setup (Exp I), the novice operators need only 50% of the time in the MASS system (Exp II), which indicates a doubling of productivity. Note that


Figure 11. Cable harness task

the assembly time at the third trial converges to the minimum in all cases. This implies that the assembly operation is easy to learn and that the human learning ability is high. In other words, this system may be beneficial for very frequent changes of products. In terms of assembly quality, an assembly (insertion) error rate of 10% to 20%

is observed in the conventional setup (Exp I), while in the new cell production setup (Exp II) such errors are completely prevented by the robot assistance, especially by the guidance of the laser pointer and the instruction of the assembly sequences. According to the experimental results, it can be concluded that the developed MASS system can accelerate the operator's assembly process as well as prevent assembly errors.

Figure 12. Difference of assembly time by experts and novices



According to Zhang (2008), this cable harness task is a kind of cognitive assembly task (Norman, 1993); therefore, the mental workload of the operators cannot be ignored. To evaluate the mental workload of the operators in Exp I and Exp II, the NASA-TLX (Task Load Index) method (NASA, n.d.) was used. After the operators finished the cable harness task, they were required to answer the questionnaires, from which, based on the NASA-TLX method, the mental workload of the operators can be computed. The mental workload in Exp I is 62, which is much higher than that in Exp II, which is 38. This means that with the support of MASS, the mental workload of the operators can be reduced significantly.
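For reference, the overall NASA-TLX score is conventionally computed as a weighted average of six subscale ratings (mental demand, physical demand, temporal demand, performance, effort, frustration), with weights obtained from 15 pairwise comparisons and therefore summing to 15. A minimal sketch of that computation follows; the example numbers are illustrative, not the study's raw data.

```cpp
#include <array>
#include <cstdio>

// Overall NASA-TLX workload: sum(rating_i * weight_i) / 15, where the six
// weights are counts from 15 pairwise comparisons (they sum to 15) and the
// ratings are on a 0-100 scale. Example numbers below are illustrative only.
double nasaTlx(const std::array<double, 6>& ratings,
               const std::array<int, 6>& weights) {
    double weighted = 0.0;
    for (int i = 0; i < 6; ++i) weighted += ratings[i] * weights[i];
    return weighted / 15.0;  // 15 = total number of pairwise comparisons
}

int main() {
    std::array<double, 6> ratings = {70, 40, 60, 50, 75, 55}; // illustrative
    std::array<int, 6> weights    = { 4,  1,  3,  2,  3,  2}; // sums to 15
    std::printf("overall workload = %.1f\n", nasaTlx(ratings, weights));
}
```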

CONCLUSION

This work aims to realize a new cellular manufacturing system for frequent changes of products. In this chapter, a multi-modal assembly-support system (MASS) was developed for a cellular manufacturing system. In MASS, two manipulators are used in place of the operators to execute the laborious tasks. Based on the assembly information database and the assembly information supporting system, this system is capable of meeting the assembly and training requirements of both experienced and novice operators. Besides developing the actual system, a simulator of the entire assembly system was created to reduce the time and cost required for development. To protect the operator from harm, several safety strategies and pieces of equipment were presented. According to inherent safety theory, the two manipulators were modified to reduce injury to the operators even if the manipulators collide with them. To evaluate the effect of MASS, a group of experienced and novice operators was required to execute a cable harness task. According


to the experimental results, with the support of MASS not only are the assembly time and the error ratios reduced, but the mental workload of the operators is reduced as well. Therefore, MASS allows an operator to receive physical and informational support while working in an actual manufacturing assembly process. Future studies should be directed at identifying and monitoring the conditions that contribute to the operator's fatigue, and at estimating the operator's intention, during the assembly process; these efforts will lead to improvements in operator comfort and assembly efficiency.

ACKNOWLEDGMENT

This study is supported by NEDO (New Energy and Industrial Technology Development Organization) as one of the “Strategic Projects of Element Technology for Advanced Robots”. The author Feng Duan is supported by the Fundamental Research Funds for the Central Universities (No. 65010221). We appreciate NEDO and MSTC for accelerating the development of this practical system. In particular, we would like to acknowledge the FANUC Company for their excellent cooperation and technical support.

REFERENCES

Arai, T., Duan, F., Kato, R., Tan, J. T. C., Fujita, M., Morioka, M., & Sakakibara, S. (2009). A new cell production assembly system with twin manipulators on mobile base. Proceedings of the 2009 International Symposium on Assembly and Manufacturing (pp. 149-154). Suwon, Korea.

Bauer, A., Wollherr, D., & Buss, M. (2008). Human-robot collaboration: A survey. International Journal of Humanoid Robotics, 5(1), 47–66. doi:10.1142/S0219843608001303


Beauchamp, Y., & Stobbe, T. J. (1995). A review of experimental studies on human-robot system situations and their design implications. The International Journal of Human Factors in Manufacturing, 5(3), 283–302. doi:10.1002/hfm.4530050306

Colgate, J. E., Wannasuphoprasit, W., & Peshkin, M. A. (1996). Cobots: Robots for collaboration with human operators. Proceedings of the International Mechanical Engineering Congress and Exhibition, 58, 433–439.

Duan, F. (2009). Assembly skill transfer system for cell production. Unpublished doctoral dissertation, The University of Tokyo, Japan.

Duan, F., Morioka, M., Tan, J. T. C., & Arai, T. (2008). Multi-modal assembly-support system for cell production. International Journal of Automation Technology, 2(5), 384–389.

Duan, F., Tan, J. T. C., Kato, R., & Arai, T. (2009). Operator monitoring system for cell production. The International Journal of the Robotics Society of Japan - Advanced Robotics, 23, 1373–1391.

Hayakawa, Y., Kitagishi, I., & Sugano, S. (1998). Human intention based physical support robot in assembling work. Proceedings of the 1998 IEEE/RSJ International Conference on Intelligent Robots and Systems (pp. 930-935). Victoria, B.C., Canada.

Helms, E., Schraft, R. D., & Hagele, M. (2002). Rob@work: Robot assistant in industrial environments. Proceedings of the 11th IEEE International Workshop on Robot and Human Interactive Communication (pp. 399-404). Berlin, Germany.

Ikeda, H., & Saito, T. (2005). Proposal of inherently safe design method and safe design indexes for human-collaborative robots. Specific Research Reports NIIS-SRR-No. 33, 5–13.

ISO 10218-1. (2006). Robots for industrial environments – Safety requirements – Part 1: Robot.

Isa, K., & Tsuru, T. (2002). Cell production and workplace innovation in Japan: Towards a new model for Japanese manufacturing? Industrial Relations, 41(4), 548–578. doi:10.1111/1468-232X.00264

ISO 12100. (2010). Safety of machinery. Retrieved from http://www.iso.org/iso/iso_catalogue/catalogue_ics/catalogue_detail_ics.htm?csnumber=27239

ISO 14121-1. (2007). Safety of machinery – Risk assessment – Part 1: Principles.

Kosuge, K., Yoshida, H., Taguchi, D., Fukuda, T., Hariki, K., Kanitani, K., & Sakai, M. (1994). Robot-human collaboration for new robotic applications. Proceedings of the 20th International Conference on Industrial Electronics, Control, and Instrumentation (pp. 713-718).

Mayer, R. E. (2001). Multimedia learning. New York, NY: Cambridge University Press.

NASA. (n.d.). NASA-TLX for Windows. US Naval Research Laboratory. Retrieved from http://www.nrl.navy.mil/aic/ide/NASATLX.php

Neider, J., Davis, T., & Woo, M. (1993). OpenGL programming guide: The official guide to learning OpenGL. Addison-Wesley Publishing Company.

Norman, D. A. (1993). Things that make us smart: Defending human attributes in the age of the machine. Addison-Wesley Publishing Company.

Oborski, P. (2004). Man-machine interactions in advanced manufacturing systems. International Journal of Advanced Manufacturing Technology, 23(3-4), 227–232. doi:10.1007/s00170-003-1574-5

Reinhart, G., & Patron, C. (2003). Integrating augmented reality in the assembly domain – Fundamentals, benefits and applications. Annals of the CIRP, 52(1), 5–8. doi:10.1016/S0007-8506(07)60517-4



FANUC ROBOGUIDE. (n.d.). Robot system animation tool. Retrieved from http://www.fanuc.co.jp/en/product/robot/roboguide/index.html

Seki, S. (2003). One by one production in the “Digital Yatai” – Practical use of 3D-CAD data in the fabrication. Journal of the Japan Society of Mechanical Engineering, 106(1013), 32–36.

Sugi, M., Nikaido, M., Tamura, Y., Ota, J., Arai, T., & Kotani, K. … Sato, Y. (2005). Motion control of self-moving trays for human supporting production cell “attentive workbench”. Proceedings of the 2005 IEEE International Conference on Robotics and Automation (pp. 4080-4085). Barcelona, Spain.

Tan, J. T. C., Duan, F., Zhang, Y., Watanabe, K., Pongthanya, N., & Sugi, M. … Arai, T. (2008). Assembly information system for operational support in cell production. The 41st CIRP Conference on Manufacturing Systems (pp. 209-212). Wemmerlov, U., & Johnson, D. J. (1997). Cellular manufacturing at 46 user plants: Implementation experiences and performance improvements. International Journal of Production Research, 35(1), 29–49. doi:10.1080/002075497195966 Zhang, Y., Duan, F., Tan, J. T. C., Watanabe, K., Pongthanya, N., & Sugi, M. … Arai, T. (2008). A study of design factors for information supporting system in cell production. The 41st CIRP Conference on Manufacturing Systems (pp. 319-322).

This work was previously published in Operations Management Research and Cellular Manufacturing Systems: Innovative Methods and Approaches, edited by Vladimir Modrák and R. Sudhakara Pandian, pp. 412-427, copyright 2012 by Business Science Reference (an imprint of IGI Global).



Chapter 33

Modeling and Simulation of Discrete Event Robotic Systems Using Extended Petri Nets

Gen’ichi Yasuda, Nagasaki Institute of Applied Science, Japan

ABSTRACT

This chapter deals with modeling, simulation, and implementation problems encountered in robotic manufacturing control systems. Extended Petri nets are adopted as a prototyping tool for expressing real-time control of robotic systems, and a systematic method based on hierarchical Petri nets is described for their direct implementation. A coordination mechanism is introduced to coordinate the event activities of the distributed machine controllers through firability tests of shared global transitions. The proposed prototyping method allows direct coding of the inter-task cooperation of robots and intelligent machines from the conceptual Petri net specification, so that it increases the traceability and understanding of the control flow of a parallel application specified by a net model. This approach can be integrated with off-the-shelf real-time executives. Control software using multithreaded programming is demonstrated to show the effectiveness of the proposed method.

DOI: 10.4018/978-1-4666-1945-6.ch033

1. INTRODUCTION

Complex robotic systems such as flexible manufacturing systems require sophisticated distributed real-time control systems. A major problem concerns the definition of the user tasks and the cooperation between the subsystems,

especially since the intelligence is distributed at a low level (the machine level). Controlling such systems generally requires a hierarchy of control units corresponding to several abstraction levels. At the bottom of the hierarchy, i.e., the machine control level, are the programmable logic controllers (PLCs). The next level performs coordination of the PLCs. The third level implements scheduling, that is, the real-time assignment of workpieces

and tools to machines. At the machine level, PLCs perform local logical control operations of flexible, modular, high-speed machines through the use of multiple independent drives (Holding, et al. 1992). Implementation languages can be based on ladder diagrams or, more recently, state machines (Silva, et al. 1982). However, when the local control is more complex, these kinds of languages may not be well adapted. The development of industrial techniques makes the sequential control systems for robotic manufacturing larger and more complicated, with some subsystems operating concurrently and cooperatively (Neumann, 2007). In the area of real-time control of discrete event robotic systems, the main problems that the system designer has to deal with are concurrency, synchronization, and resource sharing. Presently, the implementation of such control systems makes large use of microcomputers. Real-time executives are available with complete sets of synchronization and communication primitives. However, coding the specifications is hazardous work, and debugging the implementation is particularly difficult when the concurrency is important. It is important to have a formal tool powerful enough to allow the development of validation procedures before implementation. Conventional specification languages do not allow an analytical validation; consequently, the only way to validate is via simulation and step-by-step debugging. On the other hand, a specification method based on a mathematical tool may be more restrictive, but analytical procedures can strongly reduce the simulation step. Rapid prototyping is an economical and crucial way to experiment with, debug, and improve specifications of parallel applications. The increasing complexity of the synchronizing mechanisms involved in concurrent system design makes necessary a prototyping step starting from a formal, already verified model. Petri nets allow a model to be validated and evaluated before its implementation. The formalism, allowing a validation of the main properties of the


Petri net control structure (liveness, boundedness, etc.), guarantees that the control system will not immediately fall into a deadlocked situation. In the field of flexible manufacturing cells, this last aspect is essential because the control sequences are complex and change very often. When using Petri nets, events are associated with transitions. Activities are associated with the firing of transitions and with the marking of places, which represents the state of the system. The net model can describe the execution order of sequential and parallel tasks directly and without ambiguity (Silva, 1990). Pure synchronization between tasks, choices between alternatives, and rendezvous can be represented naturally. Moreover, at the machine level, Petri nets have been used successfully to represent the sequencing of elementary operations (Yasuda, et al. 1992). In addition to its graphic representation differentiating events and states, a Petri net allows progressive modeling by stepwise refinement or modular composition. Libraries of well-tested subnets allow component reusability, leading to significant reductions in the modeling effort. The possibility of progressive modeling is absolutely necessary for large and complex systems, because the refinement mechanism allows the building of hierarchically structured net models. Furthermore, a real-time implementation of the Petri net specification by software called a token player can avoid implementation errors, because the specification is directly executed by the token player and the implementation of the control sequences preserves the properties of the model. In this approach, the Petri net model is stored in a database, and the token player updates the state of the database according to the operation rules of the model. For control purposes, this solution is very well suited to the need for flexibility because, when the control sequences change, only the database needs to be changed. Some techniques derived from Petri nets have been successfully introduced as an effective tool for describing control specifications and real-


izing the control in a uniform manner (Murata, et al. 1986). However, in the field of flexible manufacturing cells, the net model becomes complicated and lacks readability and comprehensibility (David, et al. 1992). Therefore, the flexibility and expandability are not satisfactory for dealing with specification changes of the control system. Despite the advantages offered by Petri nets, the synthesis, correction, and updating of the system model and the programming of the controllers are not simple tasks (Zhou, et al. 1993), (Desrochers, et al. 1995), (Lee, et al. 2006). Some Petri net implementation methods have already been proposed for simulation purposes or for application prototyping (Butler, 1991), (Garcia, 1998), (Piedrafita, et al. 2008). However, an implementation method for hierarchical and distributed control of complex robotic systems has not yet been sufficiently established (Breant, et al. 1992), (Girault, et al. 2003), (Zhou, et al. 1999). If such control can be implemented using Petri nets, then modeling, simulation, and control can be realized consistently. This chapter describes a Petri net based prototyping method for real-time control of complex robotic systems. The presented method, based on the author's previous works (Yasuda, et al. 2010), (Yasuda, 2010), involves three major steps and progressively gathers all the information needed for the control system design and the code generation for simulation experiments. The first step consists in specifying the conceptual net model for overall system control. The second step consists in transforming this net model into the detailed net model. Based on the hierarchical and distributed structure of the system, the specification procedure is a top-down approach from the conceptual level to the detailed level. The third step consists in decomposing the detailed net into local net models for machine control and the coordination model. The coordination algorithms are simplified, since the robots and machines in the system are controlled separately using dedicated task execution programs. In order to deal with complex

models, a hierarchical approach is adopted for the coordination model design. In this way, the macro representation of the system is broken down to generate the detailed nets at the local machine control level. Finally, the C++ code generation using multithreaded programming is described for the prototype hierarchical and distributed control system.

2. MODELING OF DISCRETE EVENT SYSTEMS USING EXTENDED PETRI NETS

A Petri net is a directed graph whose nodes are places, shown by circles, and transitions, shown by bars. Directed arcs connect places to transitions and transitions to places. Formally, a Petri net is a bipartite graph represented by the 4-tuple $G = \{P, T, I, O\}$ such that: $P = \{p_1, p_2, \ldots, p_n\}$ is a finite, non-empty set of places; $T = \{t_1, t_2, \ldots, t_m\}$ is a finite, non-empty set of transitions; $P \cap T = \emptyset$, i.e. the sets $P$ and $T$ are disjoint; $I: T \to P^\infty$ is an input function, a mapping from transitions to bags of places; $O: T \to P^\infty$ is an output function, a mapping from transitions to bags of places. The input function $I$ maps a transition $t_j$ to a collection of places $I(t_j)$, known as the input places of the transition. The output function $O$ maps a transition $t_j$ to a collection of places $O(t_j)$, known as the output places of the transition. The pre-incidence matrix of a Petri net is $C^- = [c^-_{ij}]$, where $c^-_{ij} = 1$ if $p_i \in I(t_j)$ and $c^-_{ij} = 0$ if $p_i \notin I(t_j)$; the post-incidence matrix is $C^+ = [c^+_{ij}]$, where $c^+_{ij} = 1$ if $p_i \in O(t_j)$ and $c^+_{ij} = 0$ if $p_i \notin O(t_j)$. The incidence matrix of the Petri net is then $C = C^+ - C^-$. Each place contains an integer (zero or positive) number of marks or tokens.
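For reference, the incidence matrix yields the standard Petri net state equation (a textbook identity implied by, though not written out in, this chapter), where $M$ is the marking vector introduced below and $x$ counts how many times each transition fires:

```latex
% Standard Petri net state equation: firing the transitions counted in
% x (a non-negative integer vector, one entry per transition) transforms
% the marking M into M', provided each intermediate firing is enabled.
M' = M + C\,x, \qquad C = C^{+} - C^{-}, \qquad x \in \mathbb{N}^{m}
```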


The number of tokens in the places is defined by the marking vector $M = (m_1, m_2, \ldots, m_n)^T$. The number of tokens in one place $p_i$ is simply indicated by $M(p_i)$. The firing of a transition changes the token distribution (marking) in the net according to the transition firing rule. In the basic Petri net, a transition $t_j$ is enabled if $\forall p_i \in I(t_j)$, $M_k(p_i) \ge w(p_i, t_j)$, where $M_k$ is the current marking and $w(p_i, t_j)$ is the weight of the arc from $p_i$ to $t_j$. A sequence of firings results in a sequence of markings. A marking $M_n$ is said to be reachable from a marking $M_0$ if there exists a sequence of firings that transforms $M_0$ into $M_n$. The set of all possible markings reachable from $M_0$ is denoted $R(M_0)$. A Petri net is said to be k-bounded, or simply bounded, if the number of tokens in each place does not exceed a finite number $k$ for any marking reachable from $M_0$, i.e., $\forall p_i \in P$, $\forall M \in R(M_0)$, $M(p_i) \le k$ (Reisig, 1985), (Murata, 1989).

In the basic Petri net, bumping occurs when, despite the holding of a condition, the preceding event occurs; this can result in the multiple holding of that condition. From the viewpoint of discrete event process control, bumping phenomena should be excluded, so the firing rule is modified to make the system free of this phenomenon. Because the modified Petri net must be 1-bounded, for each place $p_i$, $m_i = 0$ or $1$, and the weight of every arc is 1. A Petri net is said to be ordinary if all of its arc weights are 1. Thus the axioms of the modified Petri net are as follows:

1. A transition $t_j$ is enabled if for each place $p_k \in I(t_j)$, $m_k = 1$, and for each place $p_l \in O(t_j)$, $m_l = 0$.
2. When an enabled transition $t_j$ fires, the marking $M$ is changed to $M'$, where for each place $p_k \in I(t_j)$, $m'_k = 0$, and for each place $p_l \in O(t_j)$, $m'_l = 1$.
3. In any initial marking, there must not exist more than one token in each place.
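A minimal C++ sketch of this modified (safe) firing rule follows; C++ is the implementation language used later in the chapter, but the data layout here is an illustrative assumption, not the chapter's simulator code.

```cpp
#include <vector>

// Safe-net firing rule sketch: a transition is enabled only if all of its
// input places hold a token AND all of its output places are empty (axiom 1).
struct Transition {
    std::vector<int> inputs;   // indices of input places I(t)
    std::vector<int> outputs;  // indices of output places O(t)
};

bool isEnabled(const Transition& t, const std::vector<int>& marking) {
    for (int p : t.inputs)  if (marking[p] != 1) return false;
    for (int p : t.outputs) if (marking[p] != 0) return false;
    return true;
}

// Firing (axiom 2): clear the input places, mark the output places.
void fire(const Transition& t, std::vector<int>& marking) {
    if (!isEnabled(t, marking)) return;
    for (int p : t.inputs)  marking[p] = 0;
    for (int p : t.outputs) marking[p] = 1;
}
```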

A transition without any input place is called a source transition, and one without any output place is called a sink transition. A source transition is unconditionally enabled, and the firing of a sink transition consumes a token in each input place but does not produce any. According to these axioms, the number of tokens in a place never exceeds one; thus the modified Petri net is essentially 1-bounded and is said to be a safe graph. Besides the guarantee of safeness, considering not only the modeling but also the actual control of robotic systems, additional input and output interfaces that connect the net to its environment are required. The extended Petri net adopts the following two elements: (1) the gate arc and (2) the output signal arc (Hasegawa, et al. 1984). A gate arc connects a transition with a signal source and, depending on the signal, either permits or inhibits the occurrence of the event that corresponds to the connected transition. Gate arcs are classified as permissive or inhibitive, and as internal or external. An output signal arc sends a command request signal from a place to an external machine. The interfaces are a set of transitions that represent the communication activities of the net with its environment. Thus the firing rule of a transition becomes: an enabled transition may fire when it has no internal permissive gate arc signaling 0, no internal inhibitive gate arc signaling 1, no external permissive gate arc signaling 0, and no external inhibitive gate arc signaling 1.

A robotic action is modeled by two transitions and one condition, as shown in Figure 1. At the “Start” transition, the command associated with the transition is sent to the corresponding robot or machine. At the “End” transition, the status report is received. When a token is present in the “Action” place, the action is in progress. The “Completed” place can be omitted, in which case the “End” transition is fused with the “Start” transition of the next action. Activities can be assigned an amount of time units so that they can be monitored in time for real performance evaluation. The firing of a transition is indivisible and has a duration of zero. Extended Petri nets that consider timing conditions, where each activity is assigned an amount of time units, can also be used, as shown in Figure 2. Through the simulation steps, the transition vector table is used to efficiently extract enabled or fired transitions. The flow chart of the simulation and evaluation procedure is shown in Figure 3. At each step of the simulation of a robotic task, the configuration of the robots can be seen with graphic simulation.

Figure 1. Extended Petri net model of robotic action with external permissive and inhibitive gate arcs

Figure 2. Examples of representation of timing conditions: (a) external timer with output signal arc and external gate arc, (b) timed transition

The data structure of the extended Petri net simulator is made up of several tables corresponding to the structural information of the net specifying the robotic task. These tables are the following:

1. The table of the labels of the input and output places for each transition;
2. The table of the transitions which are likely to be arbitrated for each conflict place;
3. The table of the gate arcs, which are internal or external, permissive or inhibitive, for each transition;
4. The table of the marking, which indicates the current marking of each place;
5. The table of the places-to-tasks mapping, which points out the tasks that have to be put into the ready state when the corresponding place receives a token;
6. The table of the “end of task” transitions, which associates with each task the set of transitions with external gate arcs that are switched each time an “end of task” message is received by the simulator. The “end of task” transitions are only fired on the reception of an “end of task” message.
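A compact C++ sketch of how these tables and the gate-conditioned firability test might be organized is given below; the structures and names are illustrative assumptions, since the chapter describes the tables only in prose.

```cpp
#include <vector>

// Illustrative encoding of the simulator tables described above.
struct GateArc {
    int  signalId;     // index into the current signal table
    bool external;     // external (device) or internal (net) signal
    bool inhibitive;   // inhibitive gate blocks on 1, permissive blocks on 0
};

struct TransitionRecord {
    std::vector<int>     inputPlaces;   // table 1: input place labels
    std::vector<int>     outputPlaces;  // table 1: output place labels
    std::vector<GateArc> gates;         // table 3: gate arcs of this transition
};

// Gate-aware firability test: the safe-net condition plus the gate rule
// quoted above (no permissive gate at 0, no inhibitive gate at 1).
bool mayFire(const TransitionRecord& t,
             const std::vector<int>& marking,     // table 4: current marking
             const std::vector<bool>& signals) {  // current gate signal values
    for (int p : t.inputPlaces)  if (marking[p] != 1) return false;
    for (int p : t.outputPlaces) if (marking[p] != 0) return false;
    for (const GateArc& g : t.gates) {
        const bool s = signals[g.signalId];
        if (g.inhibitive ? s : !s) return false;  // blocked by a gate arc
    }
    return true;
}
```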

3. PETRI NET MODELS OF COOPERATIVE CONTROL OF CONCURRENT TASKS

A robotic task consists of several subtasks or operations, and a subtask consists of several actions. Conceptually, robotic processes are represented as sequential constructs or state machines, where each transition has exactly one incoming arc and exactly one outgoing arc. The structure of a place having two or more output transitions is referred to as a conflict, decision, or choice, depending on the application. State machines allow the representation of decisions, but not the synchronization of parallel activities. In a net model of a robotic task, the set of places can be classified into three groups: idle, operation, and resource places. A token in an idle place indicates that the robot is ready to work and



Figure 3. Flow chart of simulation and evaluation procedure

waiting for a specified signal from another robot or its environment. An operation place represents an operation to be processed for a workpiece or part in a manufacturing sequence, and initially it has no token. Resource places represent resources (robots and machines), and their initial tokens represent the number of available resource units. A task executed by a robot or machine can be seen as a combination of more detailed subtasks. For example, transferring an object from a start position to a goal position is a sequence of the following subtasks: moving the hand to the start position, grasping the object, moving to the goal position, and putting it on the specified place. Figure 4 shows the net representation of


a robotic task: a pick and place operation with the input and output conveyors. While the place "Robot" in Figure 4(a) indicates that the state of the robot is "ready" when the token is in the place, in Figure 4(b) it indicates that the state of the robot is "operating". The place also serves as a macro representation of the pick and place operation. The parallel net in Figure 4(b) is equivalent to the cyclic net in Figure 4(a) with respect to the enabling conditions of all the transitions in the net. The parallel net assures that the robot can load or unload only one workpiece at a time. Figure 4(c) shows a possible evolution of the dynamic behavior of the net.


Figure 4. Net representation of robotic task: (a) cyclic net model of robot, (b) equivalent parallel net model, (c) possible state of net model

Furthermore, the subtasks are translated into more detailed actions. A hierarchical approach consists of building a model by stepwise refinements, where at each step some parts of the model are replaced by a more complex model. A modular approach by model composition makes it possible to build complex, validated Petri nets. Figure 5 shows a hierarchical net representation displayed by the graphic net simulator.

A specification procedure for discrete event robotic systems based on Petri nets is as follows. First, the conceptual level activities of the system are defined through a net model considering the task specification corresponding to the aggregate discrete event process. The places which represent the subtasks indicated in the task specification are connected by arcs via transitions in the specified order, corresponding to the flow of subtasks and of a workpiece. The places representing

Figure 5. Hierarchical net representation of robotic task (loading) on graphic net simulator



robots and machines used for the subtasks are also added and connected to the transitions which correspond to the beginning and ending of their subtasks. Then, the places describing the subtasks are substituted by subnets based on the activity specification and the required control strategies, in a manner which maintains the structural properties (Yasuda, et al., 2010). For concurrent control in the conceptual net model, two methods of implementing synchronous interaction between two tasks, each executed by one robot, are shown in Figure 6(a) and (b) (Yasuda, 2000), while an implementation with asynchronous communication based on the well-known signal/wait (semaphore) concept is shown in Figure 6(c). A coordination mechanism is introduced to coordinate concurrent systems of separate robots which require interaction with each other, such as synchronization and resource conflict resolution. Figure 7 shows the net model of a coordination mechanism that conducts synchronous interaction by means of synchronous communication between two robots; this is also the detailed representation of the shared transition. A shared transition for synchronous interaction by separate robots is said to be a global transition, while a transition for independent action by a single robot is said to be a local transition. For synchronous interaction, the coordination algorithm is formally expressed using logical variables such that

the global transition is fired if all of the associated transitions in the local net models are fired. The firing condition of a global transition Gj in the conceptual net, which represents the event of a synchronous action by S robots, is written as

Gj = tj1 ∩ tj2 ∩ ⋯ ∩ tjS (1)

where the corresponding event of action by each robot is represented by tjsub (sub = 1⋯S), and ∩ denotes the logical product operation. Figure 8 shows the net model of a coordination mechanism that executes selective control by means of synchronous communication, where the decision place executes an arbitration rule, such as order or priority, to select either independent action by one robot or cooperative action by two robots. A hierarchical and distributed control system is composed of one system controller and several machine controllers. The coordination mechanism as well as the conceptual net model of the system is implemented in the system controller, and the detailed net models are allocated to the machine controllers. The coordination program is essentially the firing test program for global transitions, using the firing rule and a set of relational data tables of global transitions with their input and output places and associated gate arcs. The coordination procedure through communica-

Figure 6. Net representation of synchronous interaction between two concurrent tasks: (a) synchronous communication with a shared transition, (b) interlock with mutual gate arcs, (c) asynchronous communication (signal/wait)



Figure 7. Net model of coordination mechanism for synchronous interaction by means of synchronous communication with a shared transition

tion between the coordinator and machine controllers is as follows:

1. When a machine controller receives a command start signal from the coordinator, it starts the execution of the requested command task.
2. At the end of execution, the machine controller sends a status report as a gate condition to the coordinator.
3. When the coordinator receives the new gate condition, it updates the net model and tests the firing conditions associated with the gate condition in the net. If a new transition is fired, then a command associated with the transition is sent to the corresponding machine controller.
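As an illustration of Eq. (1) and of the coordination procedure above, the following C# sketch shows a coordinator that fires a global transition only when all of its associated local transitions are firable; the structure and names are assumptions for illustration, not the original implementation.

```csharp
// Minimal sketch (hypothetical names) of the coordinator's firing test for a
// global transition: per Eq. (1), Gj fires only when the associated local
// transitions of all S robots are firable (logical product).
using System.Linq;

class GlobalTransition
{
    public int[] RobotIds;                  // the S robots involved
    public bool[] LocalFirable;             // gate conditions reported by each robot
    public bool Firable => LocalFirable.All(f => f);   // AND over all robots
}

class Coordinator
{
    // Step 3 of the procedure: on receiving a status report (gate condition),
    // update the net state and retest the global transition.
    public void OnStatusReport(GlobalTransition g, int robotIndex)
    {
        g.LocalFirable[robotIndex] = true;  // new gate condition received (step 2)
        if (g.Firable)
        {
            foreach (int id in g.RobotIds)
                SendCommand(id);            // step 1: start the next subtask
            for (int i = 0; i < g.LocalFirable.Length; i++)
                g.LocalFirable[i] = false;  // reset for the next cycle
        }
    }

    void SendCommand(int robotId) { /* write the command to the serial interface */ }
}
```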

For the actual control, the operations of each machine or robot are broken down into a series of unit motions, each represented by a connection between places and transitions. A place represents a concrete unit motion of a machine. From these places, output signal arcs are connected to the external machines, and external gate arcs from the machines are connected to the transitions of the net when needed, for example, to synchronize and coordinate operations. When a token enters a place that represents a subtask, the machine defined by the machine code is instructed to execute the specified subtask with positional and control data; this code and these data are defined as the place parameters. Figure 9 shows the net representation of real-time execution control of a robotic unit action.
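A minimal C# sketch of such a subtask place with its place parameters, again with hypothetical names, might look as follows.

```csharp
// Sketch (hypothetical names) of a subtask place: when a token enters, the
// output signal arc requests execution of the unit motion on the machine
// identified by the machine code, with positional and control data attached.
interface IMachinePort
{
    void SendCommand(int machineCode, double[] position, int controlMode);
}

class SubtaskPlace
{
    public int MachineCode;          // which robot or machine executes the subtask
    public double[] PositionData;    // positional data (place parameter)
    public int ControlMode;          // control data (place parameter)
    public bool HasToken { get; private set; }

    public void DepositToken(IMachinePort port)
    {
        HasToken = true;
        port.SendCommand(MachineCode, PositionData, ControlMode);  // output signal arc
    }

    public void RemoveToken() => HasToken = false;
}
```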

Figure 8. Net model of coordination mechanism for selective control



Figure 9. Net representation of execution control of robotic action using output signal arc and external permissive gate arc

4. IMPLEMENTATION OF DISTRIBUTED CONTROL WITH MULTITHREADS

The example robotic system has one robot, one machining center, and two conveyors, one for carrying workpieces in and the other for carrying them out. The main execution of the system is indicated by the following task specification:

1. A workpiece is carried in by the input conveyor.
2. The workpiece is loaded to the machining center by the robot.
3. The workpiece is processed by the machining center.
4. The workpiece is unloaded from the machining center by the robot.
5. The workpiece is carried out by the output conveyor.

The robotic system works in the following way: A workpiece comes on the input conveyor


up to the take-up position. The robot waits in the waiting position in front of the conveyor and, when the conveyor stops, approaches the take-up position, grips the object and returns to the waiting position. Then it turns, goes into the working space of the machining center and there it leaves the workpiece. After automatic gripping of the object, the robot draws back and waits for the machining center to complete the object processing. After the processing, the robot goes to the machining center, takes the workpiece from the opened vice and carries it over to a free position on the output conveyor. The discrete event processes of the robotic system are represented at the conceptual level as shown in Figure 10. All the transitions are used to coordinate the overall system. Shared transitions between the robot and a machine represent synchronous interaction and are coordinated as global transitions to be fired simultaneously. In this step, if necessary, control conditions between the respective subtasks, such as the capacity of the system, must be connected to regulate the execution of the manufacturing process. Next, each place


Figure 10. Net representation of robotic task at conceptual level

representing a subtask at the conceptual level is translated into a detailed subnet, which is used for local control of each robot or machine. The prototyping method was applied to semi-automatically produce a C++ program from the net models on a general PC using multithreaded programming (Grehan, et al., 1998). Then, by executing the coordination program and the net based controller algorithms, based on the loaded information, on a network of dedicated microcomputers, final test experiments can be performed (Yasuda, 2010). The multithread control software, composed of one modeling thread, one simulation thread, and several task execution threads, was designed and written in Microsoft Visual C# under Windows XP SP3. The simulation thread executes the coordination program and the conceptual net based controller algorithm, while the task execution threads execute the local net based controller algorithms, which control robots and machines through serial interfaces using the command/response concept. An example diagram of two-level net based concurrent real-time control of two external machines using one simulation and two task execution threads is shown in Figure 11.
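The thread structure just described can be outlined in C# as in the following sketch; the method and port names are illustrative assumptions, not the original code.

```csharp
// Minimal sketch (hypothetical structure) of the thread layout: a modeling
// thread (main), one simulation thread running the conceptual net and the
// coordinator, and one task execution thread per external machine.
using System.Threading;

class ControlSoftware
{
    static void Main()
    {
        var simulation = new Thread(SimulationLoop);          // conceptual net + coordinator
        var robotTask = new Thread(() => TaskLoop("COM1"));   // local net: robot controller
        var plcTask = new Thread(() => TaskLoop("COM2"));     // local net: conveyor PLC

        simulation.Start();
        robotTask.Start();
        plcTask.Start();
    }

    static void SimulationLoop() { /* enabling/firing tests, gate table updates */ }
    static void TaskLoop(string port) { /* send commands, await status reports */ }
}
```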

The modeling thread, which is the main thread, executes the event-driven net modeling, drawing, and modification based on the task specification, using window button clicks and mouse operations, as shown in Figure 12. When the transformation of the graphic data of the net model into internal structural data is finished, the simulation thread is activated from the modeling thread using window buttons. The simulation thread executes the enabling and firing test using gate conditions, as shown in Figure 13, and when a transition is fired, the simulation thread activates the task execution thread and initiates the execution of a subtask by sending the commands attached to the fired transition. While the subtasks in the system are in progress, the simulation thread repeatedly waits for any gate condition to be turned on. When a subtask is completed, its gate condition is turned on; the simulation thread then receives the gate signal through the external gate arc and updates the table of gate conditions. The program structure and the main C# code of the task execution thread are illustrated in Figure 14 and Listing 1. One task execution thread is allocated to each machine. When a task execution thread is activated, it sends the direct commands with the specified positional and control data



Figure 11. Two-level net based concurrent real-time control of two external machines

Figure 12. Program structure and main C# code of modeling thread



Figure 13. Program structure of simulation thread

through the serial interface to the dedicated robot or machine. Then the thread searches for the target gate condition and repeatedly waits for a status report through the interface. When the subtask ends its activity normally, the thread receives the normal end report. Then the thread turns the target gate condition on and the current gate condition off, so that the simulation thread can proceed with the next activations through the external gate arc. During simulation and task execution, the simulator decides whether the system is in a deadlocked situation or not, and the user can stop the task execution through the simulator at any time. In the real-time control, both the simulation thread and the task execution threads access the external gate variables as shared variables; the task execution threads write the values of the gate variables after the completion of subtasks, and the simulation thread reads them for the firability test considering gate conditions. Mutual exclusive access control and the corresponding C# code were implement-

ed as shown in Figure 15 and Listing 2, respectively. The method function uses a "lock" statement of C# for the mutually exclusive access to shared

Figure 14. Program structure of task execution thread



Listing 1. Task execution thread

variables so that, while one thread calls the function, the other thread cannot call it. Using the method function call, the simulator thread waits for the external gate signal, and after a task execution thread writes the permissive value of the target gate variable, the simulator thread reads it. Experimental results of multithread scheduling for one and two task execution threads are shown in Figure 16(a) and (b), respectively. In the case of two threads, one thread takes charge of a robot controller and the other takes charge of a PLC for sequence control of a conveyor, each through a serial interface. Here, the time slice of the OS is about 15 ms, and a timer program is inserted at the reference position of each thread to capture the time at which the method function is called.
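In the spirit of Listing 2 (whose code is not reproduced here), such a lock-guarded method function for the shared gate variables can be sketched in C# as follows; the class and member names are assumptions.

```csharp
// Minimal sketch (hypothetical names) of a lock-guarded method function for
// the shared external gate variables accessed by both thread types.
class GateTable
{
    private readonly object _sync = new object();
    private readonly bool[] _gates = new bool[64];   // external gate variables

    // Called by a task execution thread after a subtask completes.
    public void SetGate(int index, bool value)
    {
        lock (_sync) { _gates[index] = value; }      // exclusive write
    }

    // Called by the simulation thread during the firability test.
    public bool ReadGate(int index)
    {
        lock (_sync) { return _gates[index]; }       // exclusive read
    }
}
```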

Numerous experiments show that the gate condition is transferred through the shared memory from the task execution thread to the simulation thread with minimal delay. Experiments using a real robot and conveyors show that the simulation thread and the task execution threads are executed concurrently with even priority, and the values of the external gate variables are changed successfully in the conceptual net model. The global transitions fire simultaneously with the corresponding transitions of the conceptual net of the whole system task. The robot cooperated with the conveyors and the machining center, and the example robotic system performed the task specification successfully.

Figure 15. Program structure of method function for mutual exclusive access



Listing 2. Method function for mutual exclusive access

Figure 16. Experimental results of multithread scheduling: (a) one task execution thread, (b) two task execution threads



In line with the implementation using multithreaded programming, a hierarchical and distributed implementation under a real-time operating system on a network of microcomputers connected via a shared serial bus is also possible, where each microcomputer is dedicated to the local net model of one subsystem of the overall robotic system. After the arrival of a request, the response carrying the status information crosses the communication network, reaches the input buffer of the network board of the controller, and is written into the cache memory. The control data issued by the controller are written into the cache memory shared with the network board, sent to the target microcomputer, and there cause the new controlled status. With a shared communication network replacing traditional directly wired systems, more flexible, reliable, and efficient control performance can be expected.

5. CONCLUSION

A prototyping methodology to build hierarchical and distributed control systems corresponding to the hardware structure of robotic control systems has been presented. The conceptual net is used to coordinate distributed local machine controllers using the decomposed information of global transitions representing cooperative interaction between machines; the coordination mechanism can be implemented repeatedly in each layer of the control hierarchy of the system. The overall control structure of the example robotic system was implemented on a general PC with serial interfaces using multithreaded programming. For the example system, detailed net models can be automatically generated using the database of robotic operations. The hierarchical approach allows us to reuse validated net models, such as loading, unloading, and specific handling operations, already defined for other purposes, which is an efficient way to deal with complex net models. The conceptual and local net models are


small enough that all of the net based controllers can be implemented on general microcomputers or PLCs. Thus, modeling, simulation, and control of large and complex manufacturing systems can be performed consistently using Petri nets.

REFERENCES

Breant, F., & Paviot-Adet, E. (1992). OCCAM prototyping from hierarchical Petri nets. In Becker, M., Litzler, L., & Trehel, M. (Eds.), Transputers '92: Advanced research and industrial applications (pp. 189-209). Amsterdam, Netherlands: IOS Press.

Butler, B., Esser, R., & Mattmann, R. (1991). A distributed simulator for high order Petri nets. In Rozenberg, G. (Ed.), Advances in Petri Nets, LNCS 483 (pp. 47-63). Berlin, Germany: Springer-Verlag.

Caloini, A., Magnani, G., & Pezze, M. (1998). A technique for designing robotic control systems based on Petri nets. IEEE Transactions on Control Systems Technology, 6(1), 72-87. doi:10.1109/87.654878

David, R., & Alla, H. (1992). Petri nets and Grafcet: Tools for modelling discrete event systems. UK: Prentice-Hall International.

Desrochers, A. D., & Al-Jaar, R. Y. (1995). Applications of Petri nets in manufacturing systems: Modeling, control and performance analysis. New York, NY: IEEE Press.

Garcia, F. J., & Villarroel, J. L. (1998). Decentralized implementation of real-time systems using time Petri nets: Application to mobile robot control. In Proceedings of the 5th IFAC Workshop on Algorithms and Architectures for Real-Time Control (pp. 11-16).

Girault, C., & Valk, R. (2003). Petri nets for systems engineering. Berlin, Germany: Springer-Verlag.


Grehan, R., Moote, R., & Cyliax, I. (1998). Real-time programming: A guide to 32-bit embedded development. Reading, MA: Addison-Wesley.

Hasegawa, K., Takahashi, K., Masuda, R., & Ohno, H. (1984). Proposal of mark flow graph for discrete system control. Transactions of SICE, 20(2), 122-129.

Holding, D. J., & Sagoo, J. S. (1992). A formal approach to the software control of high-speed machinery. In G. W. Irwin & P. J. Fleming (Eds.), Transputers in real-time control (pp. 239-282). Taunton, Somerset, UK: Research Studies Press.

Lee, E. J., Togueni, A., & Dangoumau, N. (2006). A Petri net based decentralized synthesis approach for the control of flexible manufacturing systems. In Proceedings of the 2006 IMACS Multiconference on Computational Engineering in Systems Applications.

Murata, T. (1989). Petri nets: Properties, analysis and applications. Proceedings of the IEEE, 77(4), 541-580. doi:10.1109/5.24143

Murata, T., Komoda, N., Matsumoto, K., & Haruna, K. (1986). A Petri net based controller for flexible and maintainable sequence control and its application in factory automation. IEEE Transactions on Industrial Electronics, 33(1), 1-8. doi:10.1109/TIE.1986.351700

Neumann, P. (2007). Communication in industrial automation: What is going on? Control Engineering Practice, 15(11), 1332-1347. doi:10.1016/j.conengprac.2006.10.004

Piedrafita, R., Tardioli, D., & Villarroel, J. L. (2008). Distributed implementation of discrete event control systems based on Petri nets. In Proceedings of the 2008 IEEE International Symposium on Industrial Electronics (pp. 1738-1745).

Reisig, W. (1985). Petri nets. Berlin, Germany: Springer-Verlag.

Silva, M. (1990). Petri nets and flexible manufacturing. In Rozenberg, G. (Ed.), Advances in Petri Nets, LNCS 424 (pp. 374-417). Berlin, Germany: Springer-Verlag.

Silva, M., & Velilla, S. (1982). Programmable logic controllers and Petri nets: A comparative study. In Ferrate, G., & Puente, E. A. (Eds.), IFAC Software for Computer Control 1982 (pp. 83-88). Oxford, UK: Pergamon.

Yasuda, G. (2000). A distributed control and communication structure for multiple cooperating robot agents. In IFAC Artificial Intelligence in Real Time Control 2000 (pp. 283-288). Oxford, UK: Pergamon.

Yasuda, G. (2010). Distributed cooperative control of industrial robotic systems using Petri net based multitask processing. In H. Liu, H. Ding, Z. Xiong, & X. Zhu (Eds.), Proceedings of the 3rd International Conference on Intelligent Robotics and Applications, LNAI 6425 (pp. 32-43). Berlin, Germany: Springer-Verlag.

Yasuda, G., & Ge, B. (2010). Petri net model based specification and distributed control of robotic manufacturing systems. In Proceedings of the 2010 IEEE International Conference on Information and Automation (pp. 699-705).

Yasuda, G., & Tachibana, K. (1992). Implementation of real-time control schemes on a parallel processing architecture using transputers. In Proceedings of the IEEE Singapore International Conference on Intelligent Control and Instrumentation (pp. 760-765).

Zhou, M., DiCesare, F., & Desrochers, A. A. (1993). Petri net synthesis for discrete event control of manufacturing systems. London, UK: Kluwer.

Zhou, M., & Venkatesh, K. (1999). Modeling, simulation, and control of flexible manufacturing systems: A Petri net approach. Singapore: World Scientific.

This work was previously published in Prototyping of Robotic Systems: Applications of Design and Implementation, edited by Tarek Sobh and Xingguo Xiong, pp. 51-69, copyright 2012 by Information Science Reference (an imprint of IGI Global).


Chapter 34

Human-Friendly Robots for Entertainment and Education

Jorge Solis
Waseda University, Japan & Karlstad University, Sweden

Atsuo Takanishi
Waseda University, Japan

ABSTRACT

Even though the market size is still small at this moment, applications of robots are gradually spreading out from the manufacturing industrial environment to face other important challenges, such as supporting an aging society and educating the new generations. The development of human-friendly robots drives research that aims at autonomous or semi-autonomous robots that are natural and intuitive for the average consumer to interact with, communicate with, and work with as partners, and that can learn new capabilities. In this chapter, an overview of research on mechanism design and the implementation of intelligent control strategies on different platforms, and on their application to the entertainment and education domains, is presented. In particular, the development of an anthropomorphic saxophonist robot (designed to mechanically reproduce the organs involved in saxophone playing) and the development of a two-wheeled inverted pendulum robot (designed to introduce the principles of mechanics, electronics, control, and programming at different education levels) are presented.

INTRODUCTION

The development of anthropomorphic robots is inspired by the ancient dream of humans replicating themselves. However, human behaviors are difficult to explain and model. The recent technological advances in robot technology,

artificial intelligence, computing power, etc. have contributed to enabling humanoid robots to roughly emulate the physical dynamics and motor dexterity of humans. Nowadays, humanoid robots are capable of displaying the motor dexterity needed for dancing, playing musical instruments, talking, etc. Although the long-term goal of truly autonomous humanoid robots has yet to be accomplished, the

DOI: 10.4018/978-1-4666-1945-6.ch034

Copyright © 2013, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.


feasibility of integrating them into people's daily lives is becoming closer. Towards developing humanoid robots capable of interacting more naturally with human partners, robots are required to process and display human-like emotions. The way a person interacts with a humanoid robot is quite different from interacting with the majority of industrial robots today. Modern robots are generally viewed as tools that human specialists use to perform hazardous tasks in remote environments. In contrast, human-like personal robots are often designed to engage people in order to achieve social or emotional goals. The development of socially intelligent and socially skillful robots drives research to develop autonomous or semi-autonomous robots that are natural and intuitive for the average consumer to interact with, communicate with, work with as partners, and teach new capabilities. In addition, this domain motivates new questions for robotics researchers, such as how to design for a successful long-term relationship in which the robot remains appealing and provides consistent benefit to people over weeks, months, and even years. The benefit that social robots provide people extends far beyond strict task-performing utility to include educational, health and therapeutic, domestic, and social and emotional goals (e.g., entertainment, companionship, communication, etc.), and more. However, these mechanical devices are still far from understanding and processing emotional states as humans do. Research on musical performance robots seems a particularly promising path toward helping to overcome this limitation, because music is a universal communication medium, at least within a given cultural context. Furthermore, research into robotic musical performance can shed light on aspects of expression that traditionally have been hidden behind the rubric of "musical intuition." The late Prof. Ichiro Kato argued that artistic activities such as playing a keyboard instrument would require human-like intelligence and dexterity (Kato, et al., 1973). In 1984, at Waseda University, the WABOT-2 was

the first attempt at developing an anthropomorphic music robot capable of playing a concert organ (Sugano & Kato, 1987). Then, in 1985, the WASUBOT, also built at Waseda, could read a musical score and play a repertoire of 16 tunes on a keyboard instrument. More recently, thanks to the technological advances in computing power, Musical Information Retrieval (MIR), and Robot Technology, several researchers have been focusing on developing anthropomorphic robots and interactive automated instruments capable of interacting with musical partners. As a result, different kinds of automated wind-playing machines and humanoid robots have been developed for playing wind instruments (Doyon & Liaigre, 1966; Klaedefabrik, 2005; Solis, et al., 2008; Takashima & Miyawaki, 2006; Solis, et al., 2009a; Dannenberg, 2005; Toyota Motor Corporation, 2011; Degallier, 2006; etc.). Other researchers have been analyzing wind instrument playing from a musical engineering approach, performing experiments with simplified mechanisms (Ando, 1970; Guillemain, et al., 2010; etc.), and from a physiological point of view, analyzing medical imaging data of professional players (Mukai, 1992; Fletcher, 2001; etc.). In this research, we particularly deal with the development of an anthropomorphic saxophone-playing robot designed to mechanically emulate the organs required during saxophone playing. Due to the interdisciplinary nature of this research, our collaboration with musicians, musical engineers, and medical doctors will certainly contribute to better reproducing and understanding human motor control from an engineering point of view. Certainly, the performance of a musical instrument is not well defined and is far from a straightforward challenge, due to the many different perspectives and subject areas involved. An idealized musical robot requires many different complex systems working together, integrating musical representation, techniques, expressions, detailed control, and sensitive multimodal interactions within the context of a piece, as well as interac-



tions between performers, and the list grows. Due to the inherent interdisciplinary nature of the topic, this research can contribute to further enhancing musical understanding, interpretation, performance, education, and enjoyment. However, if we consider using such complex mechanisms to introduce undergraduate students to the principles of robot technology, it could be difficult for them to gain hands-on experience with an anthropomorphic robot. On the other hand, the continuously falling birthrate in developed countries is reducing the number of students, most of whom are also moving away from scientific fields. This situation may severely affect industry, which could lose competitive power in the future due to a shortage of talented engineers. Moreover, the curricula of engineering universities currently lack practical design elements, resulting in a shortage of opportunities for promoting the creativity of students. For this purpose, several attempts to build educational robots have been made during the past few decades (Miller, et al., 2008).

DEVELOPMENT OF ANTHROPOMORPHIC MUSICAL ROBOTS

Background

During the golden era of automata, the "Flute Player" developed by Jacques de Vaucanson was designed and constructed as a means to understand the human breathing mechanism (Doyon & Liaigre, 1966). Vaucanson presented "The Flute Player" to the Academy of Science in 1738. For this occasion, he wrote a lengthy report carefully describing how his flutist could play exactly like a human. The design principle was that every single mechanism corresponded to a muscle (Vaucanson, 1979). Thus, Vaucanson arrived at those sounds by mimicking the very means by which a man would make them. Nine bellows were


attached to three separate pipes that led into the chest of the figure. Each set of three bellows was attached to a different weight to give out varying degrees of air, and then all pipes joined into a single one, equivalent to a trachea, continuing up through the throat and widening to form the cavity of the mouth. The lips, which bore upon the hole of the flute, could open and close, and move backwards or forwards. Inside the mouth was a movable metal tongue, which governed the airflow and created pauses. More recently, the "Flute Playing Machine" developed by Martin Riches was designed to play a specially made flute somewhat in the manner of a pianola, except that all the working parts are clearly visible (Klaedefabrik, 2005). The Flute Playing Machine is composed of an alto flute, a blower, electro-magnets, and electronics. The design principle is basically transparent in a double sense: the visual scores can be easily followed, so that the visual and acoustic information is synchronized. The pieces it plays are drawn with a felt-tip pen on long transparent music rolls, which are then optically scanned by the photo cells of a reading device. The machine has a row of 15 photocells, which read the felt-tip pen markings on a transparent roll. Their amplified signals operate the 12 keys of the flute and the valve which controls the flow of air into the embouchure. The two remaining tracks may be used for regulating the dynamics or sending timing signals to a live performer during a duet. Since 1990, the authors have been focusing on the development of an anthropomorphic flutist robot designed to mechanically emulate the anatomy and physiology of the organs involved in flute playing. In 2007, the Waseda Flutist Robot No. 4 Refined IV (WF-4RIV) was developed. The WF-4RIV has a total of 41 DOFs and is composed of the following simulated organs (Solis, et al., 2008): lungs, lips, tongue, vocal cord, fingers, and other simulated organs to hold the flute (i.e., neck and arms). The lips mechanism is composed of 3-DOFs to realize an accurate


control of the motion of the superior lip (control of the airstream's thickness), inferior lip (control of the airstream's angle), and sideway lips (control of the airstream's length). The artificial lips are made of a thermoplastic rubber named "Septon" (Kuraray Co. Ltd., Japan). The lung system is composed of two sealed acrylic cases. Each of the cases contains a bellow, which is connected to an independent crank mechanism. The crank mechanism is driven by an AC motor so that the robot can breathe air into the acrylic cases and breathe air out from them by controlling the speed of motion of the bellows. Finally, the vocal cord is composed of 1-DOF, and the artificial glottis is also made of Septon. In order to add vibration to the incoming air stream, a DC motor linked to a couple of gears is used. One of the first attempts to develop a saxophone-playing robot was made by Takashima at Hosei University (Takashima & Miyawaki, 2006). This robot, named APR-SX2, is composed of three main components: a mouth mechanism (as a pressure-controlled oscillating valve), an air supply mechanism (as a source of energy), and fingers (to make the column of air in the instrument shorter or longer). The artificial mouth consists of flexible artificial lips and a reed pressing mechanism. The artificial lips are made of a rubber balloon filled with silicone oil of the proper viscosity. The air supplying system (lungs) consists of an air pump and a diffuser tank with a pressure control system (the supplied air pressure is regulated from 0.0 MPa to 0.02 MPa). The APR-SX2 was designed under the principle that the instrument played by the robot should not be changed. A finger mechanism was designed to play the saxophone's keys (actuated by solenoids), a modified mouth mechanism was designed to attach to the mouthpiece, and no tonguing mechanism (normally reproduced by the tongue motion) was implemented. The control system implemented for the APR-SX2 is composed of one computer dedicated to the control of the key fingering, air pressure and flow, pitch of the tones, tonguing,

and pitch bending. In order to synchronize the whole performance, the musical data were sent to the control computer through MIDI in real time. In particular, the SMF format was selected to determine the status of the tongue mechanism (on or off), the vibrato mechanism (pitch or volume), and the pitch bend (applied force on the reed). The design of the APR-SX2 is thus based on the concept of reproducing melodies on a tenor saxophone under the condition that the musical instrument played by the robot should not be changed or remodeled at all; a total of twenty-three fingers are used to play the saxophone's keys (actuated by solenoids), and a modified mouth mechanism (composed of a flexible artificial lip and a reed pressing force control mechanism) attaches to the mouthpiece. In contrast, the authors proposed in Solis et al. (2009b) the development of an anthropomorphic saxophonist robot as an approach to enable interaction with musical partners. As a long-term goal, we expect the proposed saxophonist robot to be capable not only of performing a melody, but also of dynamically interacting with a musical partner (i.e., walking while playing the instrument, etc.). As a first result of our research, we presented the Waseda Saxophonist Robot No. 1 (WAS-1), which was composed of the 15 degrees of freedom (DOFs) required to play an alto saxophone (Solis, et al., 2009a). In particular, a lower lip (1-DOF), tongue (1-DOF), oral cavity, artificial lungs (air pump: 1-DOF; air flow valve: 1-DOF), and fingers (11-DOFs) were developed. Both the lips and the oral cavity were made of a thermoplastic rubber (named Septon and produced by Kuraray Co.). An improved version, the Waseda Saxophonist Robot No. 2 (WAS-2), was then presented, in which the design of the artificial lips was improved and a human-like hand was designed (Solis, et al., 2010a). Furthermore, an



Overblowing Correction Controller was implemented in order to assure a steady tone during the performance, by using the pitch feedback signal to detect the overblowing condition and by defining a recovery position to correct it (Solis, et al., 2010b). However, the range of sound pressure was still too limited to reproduce the dynamic effects of the sound (i.e., decrescendo), and deviations in the pitch were detected. Therefore, the design of the oral cavity shape has been improved to expand the range of sound pressure, and potentiometers were attached to each finger to implement a dead-time compensation controller. From the control system point of view, a Pressure-Pitch Controller has been proposed to ensure the accurate control of the pitch during the steady phase of the sound produced by the saxophone. Thus, in the following sub-section, we describe the mechanical improvements of the oral cavity and finger mechanisms, as well as the implementation of a finger dead-time compensation controller and a Multiple-Input Multiple-Output controller to assure the accurate control of both air pressure and sound pitch.

Anthropomorphic Saxophonist Robot: Mechanism Design and Control Implementation

In 2010, we developed the Waseda Saxophonist Robot No. 2 Refined (WAS-2R), which has an improved oral cavity shape for increasing the range of sound volume and sensors added to each finger for reducing the response delay. In particular, the WAS-2R is composed of 22-DOFs that reproduce the physiology and anatomy of the organs involved in saxophone playing, as follows (Figure 1): 3-DOFs to control the shape of the artificial lips, 16-DOFs for the human-like hand, 1-DOF for the tonguing mechanism, and 2-DOFs for the lung system. In addition, to improve the stability of the pitch of the sound produced, a pressure-pitch controller system has been implemented.


With the previous mechanism, it was possible to confirm the enhancement of the sound range produced by the WAS-2 (Solis, et al., 2010a). However, we detected that the note C3 could not be produced. Therefore, we decided to analyze in more detail the oral cavity (in particular, the gap between the palate and the tongue) of professional saxophonists while playing the instrument. For this purpose, we used an ultrasonic probe (ALOKA ProSound II, SSD-6500SV) to obtain images of the oral cavity of professional players while producing the sound of the note C4. Analyzing the obtained images, when a higher-volume sound is produced, a large gap between the palate and the tongue is observed; in contrast, while producing lower-volume sounds, the gap is considerably narrowed. As a result of these measurements, a new oral cavity for the WAS-2R has been designed (Figure 2). Based on the measurements obtained from the images of the professional player, the sectional area has been reduced to 156 mm² (from 523 mm² in the previous design).

Figure 1. The Waseda saxophonist robot no. 2 refined (WAS-2R)


Figure 2. Detail of the oral cavity of WAS-2R

With the previous mechanism, a human-like hand (actuated by a wire-driven mechanism) had been designed to enable the WAS-2 to push all the keys of the alto saxophone (Solis, et al., 2010a). However, due to the use of the wire-driven mechanism, a dynamic response delay (approximately 110 ms) was observed. Therefore, in order to reduce this delay, we proposed to embed sensors for measuring the rotational angle of each finger (Figure 3). For this purpose, a rotary sensor (RDC506002A from Alps Co.) has been embedded into each finger mechanism. In particular, each sensor was placed on a fixing mount produced by a rapid prototyping device (CONNEX 500). As a result, we were able to attach the sensing system without increasing the size of the whole mechanism. RC servo motors have been

used to control the wire-driven mechanism designed for each finger. As an end-effector, an artificial finger made of silicone has been designed. In order to control the sixteen RC motors, the RS-485 serial communication protocol has been used. On the other hand, the previous mouth mechanism was designed with 1-DOF to control the vertical motion of the lower lip. Based on the up/down motion of the lower lip, it became possible to control the pitch of the saxophone sound; however, it is difficult to control the sound pressure by means of 1-DOF. Therefore, the mouth mechanism of the WAS-2 consists of 2-DOFs designed to control the up/down motion of both the lower and upper lips (Figure 4a). In addition, a passive 1-DOF has been implemented to modify the shape of the sideway lips. The artificial lips are also made of Septon. In particular, the arrangement of the lip mechanism is as follows: upper lip (the rotation of the motor axis is converted into vertical motion by means of a timing belt and ball screw to avoid air flow leakage), lower lip (a timing belt and ball screw convert the rotational movement of the motor axis into vertical motion to change the amount of pressure on the reed), and sideway lip. In order to select the motor for the mouth mechanism, the force required for pressing the reed and the maximum stroke of the pins embedded in the lip were considered. The target time for

Figure 3. Details of the finger mechanism of WAS-2R



Figure 4. Mechanism details of the WAS-2R: a) mouth mechanism; b) tonguing mechanism; c) lung mechanism

the positioning was set to 100 ms. In order to assure a compact design for the mechanism, a ball screw and timing belt were used. Due to the space constraints, the ball screw SG0602-45R88C3C2Y (KSS Co.) was used. The shaft diameter is 6 mm, and the lead is 2 mm. From these, the allowable axial load and allowable revolution were calculated. The requirement of the system is to move 10 mm in 100 ms; therefore, the average speed v and acceleration a are 0.1 m/s and 4 m/s², respectively. In order to move the pin attached to both sides of the lip, the total mass of the moving part is 0.05 kg. The axial load generated when the pin is pulled is given by (1), and this value is the maximum axial load applied to the ball screw:

Fa = 8 + ma = 8 + 0.05 × 4 = 8.2 [N] (1)

The core diameter of the ball screw is d1 = 5.1 mm; therefore, the minimum moment of inertia of area of the screw shaft is given by (2):

I = (π/64) · d1⁴ = (π × 5.1⁴)/64 = 33.2 [mm⁴] (2)

The buckling load is computed by (3), where la is the distance between the two mounting surfaces (40 mm), E the Young's modulus (2.1×10⁵ N/mm²), and η1 the factor according to the mounting method (2.0):

P1 = (η1 · π² · E · I / la²) × 0.5 = 12485.6 [N] (3)

As a result of the above calculations, we confirmed that the selected ball screw is safe in use.

Then, we verified the critical speed. Since the reduction ratio A is 1, the required motor revolution is given by (4), and the sectional area S of the screw shaft is computed by (5):

Nm = (Vmax × 1000 × 60 / l) × (1/A) = (0.20 × 1000 × 60 / 2) × (1/1) = 6000 [rpm] (4)

S = (π × 5.1²)/4 = 20.4 [mm²] (5)

Finally, the allowable revolution of the threaded shaft can be computed as (6), where S is the sectional area (20.4 mm²), γ the density (7.85×10⁻⁶ kg/mm³), and λ1 the factor according to the mounting method (3.927):

N1 = (60 · λ1² / (2π · la²)) × √(E · I · g / (γ · S)) × 0.8 = (60 × 3.927² / (2π × 40²)) × √((2.1×10⁵ × 33.2 × 9.8×10³) / (7.85×10⁻⁶ × 20.4)) × 0.8 = 1520061 [rpm] (6)

From the above calculations, we confirmed that the required revolution is allowable. Thus, we decided to use this ball screw.
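The ball screw figures above can be cross-checked numerically; the following C# fragment is an illustrative check added here (not part of the original chapter) and reproduces the values of Eqs. (1), (2), (4), (5), and (6).

```csharp
// Illustrative numeric check of the ball screw calculations in
// Eqs. (1), (2), (4), (5), and (6), using the inputs given in the text.
using System;

class BallScrewCheck
{
    static void Main()
    {
        double m = 0.05, a = 4.0;                    // moving mass [kg], acceleration [m/s^2]
        double Fa = 8 + m * a;                       // Eq. (1): axial load = 8.2 N

        double d1 = 5.1;                             // core diameter [mm]
        double I = Math.PI * Math.Pow(d1, 4) / 64;   // Eq. (2): 33.2 mm^4

        double Nm = 0.20 * 1000 * 60 / 2.0;          // Eq. (4): required revolution = 6000 rpm
        double S = Math.PI * d1 * d1 / 4;            // Eq. (5): sectional area = 20.4 mm^2

        double E = 2.1e5;                            // Young's modulus [N/mm^2]
        double g = 9.8e3;                            // gravitational acceleration [mm/s^2]
        double gamma = 7.85e-6;                      // density [kg/mm^3]
        double la = 40, lambda1 = 3.927;             // span [mm], mounting factor
        double N1 = 60 * lambda1 * lambda1 / (2 * Math.PI * la * la)
                  * Math.Sqrt(E * I * g / (gamma * S)) * 0.8;   // Eq. (6): ~1.52e6 rpm

        Console.WriteLine($"Fa = {Fa:F1} N, I = {I:F1} mm^4");
        Console.WriteLine($"Nm = {Nm:F0} rpm, S = {S:F1} mm^2, N1 = {N1:E3} rpm");
    }
}
```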

After confirming the ball screw specifications, the selection of the motor was verified. For the mouth mechanism, the motor RE-25 (Maxon Co.) was used. In order to calculate the rotary torque required to translate rotary motion into linear motion, the required rotary torque T1 for the external load is defined as (7), where η is the efficiency of the ball screw (0.9):

T1 = (Fa · l / (2π · η)) × A = (8.2 × 2 / (2π × 0.9)) × 1 = 2.90 [N·mm] (7)

Because the preload torque Td of the selected ball screw is 3.0-7.0 N·mm, the generated preload torque T2 is defined as (8):

T2 = Td × A = 7.0 × 1 = 7.0 [N·mm] (8)

Considering the inertia moments of the screw shaft and of the pulley on the motor side, the inertia moment J is computed as (9), where JS is the inertia moment of the screw shaft (2.5×10⁻⁸ kg·m²) and JB is the inertia moment of the pulley on the motor side (9.11×10⁻⁷ kg·m²):

J = m · (l/2π)² × 10⁻⁶ + JS + JB = 0.05 × (2/2π)² × 10⁻⁶ + 2.50×10⁻⁸ + 9.11×10⁻⁷ = 9.41×10⁻⁷ [kg·m²] (9)

Because the acceleration time is 0.05 s, the angular acceleration is computed as (10); therefore, the required acceleration torque T3 is given by (11):

ω̇ = 2π · Nm / (60 · t) = (2π × 6000) / (60 × 0.050) = 12566.3 [rad/s²] (10)

T3 = J × ω̇ × 10³ = 9.41×10⁻⁷ × 12566.3 × 10³ = 11.83 [N·mm] (11)

From the torques calculated above, the total required acceleration torque TK is given by (12):

TK = T1 + T2 + T3 = 2.90 + 7.0 + 11.83 = 21.73 [N·mm] (12)



The effective value of the torque required of the motor is then computed as (13). As a result of these calculations, it is verified that the RE-25 motor covers the required specifications.

Trms = √((T1² × t1 + T2² × t2 + T3² × t3) / t) = √((2.90² × 0.10 + 7.0² × 0.10 + 21.73² × 0.05) / (0.10 + 0.10 + 0.05)) = 5.033 [N·mm] (13)

On the other hand, the tonguing mechanism is shown in Figure 4b. The motion of the tongue tip is controlled by a DC motor connected to a link attached to the motor axis. In this way, the airflow can be blocked by controlling the motion of the tongue tip. Thanks to this tonguing mechanism of the WAS-2, the attack and release of a note can be reproduced. In order to select the motor for the tongue mechanism, we assumed a response time of 20 ms. As the motor of the tongue mechanism should rotate 20 deg in 20 ms, the average angular speed is 17.45 rad/s. To approximate the real lingual motion speed, the maximum angular speed is 34.9 rad/s; therefore, the angular acceleration is 3490.7 rad/s². The torque required to rotate the tongue mechanism covered with Septon is 5.5×10⁻² N·m, and the moment of inertia about the center of rotation of the parts rotating with the tongue is 1.19×10⁻⁵ kg·m². Therefore, the total torque Ttotal required to drive the tongue mechanism is computed by (14):

Ttotal = T + I·θ̈ = 0.09654 [N·m] = 96.54 [N·mm] (14)
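Similarly, the torque budget of Eqs. (7) to (12) and Eq. (14) can be cross-checked with the short illustrative C# fragment below (again, not part of the original chapter).

```csharp
// Illustrative numeric check of the motor torque calculations: Eqs. (7)-(12)
// for the mouth mechanism and Eq. (14) for the tongue mechanism.
using System;

class TorqueCheck
{
    static void Main()
    {
        double Fa = 8.2, lead = 2.0, eta = 0.9, A = 1.0;
        double T1 = Fa * lead / (2 * Math.PI * eta) * A;      // Eq. (7): 2.90 N*mm
        double T2 = 7.0 * A;                                  // Eq. (8): 7.0 N*mm

        double m = 0.05, Js = 2.5e-8, Jb = 9.11e-7;
        double J = m * Math.Pow(lead / (2 * Math.PI), 2) * 1e-6 + Js + Jb;  // Eq. (9)

        double omegaDot = 2 * Math.PI * 6000 / (60 * 0.050);  // Eq. (10): 12566.3 rad/s^2
        double T3 = J * omegaDot * 1e3;                       // Eq. (11): 11.83 N*mm
        double Tk = T1 + T2 + T3;                             // Eq. (12): 21.73 N*mm

        double Ttongue = 5.5e-2 + 1.19e-5 * 3490.7;           // Eq. (14): 0.09654 N*m

        Console.WriteLine($"T1={T1:F2}, T2={T2:F1}, T3={T3:F2}, Tk={Tk:F2} N*mm");
        Console.WriteLine($"Tongue total torque = {Ttongue:F5} N*m");
    }
}
```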

As a result of the above calculations, and because of the motor size, the motor RE-30 (Maxon Co.) was selected for the tongue mechanism. Regarding the WAS-2R's air source, a DC servo motor has been used to control the motion of the air pump diaphragm, which is connected to an eccentric crank mechanism (Figure 4c). This mechanism has been designed to provide a minimum airflow of 20 L/min and a minimum pressure of 30 kPa. In addition, a DC servo motor has been used to control the motion of an air valve so that the air delivered by the air pump is effectively rectified. In order to select the motor for the lung mechanism, the requirement specification was based on the maximum oral cavity pressure (8 kPa) and the calculation of the external force F computed by (15), where Fa is the inertia force, Fk the spring force, and Fp the pressure force. The force Fl applied to the motor arm is then computed by (16), where θ is the angle of rotation and ϕ is the angle of the arm. Finally, based on the motor load torque T given by (17), where r is the arm length, the motor RE-30 (Maxon Co.) was selected:

F = Fa + Fk + Fp (15)

Fl = F · sin(ϕ + θ) / cos ϕ (16)

T = Fl · r (17)

Regarding the control system, in our previous research a feed-forward air pressure controller with dead-time compensation was implemented to ensure the accurate control of the air pressure during the attack time (Solis, et al., 2010b), and a simple ON/OFF controller was implemented for the control of the finger mechanism. In particular, feedback error learning during the attack phase of the sound was used to create the inverse dynamics model of the Multiple-Input Single-Output (MISO) controlled system based on Artificial Neural Networks (ANN). In addition, an Overblowing Correction Controller (OCC) was proposed and implemented in order to ensure a steady tone during the performance, by using the pitch feedback signal to detect the overblowing condition and by defining a recovery position (offline) to correct it (Solis, et al., 2010b). However, we still detected deviations in the pitch while playing the saxophone.


Therefore, we proposed the implementation of the control system shown in Figure 5a. In particular, the improved control system includes a dead-time compensation controller for the finger mechanism (to reduce the effect of the response delay due to the wire-driven mechanism) and a Pressure-Pitch Controller (PPC) for the control of the valve and lip mechanisms (to assure the accurate control of the pitch). Regarding the implementation of the dead-time compensation control, for each finger of the WAS-2R, the pressing time of the saxophone's key is measured by means of the embedded potentiometer sensor (defined as LN, where N represents the total number of DOFs designed for the finger mechanism). By including the dead-time factor (represented as e^(−sL)), it is possible

to compensate the finger's response delay during saxophone playing (Kim, et al., 2003). As for the pressure-pitch control, a controller for the sustain phase of the sound has been proposed, not only to ensure the accurate control of the air pressure during the attack phase, but also to ensure the accurate control of both air pressure and sound pitch during the sustain phase. For this purpose, we implemented the feedback error learning method (Kawato & Gomi, 1992) to create the inverse model of the proposed Multiple-Input Multiple-Output (MIMO) system, which is computed by means of an ANN. During the training process, the inputs of the ANN are defined as follows (Figure 5b): the pressure reference (PressureREF) and the pitch reference (PitchREF). In this

Figure 5. Detail of the control system implemented for the WAS-2R: a) block diagram of the improved control system; b) detail of the ANN during the learning phase based on the feedback error learning method



case, a total of six hidden units were used (determined experimentally while varying the number of hidden units). As outputs, the positions of the air valve (ΔValve) and lower lip (ΔLip) are controlled to ensure the accurate control of the air pressure and pitch required to produce the saxophone sound. Moreover, during the training phase, the air pressure (PressureRES) and sound pitch (PitchRES) are used as feedback signals, and both outputs from the feedback controller are used as teaching signals for effectively training the ANN. After the training phase, the created inverse model is used during saxophone performances.
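Concretely, the feedback error learning update for such a 2-6-2 network can be sketched in C# as follows; the learning rate, initialization, and member names are illustrative assumptions rather than the actual WAS-2R code.

```csharp
// Sketch of feedback error learning for the inverse model: a 2-6-2 network
// maps (PressureREF, PitchREF) to (dValve, dLip); the feedback controller's
// output serves as the error signal for training (hypothetical parameters).
using System;

class InverseModel
{
    const int In = 2, Hid = 6, Out = 2;
    readonly double[,] w1 = new double[Hid, In + 1];   // input-to-hidden weights (+bias)
    readonly double[,] w2 = new double[Out, Hid + 1];  // hidden-to-output weights (+bias)
    readonly double[] h = new double[Hid];
    const double Rate = 0.01;                          // learning rate (assumed)

    static double Sigmoid(double x) => 1.0 / (1.0 + Math.Exp(-x));

    public double[] Forward(double[] x)
    {
        for (int j = 0; j < Hid; j++)
        {
            double s = w1[j, In];                      // bias term
            for (int i = 0; i < In; i++) s += w1[j, i] * x[i];
            h[j] = Sigmoid(s);
        }
        var y = new double[Out];                       // (dValve, dLip)
        for (int k = 0; k < Out; k++)
        {
            y[k] = w2[k, Hid];                         // bias term
            for (int j = 0; j < Hid; j++) y[k] += w2[k, j] * h[j];
        }
        return y;
    }

    // The feedback controller output uFb is used directly as the output
    // error, so the network gradually takes over from the feedback term.
    public void Train(double[] x, double[] uFb)
    {
        Forward(x);
        var dHid = new double[Hid];
        for (int k = 0; k < Out; k++)
            for (int j = 0; j < Hid; j++)
                dHid[j] += uFb[k] * w2[k, j];          // backpropagated error
        for (int k = 0; k < Out; k++)
        {
            w2[k, Hid] += Rate * uFb[k];
            for (int j = 0; j < Hid; j++) w2[k, j] += Rate * uFb[k] * h[j];
        }
        for (int j = 0; j < Hid; j++)
        {
            double g = dHid[j] * h[j] * (1 - h[j]);    // sigmoid derivative
            w1[j, In] += Rate * g;
            for (int i = 0; i < In; i++) w1[j, i] += Rate * g * x[i];
        }
    }
}
```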

ing the saxophone, we programmed the WAS-2R to play the main theme of the “Moonlight Serenade” composed by Glenn Miller before and after training the inverse model. In particular, as for the neural network parameters, a total of 6 hidden units were used. For the training process, a total of 144 steps were done. The experimental results are shown in Figure 6b; where 1[cent] is defined as (equi-tempered semitone/100). As we could observe, the deviations of the pitch after the training (Standard Error is 41.7) are considerable less than before training (Standard Error is 2372.8).

Musical Performance In order to verify if the re-designed shape of the oral cavity contributes to extend the range of sound pressure, we have compared the previous mechanism with the new one while playing the notes from C3 to C5. The average sound pressure ranges for WAS-2R and WAS-2 are 17.7 dB and 9.69 dB, respectively. Moreover, an intermediate player and professional are 13.2 and 22.6 respectively. From this result, we confirmed an increment of 83% thanks to the new shape of the oral cavity. Therefore, we could conclude that the shape of the gap between the palate and tongue has a big influence on the sound pressure range. Thanks to this considerable improvement on the range of sound pressure, we proposed to compare the reproduction of the decrescendo, which is a dynamic sound effect that gradually reduces the loudness of the sound. For this purpose, we programmed the WAS-2 and WAS-2R to play the principal theme of the “Moonlight Serenade” composed by Glenn Miller. The experimental results are shown in shown in Figure 6a. As we may observe, the WAS-2R was able of reproducing nearly similar to the performance of the professional one. On the other hand, in order to determine the effectiveness of the proposed pressure-pitch controller to reduce the pitch deviations while play-

604

Figure 6. Experimental results: a) reproduction of decrescendo effect; b) comparing the deviations of the pitch before and after training the inverse model of the proposed MIMO system with the WAS-2R.

Human-Friendly Robots for Entertainment and Education

DEVELOPMENT OF EDUCATIONAL ROBOTS

Background

Even though several universities and companies have been building robotic platforms for educational purposes, there is still no platform designed to intuitively introduce the principles of RT from the fundamentals to their application to solving real-world problems. Most of the current educational platforms focus on providing the basic components that enable students to build their own systems; however, such platforms merely introduce basic control methods (i.e., sequential control), basic programming (i.e., flow chart design, C language), and basic mechanism design. As an approach to cover different aspects of Robot Technology, in this project we focused on developing an educational tool designed to introduce the principles of developing mechatronic systems at different educational levels. In particular, the development of an inverted pendulum mobile robot has been proposed. In fact, the inverted pendulum has been the subject of numerous studies in automatic control (Grasser, et al., 2002; Salerno & Angeles, 2007; Koyanagi, et al., 1992; Kim, et al., 2003; Pathak, et al., 2005; etc.), in introductions to mechatronics (Solis & Takanishi, 2009; etc.), etc. Several attempts to build educational robots have been made during the past few decades (Miller, et al., 2008). The development of educational robots started in the early 1980s with the introduction of the Heathkit Hero-1 (Heath Co.). This robot was designed to encourage students to learn how robots are built; however, no information on the theory or principles behind the assembly was given. More recently, several other companies, in cooperation with universities and research centers, have been trying to introduce educational robots to the market. Some examples are as follows: K-Team (K-TEAM Ltd.) introduced the Hemisson, which is

a low-cost educational robot designed to provide an introduction to robot programming using reduced computational power and few sensors. Another example is the LEGO® Mindstorms RCX, a good tool for early and fast robot design using LEGO blocks (LEGO Ltd.). In Japan, we can also find examples such as the RoboDesigner kit, designed to provide a general platform that enables students to build their own robots (Japan Robotech Ltd.), and the ROBOVIE-MS from ATR Robotics, designed as an educational tool to introduce the principles of mechanical manufacturing, assembly, and operational programming of a small-sized humanoid robot. From the perspective of introducing RT to undergraduate students, the inverted pendulum is a good example for providing experience in control design, signal processing, distributed control systems, and the consideration of real-time constraints in real applications. However, most of the currently proposed robots do not consider educational issues in the design of the inverted pendulum (i.e. the possibility of changing the center of mass, etc.). In addition, the authors consider it important to introduce human-robot interaction to motivate students' further interest (i.e. the size of the robot should fit the size of a personal mobile computer, etc.). Therefore, the authors have proposed the development of a two-wheeled inverted pendulum type mobile robot designed to cover the basic principles of electronics, mechanical engineering, and programming, as well as more advanced topics in control engineering, complex programming, and embedded systems. As a result of our research, the Waseda Wheeled Vehicle No. 2 Refined (WV-2R) was introduced (Solis, et al., 2009c). In particular, the WV-2R has been designed to enable students to verify the changes in the response of the robot while varying some of its physical parameters. From the experimental results, we confirmed some of the educational functions of the proposed robot (i.e. PID tuning, varying the center of mass, etc.). However, a hand-made control


board was used, so several problems with wire connections were detected. Furthermore, the WV-2R did not include any additional mechanism for proposing different kinds of robot contests. Finally, from our discussions with undergraduate students, we concluded that the development of a simulator could considerably increase their knowledge.

Figure 7. The Waseda wheeled vehicle no. 2 refined II (WV-2RII).

Two-Wheeled Inverted Pendulum Robot: Mechanism Design and Control Implementation

In 2010, the Waseda Wheeled Vehicle Robot No. 2 Refined II (WV-2RII) was developed as an educational robot implementing different educational functions that introduce undergraduate students to the principles of RT (Figure 7). The specifications are shown in Table 1. The WV-2RII is composed of two actuated wheels, a general-purpose control board (Figure 8a), an adjustable weighting bar attached to the pendulum, gyroscope and accelerometer sensors, a remote controller (Figure 8b), and two optional mechanisms that can easily be attached to or detached from the main body of the robot. In particular, the general-purpose control board consists of a 32-bit ARM microcontroller, 10 general-purpose I/O ports, 2 motor drivers, an LCD display, 8 LEDs, a ZigBee module, and 2 servo connectors. The WV-2RII is endowed with two active wheels actuated by DC motors. The model description is shown in Figure 9, where the parameters are defined as follows:

θ: Tilt angle of the chassis
φ: Rotation angle of the wheel about its axle
m1: Mass of the chassis
m2: Wheel mass
J1: Moment of inertia of the chassis
J2: Moment of inertia of the wheel
l: Distance between the wheel axis and the robot's center of mass
r: Wheel radius


Table 1. The specifications of the WV-2RII

Parameter          Specification
Height [mm]        530
Weight [kg]        3.8
DOFs               2
Microcontroller    STM32F103VB x 1
Sensors            Accelerometer x 1; Rate Gyro x 1; Optical Encoder x 2
Motor              RDO-37BE50G9 (12 V) x 2
Power Supply       Battery: 6 V x 1; RC battery: 12 V x 1
Remote Controller  ZigBee: 2.6 GHz

Figure 8. a) general-purpose control board; b) remote controller for the WV-2RII


Figure 9. The model of the two-wheeled inverted pendulum robot

By using the above parameters, and by defining T as the motor torque, n as the reduction ratio of the gear, and S as the frictional force on the wheel along the horizontal ground plane (where fx and fy are the components of the force acting between the wheel and the pendulum at the center of the wheel), we may define Equations (18)-(23):

m2 x2'' = S - fx    (18)

J2 φ'' = nT - rS    (19)

m1 (x2'' + (l sin θ)'') = fx    (20)

m1 (l cos θ)'' = -m1 g + fy    (21)

J1 θ'' = fy l sin θ - fx l cos θ - nT    (22)

x = rφ    (23)

Equations (24) and (25) follow from the above equations upon elimination of the intermediate variables fx, fy, and S. From Equation (24), we may notice that when the angular acceleration of the body is less than zero, it is possible to correct the vertical inclination of the body to the standing upright position.

θ'' = [ m1 g l sin θ - (m1 l cos θ / (J2/r + r m1 + r m2)) (nT + m1 r l θ'^2 sin θ) - nT ] / [ J1 + m1 l^2 + m1^2 r l^2 cos^2 θ / (J2/r + r m1 + r m2) ]    (24)

φ'' = [ nT - m1 r l (θ'' cos θ - θ'^2 sin θ) ] / (J2 + r^2 m1 + r^2 m2)    (25)

If we define the maximum tilt angle of the chassis as 50 degrees and substitute the physical parameters of the WV-2RII (m1 = 2.247 kg; m2 = 0.800 kg; J1 = 0.015 kg·m^2; J2 = 0.002 kg·m^2; g = 9.81 m/s^2; l = 0.0477 m; r = 0.0725 m) into Eq. (24), we obtain the following relation:

nT ≥ 2.2 [N·m]


Based on the above relation, we selected the motor RDO-37BE50G9 (stall torque 0.160 N·m, gear ratio 9:1). If we consider the coefficient of safety of the power generated by the two motors to be 0.8, then nT = 2.3 N·m, which satisfies the required specification.

On the other hand, as previously introduced, we developed two additional mechanisms that can easily be attached to the main body of the WV-2RII: a kicking mechanism for soccer (Figure 10a) and an arm mechanism for sumo (Figure 10b). The soccer-kicking mechanism is composed of a spring, a hook, a stopper, and a DC motor. In order to kick the ball, a tension spring is used to increase the speed of movement of the kicking mechanism (maximum output load of 22 N). Basically, the kicking mechanism is attached to a hook which is moved until a certain point; when the hook is automatically released (by a stopper), the reaction force accumulated by the spring is used to kick the ball. The sumo-arm mechanism is composed of a slider-crank mechanism actuated by a DC motor, an arm base actuated by an RC motor to adjust the pitch of the whole arm mechanism, and a pushing plate with embedded switches for detecting contact with the opponent. In the gear wheels of the slider of the crank mechanism, fixed and movable racks are used. The rotational motion of the crank is transmitted to the gear wheels, and the movable rack moves at twice the stroke of the fixed rack. From this, the

arm mechanism provides a large stroke (around 88 mm) using a compact mechanism. As a further example of an application of the WV-2RII showing the potential of the proposed system, a female undergraduate student (from a mechanical engineering background) on an internship at Waseda University was asked to design an upper body, with an appearance and gestures appealing to children, for this new additional robot. For this purpose, we asked the student to design the upper-body mechanism and to develop the required commands for controlling it from the remote controller integrated with the WV-2RII. The mechanism designed by the internship student is detailed in Figure 11a. The proposed upper body uses 4 RC motors to control the motion of the head (2 DOFs) and the arms/wings (2 DOFs), lending more expression to the robot. The possible motions realizable by the upper body are shown in Figure 11b. In Figure 12, the block diagram of the control system implemented for the WV-2RII is shown. As we may observe, the WV-2RII is controlled by a feedback control system. In particular, the rate gyro sensor measures the body angular velocity (θ') and the encoder measures the wheel rotation angle (φ). Because the drift in the signal obtained from the gyro is extremely small, a high-pass filter is not required; therefore, only a low-pass filter is used to compute the

Figure 10. Detail of the additional mechanisms designed for WV-2RII: a) soccer-kicking mechanism; b) sumo-arm mechanism


Figure 11. Pictures of the possible motions of the upper body mounted on the WV-2RII

body angular velocity (θ'), where the cut-off frequency is 0.32 Hz. To compute the body angle and the wheel angular velocity, the body angular velocity and the wheel angle are integrated and differentiated, respectively. In order to control all the parameters, a feedback controller has been implemented using Equation (26), where the parameters k1~k6 are the gain coefficients of the controller, tuned to assure the stabilization of the system. Furthermore, a current feedback controller has been implemented using Equation (27), where the parameter k7 is tuned to assure accurate control of the command current to each motor. As for the command control signals, θREF, φ'REF, and α'REF are set to zero, while the other commands are sent by the remote controller.
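As a rough sketch of this sensor-processing chain (our own illustration; the chapter gives only the 0.32 Hz cut-off, so the sampling rate and the discrete first-order filter form below are assumptions), one sampling step could be implemented as:

```python
import math

fs = 100.0                  # assumed sampling rate [Hz] (not stated in the chapter)
fc = 0.32                   # low-pass cut-off frequency from the text [Hz]
w = 2 * math.pi * fc / fs
alpha = w / (w + 1)         # first-order exponential smoothing factor

theta_dot_f = 0.0           # filtered body angular velocity
theta = 0.0                 # body angle, integrated from the gyro
phi_prev = 0.0              # previous wheel angle from the encoder

def process_sample(gyro_rate, phi):
    """One sampling step: low-pass filter the gyro signal, integrate it to
    obtain the body angle, and differentiate the encoder angle to obtain
    the wheel angular velocity."""
    global theta_dot_f, theta, phi_prev
    theta_dot_f += alpha * (gyro_rate - theta_dot_f)  # low-pass filter
    theta += theta_dot_f / fs                         # integration -> body angle
    phi_dot = (phi - phi_prev) * fs                   # differentiation -> wheel speed
    phi_prev = phi
    return theta, theta_dot_f, phi_dot
```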

Control Stability

In order to verify the robustness of the proposed controller implemented for the WV-2RII, we placed the pendulum horizontally on the ground without activating the control. From this starting position, we activated the control system and set the vertical position (90 degrees) as the control goal.

Figure 12. Control block diagram implemented for the WV-2RII


Equations (26) and (27):

ioutR = k1·θ + k2·θ' + k3·(φ − φREF) + k4·(φ' − φ'REF) + k5·(α − αREF) + k6·(α' − α'REF)
uR = k7·(ioutR − iR)    (26)

ioutL = k1·θ + k2·θ' + k3·(φ − φREF) + k4·(φ' − φ'REF) − k5·(α − αREF) − k6·(α' − α'REF)
uL = k8·(ioutL − iL)    (27)
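A direct transcription of Equations (26) and (27) into code might look as follows (a sketch under our own naming conventions; the actual gain values k1–k8 are tuned on the robot and are not given in the chapter):

```python
def wheel_commands(state, ref, k):
    """State-feedback current commands (Eqs. 26-27) plus the inner current loop.
    state: dict with theta, theta_dot, phi, phi_dot, alpha, alpha_dot, iR, iL
    ref:   dict with phi, phi_dot, alpha, alpha_dot reference values
    k:     list of eight tuned gain coefficients k[1]..k[8] (k[0] unused)
    """
    common = (k[1] * state["theta"] + k[2] * state["theta_dot"]
              + k[3] * (state["phi"] - ref["phi"])
              + k[4] * (state["phi_dot"] - ref["phi_dot"]))
    steer = (k[5] * (state["alpha"] - ref["alpha"])
             + k[6] * (state["alpha_dot"] - ref["alpha_dot"]))
    i_out_r = common + steer              # Eq. (26): right-wheel current command
    i_out_l = common - steer              # Eq. (27): left-wheel current command
    u_r = k[7] * (i_out_r - state["iR"])  # inner current-feedback loop, right
    u_l = k[8] * (i_out_l - state["iL"])  # inner current-feedback loop, left
    return u_r, u_l
```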

From this experiment, we may observe the dynamic response of the WV-2RII by analyzing the measured body angle θ and motor current. The experimental results are shown in Figure 13. As we may observe, the WV-2RII requires around 0.8 s to reach the target position, with a maximum current of 3 A (the current circuit has been designed to support a peak current of up to 7 A).

FUTURE RESEARCH DIRECTIONS

Conventionally, anthropomorphic musical robots are mainly equipped with sensors that allow them to acquire information about their environment. Based on the anthropomorphic design of humanoid robots, it is therefore important to emulate two of

the human's most important perceptual organs: the eyes and the ears. For this purpose, the humanoid robot integrates vision sensors in its head and aural sensors attached to the sides for stereo-acoustic perception. In a musical interaction, a major part of a typical performance (i.e. jazz) is based on improvisation. In these parts, musicians take turns playing solos based on the harmonies and rhythmical structure of the piece. Upon finishing a solo section, a musician gives a visual signal, a motion of the body or the instrument, to designate the next soloist. Toward enabling multimodal interaction between musicians and musical robots, a Musical-based Interaction System (MbIS) will be integrated into the Waseda Saxophonist Robot (Figure 14a). The MbIS has been conceived to enable interaction between the musical robot and musicians (Petersen, et al.,

Figure 13. Experimental results while programming the WV-2RII to rise from the ground by analyzing the body angle and the applied motor current


Figure 14. a) proposed musical-based interaction system; b) two-wheeled double inverted pendulum

2010). Even though the WAS-2R still requires several improvements from the mechanical and control points of view, we expect that the robot can be used for practical applications such as the entertainment of elderly people, the reproduction of performances by famous saxophonists who have passed away, and the education of young players. On the other hand, in order to introduce interactive educational robotic systems, the educational platform (both for university students and for engineers in industry) must be designed to cover the basic principles of electronics, mechanics, and programming, as well as more advanced topics in control, advanced programming, and human-robot interaction. Moreover, to enhance the entertainment aspect, the educational platform could also include some aspects of art (i.e. music, etc.) to teach other basics such as signal processing (i.e. music information retrieval, etc.), recognition systems (i.e. Hidden Markov Models, etc.), game design (i.e. audio/motion design), etc. Further challenges in the dynamic con-

trol of a two-wheeled double inverted pendulum robot can also be conceived (Figure 14b). Based on this approach, it is possible to use the platform in classes beyond the classical electrical, mechanical, and mechatronics engineering curricula, including music engineering (Martin, et al., 2009; Yanco, et al., 2007), etc. The WV-2RII is now being commercialized as "MiniWay" by Japan Robotech Ltd. Even though this robot has been designed as an educational robot, it is possible to conceive (with some mechanical and control design modifications) of different kinds of practical applications, such as baggage transportation within an airport, guidance for visitors, or the entertainment of children at museums.

CONCLUSION

In this chapter, the mechanism design and control implementation proposed for two different human-friendly robotic platforms have been introduced.


In particular, the development of an anthropomorphic saxophonist robot and a two-wheeled inverted pendulum robot has been detailed. The saxophonist robot has been designed to reproduce the organs involved in saxophone playing, and a feed-forward controller has been implemented in order to accurately control both the air pressure and the sound pitch during a musical performance. The two-wheeled inverted pendulum robot has been designed to introduce the principles of robot technology at different educational levels, and a feedback controller has been implemented in order to assure the stability of the inverted pendulum.

ACKNOWLEDGMENT

Part of the research on the Waseda Saxophonist Robot and the Waseda Vehicle Robot was done at the Humanoid Robotics Institute (HRI), Waseda University, and at the Center for Advanced Biomedical Sciences (TWIns). This research is supported (in part) by a Grant-in-Aid for the WABOT-HOUSE Project by Gifu Prefecture. This work is also supported (in part) by the Global COE Program "Global Robot Academia" from the Ministry of Education, Culture, Sports, Science, and Technology of Japan. Finally, the study on the Waseda Saxophonist Robot is supported (in part) by a Grant-in-Aid for Young Scientists (B) provided by the Japanese Ministry of Education, Culture, Sports, Science, and Technology, No. 23700238 (J. Solis, PI).

Dannenberg, R. B., Brown, B., Zeglin, G., & Lupish, R. (2005). McBlare: A robotic bagpipe player. In Proceedings of the International Conference on New Interfaces for Musical Expression, (pp. 80-84). ACM.

de Vaucanson, J. (1979). Le mécanisme du fluteur automate: An account of the mechanism of an automaton: Or, image playing on the german-flute. In Vester, F. (Ed.), The Flute Library: First Series No. 5. Dordrecht, The Netherlands: Uitgeverij Frits Knuf.

Degallier, S., Santos, C. P., Righetti, L., & Ijspeert, A. (2006). Movement generation using dynamical systems: A humanoid robot performing a drumming task. In Proceedings of the IEEE-RAS International Conference on Humanoid Robots, (pp. 512-517). IEEE.

Doyon, A., & Liaigre, L. (1966). Jacques Vaucanson: Mecanicien de genie. Paris, France: PUF.

Fletcher, N., Hollenberg, L., Smith, J., & Wolfe, J. (2001). The didjeridu and the vocal tract. In Proceedings of the International Symposium on Musical Acoustics, (pp. 87-90). ACM.

Grasser, F., D'Arrigo, A., Colombi, S., & Rufer, A. (2002). Joe: A mobile inverted pendulum. IEEE Transactions on Industrial Electronics, 49(1), 107–114. doi:10.1109/41.982254

Guillemain, P., Vergez, C., Ferrand, D., & Farcy, A. (2010). An instrumented saxophone mouthpiece and its use to understand how an experienced musician plays. Acta Acoustica, 96(4), 622–634. doi:10.3813/AAA.918317

REFERENCES

Heath Company. (2011). Website. Retrieved from http://www.hero-1.com/broadband/.

Ando, Y. (1970). Drive conditions of the flute and their influence upon harmonic structure of generated tone. Journal of the Acoustical Society of Japan, 297-305.

Japan Robotech Ltd. (2011). Website. Retrieved from http://www.japan-robotech.com/eng/index. html.


K-TEAM. (2011). Website. Retrieved from http:// www.k-team.com.


Kato, I., Ohteru, S., Kobayashi, H., Shirai, K., & Uchiyama, A. (1973). Information-power machine with senses and limbs. In Proceedings of the CISMIFToMM Symposium on Theory and Practice of Robots and Manipulators, (pp. 12-24). ACM.

Miller, D., Nourbakhsh, I., & Siegwart, R. (2008). Robots for education. In Siciliano, B., & Khatib, O. (Eds.), Springer Handbook of Robotics (pp. 1287–1290). Berlin, Germany: Springer. doi:10.1007/978-3-540-30301-5_56

Kawato, M., & Gomi, H. (1992). A computational model of four regions of the cerebellum based on feedback-error-learning. Biological Cybernetics, 68, 95–103. doi:10.1007/BF00201431

Mukai, S. (1992). Laryngeal movement while playing wind instruments. In Proceedings of International Symposium on Musical Acoustics, (pp. 239–241). ACM.

Kim, H., Kim, K., & Young, M. (2003). On-line dead-time compensation method based on time delay control. IEEE Transactions on Control Systems Technology, 11(2), 279–286. doi:10.1109/ TCST.2003.809251

Pathak, K., Franch, J., & Agrawal, S. K. (2005). Velocity and position control of a wheeled inverted pendulum by partial feedback linearization. IEEE Transactions on Robotics, 21(3), 505–513. doi:10.1109/TRO.2004.840905

Kim, Y. H., Kim, S. H., & Kwak, Y. K. (2003). Dynamic analysis of a nonholonomic two-wheeled inverted pendulum robot. In Proceedings of the Eighth International Symposium on Artificial Life and Robotics, (pp. 415-418). ACM.

Petersen, K., Solis, J., & Takanishi, A. (2010). Musical-based interaction system for the waseda flutist robot: Implementation of the visual tracking interaction module. Autonomous Robots Journal, 28(4), 439–455. doi:10.1007/s10514-010-9180-5

Klaedefabrik, K. B. (2005). Martin riches Maskinerne / the machines. Berlin, Germany: Kehrer Verlag.

Salerno, A., & Angeles, J. (2007). A new family of two wheeled mobile robot: Modeling and controllability. IEEE Transactions on Robotics, 23(1), 169–173. doi:10.1109/TRO.2006.886277

Koyanagi, E., Iida, S., & Yuta, S. (1992). A wheeled inverse pendulum type self-contained mobile robot and its two-dimensional trajectory control. In Proceedings of ISMCR, (pp. 891-898). ISMCR.

Kuraray Co. (2011). Website. Retrieved from http://www.kuraray.co.jp/en/.

LEGO. (2011). Website. Retrieved from http://mindstorms.lego.com/.

Martin, F., Greher, G., Heines, J., Jeffers, J., Kim, H. J., & Kuhn, S. (2009). Joining computing and the arts at a mid-size university. Journal of Computing Sciences in Colleges, 24(6), 87–94.

Solis, J., Ninomiya, T. N., Petersen, K., Takeuchi, M., & Takanishi, A. (2009a). Development of the anthropomorphic saxophonist robot WAS-1: Mechanical Design of the simulated organs and implementation of air pressure. Advanced Robotics Journal, 24, 629–650. doi:10.1163/016918610X493516 Solis, J., Petersen, K., Ninomiya, T., Takeuchi, M., & Takanishi, A. (2009b). Development of anthropomorphic musical performance robots: From understanding the nature of music performance to its application in entertainment robotics. In Proceedings of the 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems, (pp. 2309-2314). IEEE Press.


Solis, J., Petersen, K., Yamamoto, T., Takeuchi, M., Ishikawa, S., Takanishi, A., & Hashimoto, K. (2010a). Design of new mouth and hand mechanisms of the anthropomorphic saxophonist robot and implementation of an air pressure feedforward control with dead-time compensation. In Proceedings of the International Conference on Robotics and Automation, (pp. 42-47). ACM. Solis, J., Petersen, K., Yamamoto, T., Takeuchi, M., Ishikawa, S., Takanishi, A., & Hashimoto, K. (2010b). Implementation of an overblowing correction controller and the proposal of a quantitative assessment of the sound’s pitch for the anthropomorphic saxophonist robot WAS-2. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, (pp. 1943-1948). IEEE Press. Solis, J., & Takanishi, A. (2009c). Introducing robot technology to undergraduate students at waseda university. In Proceedings of the ASME Asia-Pacific Engineering Education Congress. ASME. Solis, J., Taniguchi, K., Ninomiya, T., & Takanishi, A. (2008). Understanding the mechanisms of the human motor control by imitating flute playing with the waseda flutist robot WF-4RIV. Mechanism and Machine Theory, 44(3), 527–540. doi:10.1016/j.mechmachtheory.2008.09.002 Sugano, S., & Kato, I. (1987). WABOT-2: Autonomous robot with dexterous finger-arm coordination control in keyboard performance. In Proceedings of the International Conference on Robotics and Automation, (pp. 90-97). ACM. Takashima, S., & Miyawaki, T. (2006). Control of an automatic performance robot of saxophone: Performance control using standard MIDI files. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems Workshop: Musical Performance Robots and Its Applications, (pp. 30-35). IEEE Press.


Toyota Motor Corporation. (2011). Website. Retrieved from http://www.toyota-global.com/ innovation/partner_robot/. Yanco, H. A., Kim, H. J., Martin, F. G., & Silka, L. (2007). Artbotics: Combining art and robotics to broaden participation in computing. In Proceedings of the AAAI Spring Symposium Robots and Robot Venues. AAAI.

ADDITIONAL READING Aggarwal, R., Darzi, A., & Sullivan, P. (2009). Skills acquisition and assessment after a microsurgical skills course for ophthalmology residents. Ophthalmology, 116(2), 257–262. doi:10.1016/j. ophtha.2008.09.038 Billard, A. (2003). Robota: Clever toy and educational tool. Robotics and Autonomous Systems, 42, 259–269. doi:10.1016/S0921-8890(02)00380-9 Hashimoto, S. (2009). Kansei technology and robotics machine with a heart. Kansei Engineering International, 8(1), 11–14. Hayashi, E., Yamane, M., & Mori, H. (200). Development of moving coil actuator for an automatic piano. International Journal of Japan Society for Precision Engineering, 28(2), 164–169. iRobot Corp. (2005). iRobot roomba serial command interface (SCI) specification. Burlington, VT: iRobot. Ishii, H., Koga, H., Obokawa, Y., Solis, J., Takanishi, A., & Katsumata, A. (2009). Development and experimental evaluation of oral rehabilitation robot that provides maxillofacial massage to patients with oral disorders. The International Journal of Robotics Research, 28(9), 1228–1239. doi:10.1177/0278364909104295 Kajitani, M. (1989). Development of musician robots. Journal of Robotics and Mechatronics, 1(3), 254–255.


Kotosaka, S., & Schaal, S. (2001). Synchronized robot drumming by neural oscillator. Journal of the Robotics Society of Japan, 19, 116–123.

Kusuda, Y. (2008). Toyota's violin-playing robot. Industrial Robot: An International Journal, 35(6), 504–506. doi:10.1108/01439910810909493

Martin, F. G. (2001). Robotic explorations: A hands-on introduction to engineering. Upper Saddle River, NJ: Prentice Hall.

Mataric, M. J., Koenig, N., & Feil-Seifer, D. (2007). Materials for enabling hands-on robotics and STEM education. Technical Report SS-07-09. Washington, DC: AAAI.

Mayrose, J., Kesavadas, T., Chugh, K., Dhananjay, J., & Ellis, D. E. (2003). Utilization of virtual reality for endotracheal intubation training. Resuscitation, 59(1), 133–138. doi:10.1016/S0300-9572(03)00179-5

Murphy, R. R. (2000). Using robot competitions to promote intellectual development. Artificial Intelligence Magazine, 21(1), 77–90.

Nakadate, R., Matsunaga, Y., Solis, J., Takanishi, A., Minagawa, E., Sugawara, M., & Niki, K. (2011). Development of a robotic-assisted carotid blood flow measurement system. Mechanism and Machine Theory Journal, 46(8), 1066–1083. doi:10.1016/j.mechmachtheory.2011.03.008

Noh, Y., Segawa, M., Shimomura, A., Ishii, H., Solis, J., Hatake, K., & Takanishi, A. (2008). WKA-1R robot-assisted quantitative assessment of airway management. International Journal of Computer Assisted Radiology and Surgery, 3(6), 543–550. doi:10.1007/s11548-008-0238-1

Rowe, R. (2001). Machine musicianship. Cambridge, MA: The MIT Press.

Sobh, T. M., & Wang, B. (2003). Experimental robot musicians. Journal of Intelligent & Robotic Systems, 38, 197–212. doi:10.1023/A:1027319831986

Solis, J., Marcheschi, S., Frisoli, A., Avizzano, C. A., & Bergamasco, M. (2007). Reactive robots system: An active human/robot interaction for transferring skill from robot to unskilled persons. International Advanced Robotics Journal, 21(3), 267–291. doi:10.1163/156855307780131992 Solis, J., Oshima, N., Ishii, H., Matsuoka, N., Hatake, K., & Takanishi, A. (2008). Towards an understanding of the suture/ligature skills during the training process by using the WKS-2RII. International Journal of Computer Assisted Radiology and Surgery, 3(3-4), 231–239. doi:10.1007/ s11548-008-0220-y Solis, J., & Takanishi, A. (2010). Recent trends in humanoid robotics research: Scientific background, applications and implications. Accountability in Research, 17, 278–298. doi:10.1080/0 8989621.2010.523673 Weinberg, G., & Driscoll, S. (2006). Toward robotic musicianship. Computer Music Journal, 30(4), 28–45. doi:10.1162/comj.2006.30.4.28

KEY TERMS AND DEFINITIONS

Anthropomorphic Musical Robots: A robot designed to reproduce the organs involved in playing a musical instrument, capable of displaying both motor dexterity and intelligence.

Bio-Inspired Robotics: A robot that mechanically emulates or simulates living biological organisms.

Education Robots: A robot used by students, composed of low-cost components commonly found on any robotic platform.

Feed-Forward Error Learning: A computational theory of supervised motor learning that can be used as a training method to compute the inverse dynamics model of the controlled system.

Human-Friendly Robotics: A research field focused on the development of new methodologies for the design, control, and safe operation of ro-


bots designed to naturally and intuitively interact, communicate, and work with humans as partners.

Humanoid Robots: A robot designed to reproduce the human body in order to interact naturally with human partners within the human environment.

Inverted Pendulum Robot: A robot composed of an inverted pendulum attached to a mobile base equipped with motors that drive it along a horizontal plane.

This work was previously published in Service Robots and Robotics: Design and Application, edited by Marco Ceccarelli, pp. 130-153, copyright 2012 by Information Science Reference (an imprint of IGI Global).


Chapter 35

Dual-SIM Phones:

A Disruptive Technology?

Dickinson C. Odikayor, Landmark University, Nigeria
Ikponmwosa Oghogho, Landmark University, Nigeria
Samuel T. Wara, Federal University Abeokuta, Nigeria
Abayomi-Alli Adebayo, Igbinedion University Okada, Nigeria

ABSTRACT

Dual-SIM mobile phones utilize technology that permits the use of two SIMs at a time, allowing simultaneous access to mobile network services. Their disruptive nature is discussed with reference to the mobile phone market in Nigeria and other parts of the world. The earlier market trend was an inclination toward "newer" and "better" phones, in favour of established single-SIM mobile phone manufacturers like Nokia and Samsung. The introduction of dual-SIM phones, mainly manufactured by Chinese mobile phone manufacturing firms, propelled user preference toward phones that permit dual, simultaneous access to mobile networks. This technology has compelled its adoption by established manufacturers in order to remain competitive. It is a clear case of a disruptive technology, and this chapter focuses on its need, effects, and disruptive nature.

1.0 INTRODUCTION

Christensen (1997) used the term "disruptive technology" in his book The Innovator's Dilemma. Such technologies surprise the market by generating a considerable improvement over existing technology, and this can be attained in

a number of ways. Such a technology may be no more expensive or complicated than an existing technology and yet attract more potential users (www.wisegeek.com); at other times it may be expensive and complicated, requiring highly skilled personnel and infrastructure to implement. Two types of technology change have shown different effects on industry leaders. Sustaining technology sustains the rate of improvement in a


product's performance in the industry, and dominant industry firms are always at the forefront of developing and adopting such technologies. Disruptive technology changes or disrupts the performance path and continually results in the failure of industry-leading firms. Few technologies are inherently disruptive or sustaining in nature; it is the impact created by the strategy or business model that the technology enables that is disruptive (Christensen & Raynor, 2003). The advent of the Global System for Mobile communication (GSM) resulted in a major communication leap worldwide. Mobile phones became an indispensable electronic gadget defining the modern world (Sally, Sebire, & Riddington, 2010). Mobile phone manufacturers continue to include different features on their products in addition to the basic communication functions, with the purpose of sustaining the market for their products. The mobile phone has become a gadget with a full range of services, from basic telephony to business, leisure, and entertainment features. However, performance issues with mobile network services furnished a further basis for the acquisition of multiple SIMs (Subscriber Identity Modules) by users seeking improved access. The problems that led to this were initially the poor network coverage and poor performance of the mobile network service providers in the country, and later lower call tariffs. Mobile phone users acquired phones according to the number of networks to which they were subscribed, and the trend still exists today. An opportunity was thus created for a product that would satisfy user needs with regard to multiple-SIM capability.

1.1 History of Mobile Phone

The history of the mobile phone began in the 1920s, when it was first used in taxis and cars as a two-way radio for communication. Cell phones evolved over time like any other electronic equipment, and each stage or era was most certainly interesting. From its first official


use by the Swedish police in 1946 to the connection of a hand-held phone to the central telephone network, the modern cell phone evolved tremendously. Ring (1947) created a communication architecture of hexagonal cells for cell phones. The later discovery that cell towers can both transmit and receive signals in three different directions led to further advancement. Early cell phone users were limited to certain blocks of area, each served by a base station covering a small land area. It was not possible to remain in reach beyond such defined boundaries until Joel developed the handoff system, which enabled users to roam freely across cell areas without interruption to their calls. Cell phones used analog services between 1982 and 1990. In 1990, Advanced Mobile Phone Services (AMPS) turned the analog services digital and went online ("History of Cell Phone," 2010).

1.1.1 First Generation (1G) Mobile Phones

The USA Federal Communications Commission (FCC) approved for public use the first cell phone, the Motorola DynaTAC 8000X, developed by Dr. Martin Cooper; it was made available to the public market 15 years after its development. It was considered a lightweight cell phone at about 28 ounces, with dimensions of 13 x 1.75 x 3.5 inches. First generation mobile phones worked with Frequency Division Multiple Access (FDMA) technology. They were large in size, heavy to carry, and used only for voice communication ("History of Cell Phone," 2010).

1.1.2 Second Generation (2G) Mobile Phones

Second generation (2G) mobile phones were introduced in the 1990s and worked with both GSM and CDMA (Code Division Multiple Access) technologies.


2G network signals are digital, while 1G network signals are analog. 2G cell phones were smaller, weighing between 100 and 200 grams; they were hand-held and portable. Later improvements on these cell phones included faster internet access with GPRS (General Packet Radio Service) and subsequently EDGE (Enhanced Data rates for Global Evolution) technology, as well as file sharing with other mobile devices using infrared or Bluetooth technology. There were other improvements such as the Short Message Service (SMS), smaller batteries, and longer battery life. Due to all these improvements, the mobile phone customer base expanded rapidly worldwide.

1.1.3 Third Generation (3G) Mobile Phones

Most present-day mobile phones are third generation phones. The standards used on 3G phones differ from one model to another, depending essentially on the network providers. These phones are capable of streaming live video and radio, making video calls, sending emails, and receiving mobile TV, and they have high internet access speeds due to HSDPA (High-Speed Downlink Packet Access) and WCDMA (Wideband Code Division Multiple Access) technology. They also use Wi-Fi and touch-screen technology, apart from performing all the functions of 2G mobile phones ("History of Cell Phone," 2010).

1.2 Dual SIM Mobile Phones

A dual-SIM mobile phone has the capacity to hold two SIM cards. The earliest form of this technology made use of dual-SIM adapters on single-SIM phones, which of course had only one transceiver. The use of this adapter rendered a slim phone bulky, and sometimes the SIM card needed to be trimmed to fit into the adapter and the phone. The dual-SIM adapter could hold two SIMs at a time and was small enough to fit behind the battery of a regular mobile phone. However, both

SIMs could not be activated at the same time on the mobile phone; switching from one SIM to the other was done by restarting the phone. This combination is called a standby dual-SIM phone. Recent dual-SIM phones have both SIMs activated simultaneously, so there is no need to restart the phone; these are referred to as active dual-SIM phones. Most of these phones have two built-in transceivers, of which one may support both 2G and 3G while the other supports only 2G. Another type of dual-SIM mobile phone exists which supports both GSM and CDMA networks. A new generation of dual-SIM mobile phones makes use of only one transceiver yet provides two active SIMs simultaneously, e.g. the LG GX200. Some dual-SIM phones use call-management software and can divert calls from one SIM to the other SIM's voicemail when a call is in progress, or simply indicate that the line is busy. Both SIMs share the phone's memory, so they share the same contact list and SMS and MMS message library (Li, 2010). A recent introduction is mobile phones capable of holding three SIM cards; an example is the Akai Trio ("Dual SIM," 2011).

1.3 Telephony in Nigeria

Up until 2001, Nigeria experienced problems with the services provided by its then main communications service provider, Nigerian Telecommunications Plc (NITEL), including inefficient service, lack of access, and the limitation of services to points of installation, since only landlines were widely deployed. In 1992, the telecommunications industry in Nigeria was deregulated. First was the commercialization or corporatization of Nigerian Telecommunications Plc (NITEL); second was the establishment of the Nigerian Communications Commission (NCC), the telecommunications industry regulator (Alabi, 1996). The deregulation led to the introduction of Global System for Mobile communication (GSM) network providers operating on the 900/1800


MHz spectrum in 2001: MTN Nigeria, Econet (now Airtel), Globacom, and Mtel. As a result, the use of mobile phones soared and replaced the unreliable services of the Nigerian Telecommunications Limited (NITEL). With an estimated 45.5 million mobile phones in use as at August 2007, and most people having more than one cell phone, Nigeria has witnessed phenomenal growth in this sector ("Telecommunications in Nigeria," 2011).

2.0 THE NEED FOR DUAL SIM MOBILE PHONES

The GSM service in Nigeria came with its own problems, as subscribers were not getting value for their money. Tariffs were high, and the GSM service providers were plagued with numerous problems such as instability in power supply, insecurity of infrastructure, call drops, and difficulty in network accessibility. Due to the peculiar nature of power supply in Nigeria, GSM service providers had difficulty powering their cell sites. Electric power generators installed at base stations to supplement or provide power meant additional deployment and operational cost, which inadvertently led to increases in call tariffs. GSM service providers also incurred additional cost in securing installed facilities: they keep high numbers of security personnel on their payroll, because these guards are needed to protect their installations against theft and vandals. As of October 2007, Airtel (formerly Zain) had 2,500 base stations, MTN 2,900, and Globacom 3,000 in Nigeria (Adegoke, Babalola, et al., 2008). With two security personnel per cell site, one can appreciate the cost. These costs go into the total cost of operation, thereby leading to increases in call tariffs. The presence of security personnel does not, however, guarantee the safety of these facilities, since there are reported cases of stolen generators and fuel siphoned from reservoirs (Njoku, 2007).


Major complaints from network subscribers concerned the inability to access the network to initiate calls. A subscriber had to dial several times before a call could go through, and sometimes, after dialing several times, a subscriber might be connected to the wrong number. Often, established calls were abruptly terminated in the middle of conversations. This can happen for several reasons: loss of signal between the network and the mobile phone, the mobile phone (subscriber) moving outside the network coverage area, or the call being dropped upon handoff between cells on the same provider's network. Other causes include cell sites running at full capacity, with no room for additional traffic, and poor network configuration such that a cell is not aware of incoming traffic from a mobile device; the call is lost when the mobile phone cannot find an alternative cell to hand off to.

2.1 The GSM Service

Network accessibility, dropped calls, and high tariffs appeared to be most worrisome to the average GSM subscriber. A common maxim then was "of what use is a mobile phone when it cannot be used at will?" Disturbingly, GSM network problems often persisted for days and, on rare occasions, for weeks. These problems were common to all the service networks; when one network was down, service was often available on other networks. The logical option for subscribers was subscription to multiple networks. This of course meant the acquisition of multiple GSM phones, with the attendant inconvenience associated with keeping more than one handset, and many subscribers using multiple handsets experienced loss or theft of some of these phones. Most Nigerians therefore desired a means of having two SIMs on one phone to overcome the problem of carrying more than one phone. Major mobile phone manufacturers, however, concentrated on producing sophisticated phones with mind-blowing features like camera,


FM radio, memory cards, and WAP, GPRS, and EDGE capabilities at the time. These companies excelled in product performance, using current technology to produce better and more durable phones with each new release. They sustained product performance with their innovations and high-tech phones.

3.0 THE DISRUPTIVE IMPACT OF DUAL-SIM TECHNOLOGY

Mobile phones of all brands, shapes, and sizes were introduced into the phone market at the onset, just as GSM service providers were expanding network coverage. Common household names included Nokia, Samsung, Sagem, Sony Ericsson, and LG; there were a few other brands, albeit insignificant compared to these. The trend was slick, high-tech mobile phones with improved performance and durability. However, Chinese phone manufacturing companies introduced a disruption in this market trend and became major players in the Nigerian mobile phone market via the introduction of dual-SIM capable phones, popularly called "China Phones." Although these products did not equal the existing brands in performance, look, and durability, they provided an innovative intervention for the target market by providing access to multiple service networks on a single phone. As such, with the additional advantage of being cheap and easily affordable, the Nigerian market embraced the product. Most of the features of existing sophisticated phones are also available on the dual-SIM phones. According to the market research company GFK Retail and Technology, 30 per cent of mobile phones in Nigeria are dual-SIM (Rattue, 2011). This development, directly related to the phenomenal growth of multi-SIM devices globally, is not confined to Nigeria. In Indonesia, Vietnam, Ghana, and India, the market grew from one in ten in 2009 to one in four during 2010. According to the report, in the Middle East and Africa, one in every

10 mobile phones sold uses dual SIM. In Asia, 16 per cent of all mobile phones sold have dual-SIM capabilities, which represents an increase from 13 per cent at the beginning of 2010 (Rattue, 2011). There were, however, warranty issues with the first adapter-type dual-SIM phones: these adapters could be used with normal single-SIM phones, but doing so voided the phone's warranty. Also, "China Phones" that are active dual-SIM phones are bought from dealers without warranty; when asked why, the dealers reply that they equally bought them wholesale without warranty. Another issue is durability: "China Phones" break down unpredictably, and in the event of a fault, local repair shops find it difficult to get replacement parts, as there are no service centers or parts shops for such products. The lack of International Mobile Equipment Identity (IMEI) numbers in the unbranded made-in-China handsets makes them non-traceable and creates security concerns. In spite of these shortcomings, the demand for them is ever increasing, as low-income earners can easily afford them, and most local mobile phone outlets sell mostly these "China Phones." Established global mobile phone manufacturers are facing stiff competition from Chinese brands and "fakes" in the Nigerian mobile phone market, which have enticed consumers with attractive combinations of features at affordable prices. Chief among these features is the dual-SIM capability, which established manufacturers are slowly introducing (Rattue, 2011). Samsung's D880 Duos was not very successful when it was introduced, since calls were only possible with its primary SIM, unlike the Chinese brands which offered dual call capabilities; to initiate a call from the secondary SIM of a Samsung D880, it must first be made the primary SIM. This difficulty, in addition to its high cost, made it unsuccessful. Subsequent Samsung active dual-SIM phone models had better performance, but their costs were still high. Nokia only introduced its cheap dual-SIM phone, the Nokia C Series, in Nigeria


in 2010. There is general acceptance in the country that Nokia phones are more durable than others; however, the Nokia C1-00 is a standby dual-SIM phone, as only one of the SIMs is active at a time. Initially, most Nigerians embraced dual-SIM phones due to the inconvenience associated with carrying two mobile phones. Presently, there is improvement in the country's power sector, and there have also been reductions in GSM network inaccessibility and in the rate of dropped calls, although insecurity remains an issue at cell sites. The inclination toward ownership of multiple mobile lines is currently driven not only by these factors but also by new ones, including lower call tariffs, promotions by various GSM service providers to entice customers, and privacy/personal security concerns.

3.1 The Way Forward

The problems facing the dual-SIM "China Phones," and possible steps to address them, are as follows:

• No warranty
• Poor durability
• No service centres
• Difficulty in getting replacement parts
• Security issues (no IMEI number)

Cases of voided warranties as a result of using a dual-SIM adapter on a normal single-SIM phone have been drastically reduced, if not eliminated, by recent active dual-SIM mobile phones. The lack of warranty for a product often creates doubt in the mind of the customer as to the durability or authenticity of the product. Wholesale dealers who procure these "China Phones" should be made to demand that the manufacturers issue warranties for them; this will encourage more patronage. The issue of durability can be a result of poor design or the use of substandard materials to implement the dual-SIM technology. Since most of these phones are cheap compared with dual-SIM mobile phones manufactured by big and


popular mobile phone manufacturers like Nokia and Samsung, the likelihood is that the use of substandard materials is the cause of the poor durability. Better materials will increase the cost of production and the product's price, but I believe manufacturers can strike a balance and still produce phones that are reasonably priced. Initially, the "China Phones" had short battery life, but the phones now come with an extra battery. Chinese phone manufacturing companies need to establish service centres in the country, or train and certify a handful of owners of local mobile phone repair shops who will in turn pass on the skills acquired to others, so that there will be enough skilled technicians able to repair these phones in the event of faults. Replacement parts for "China Phones" should be made available to the trained technicians through the service centres. There is a need for regulation to stop the use of dual-SIM mobile phones without IMEI numbers. The International Mobile Equipment Identity (IMEI) number is unique to every GSM and WCDMA mobile phone; it is found printed inside the battery compartment of the phone and can be displayed on the screen by entering *#06# on the keypad. In India, when a large percentage of people used phones without IMEI numbers, mobile operators implanted IMEIs onto such phones rather than bar services, but the Indian government then placed a ban on the usage of phones without IMEI, which took effect from December 1, 2009.
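As an aside on how such regulation can be supported in software, the 15-digit IMEI carries a Luhn check digit, so a reported IMEI can be sanity-checked as follows (our own sketch; it verifies well-formedness only and cannot detect cloned or reused numbers):

```python
def imei_is_valid(imei: str) -> bool:
    """Check a 15-digit IMEI with the Luhn algorithm: double every second
    digit (0-based odd positions for a 15-digit string), sum the digits of
    the results, and require the total to be a multiple of 10."""
    if len(imei) != 15 or not imei.isdigit():
        return False
    total = 0
    for i, ch in enumerate(imei):
        d = int(ch)
        if i % 2 == 1:          # every second digit from the left
            d *= 2
            if d > 9:
                d -= 9          # same as summing the two digits of d
        total += d
    return total % 10 == 0

print(imei_is_valid("490154203237518"))  # a commonly cited valid example -> True
```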

4.0 CONCLUSION

The need for communication, in spite of poor network coverage and quality of service from the mobile (GSM) service providers, informed the ownership of multiple single-SIM mobile phones to guarantee access to available network services by mobile phone users in Nigeria, with the associated multi-phone ownership problems. Major mobile


phone manufacturers, who prefer a sustaining technology model, responded to the growing market with improved and more sophisticated products. Disruptive market innovation by way of dual-SIM mobile phone products met the market's anticipation. These dual-SIM "made in China" phones were not as attractive or as durable as the existing sophisticated brands, but they had most of the features of those phones and were cheap and affordable. The dual-SIM phones' performance was also affected by short battery life, plus the absence of warranties, technical support/service outlets, and replacement parts; additional security issues are associated with the phones' lack of IMEI numbers. Nevertheless, the dual-SIM innovation met a market need and is widely used in Nigeria. Whereas the electricity supply, insecurity, and other problems affecting telecoms service quality, which informed multiple phone ownership, are declining, personal security concerns and the preference for lower-tariff offerings continue to motivate multiple network access. As such, dual-SIM phones remain a popular market choice. The problems associated with these dual-SIM products must, however, be addressed by the China-based manufacturers and other market players.

REFERENCES

Adegoke, A. S., Babalola, I. T., & Balogun, W. A. (2008). Performance evaluation of GSM mobile system in Nigeria. Pacific Journal of Science and Technology, 9(2), 436–441.

Alabi, G. A. (1996). Telecommunications in Nigeria. Retrieved March 10, 2011, from www.africa.upenn.edu

Christensen, M. C. (1997). The innovator's dilemma. Harvard Business School Press.

Christensen, M. C., & Raynor, M. E. (2003). The innovator's solution. Harvard Business School Press.

Dual SIM. (2011). Retrieved March 10, 2011, from http://en.wikipedia.org/wiki/Dual_SIM

Dual SIM mobile phones. (2009). Retrieved March 10, 2011, from http://www.dualsimmobilephones.com/2009/09/dual-sim-mobile-phones/

History of cell phone. (2010). Retrieved March 10, 2011, from www.historyofcellphones.net/

Li, R. (2010). Cell phone mysteries: What is dual-SIM? Retrieved March 9, 2011, from www.articles.webraydian.com

Njoku, C. (2007). The real problem with GSM in Nigeria. Retrieved March 9, 2011, from http://www.nigeriavillagesquare.com/index2.php?option=com_content&do_pdf=1&id=7829

Rattue, A. (2009). Buoyant Nigerian market sees 15 million mobile handsets sold in 2009. Retrieved July 12, 2011, from http://www.gfkrt.com/news_events/market_news/single_sites/005203/index.en.html

Rattue, A. (2011). Multi SIM phenomenon continues in emerging mobile markets. Retrieved July 12, 2011, from http://www.gfkrt.com/news_events/market_news/single_sites/007260/index.en.html

Sally, M., Sebire, G., & Riddington, E. (2010). GSM/EDGE: Evolution and performance (p. 504). John Wiley and Sons Ltd. doi:10.1002/9780470669624

Telecommunications in Nigeria. (2011). Retrieved March 10, 2011, from http://en.wikipedia.org/wiki/Telecommunications_in_Nigeria#mw-head

KEY TERMS AND DEFINITIONS

Active Dual-SIM Phone: A dual-SIM phone that has both SIMs activated; calls can be made or received on either simultaneously, and there is no need to restart the phone or switch between SIMs.

Cell Phone: The American name for a mobile phone.


China Phone: A substandard and sometimes unbranded dual-SIM mobile phone manufactured in China.

Dual-SIM Phone: A mobile phone capable of holding two SIM cards, which may or may not have both SIM cards activated to make or receive calls simultaneously.

Standby Dual-SIM Phone: A dual-SIM mobile phone that has one SIM activated at a time and needs to be restarted to activate the other SIM or switch between SIMs.

This work was previously published in Disruptive Technologies, Innovation and Global Redesign: Emerging Implications, edited by Ndubuisi Ekekwe and Nazrul Islam, pp. 462-469, copyright 2012 by Information Science Reference (an imprint of IGI Global).


Chapter 36

Data Envelopment Analysis in Environmental Technologies

Peep Miidla, University of Tartu, Estonia

ABSTRACT

Contemporary scientific and economic developments in environmental technology suggest that it is of great importance to introduce new approaches that enable the comparison of different scenarios in terms of their effectiveness, their distributive effects, their enforceability, their costs, and many other dimensions. Data Envelopment Analysis (DEA) is one such method. DEA is receiving increasing attention as a tool for evaluating and improving the performance of manufacturing and service operations. It has been extensively applied in the performance evaluation and benchmarking of several types of organizations, so-called Decision Making Units (DMUs), with similar goals and objectives. Among these are schools, hospitals, libraries, bank branches, and production plants, but also climate policies, urban public transport systems, renewable energy plants, pollutant emission systems, environmental economics, etc. The purpose of this chapter is to present the basic principles of DEA and give an overview of its application possibilities for problems in environmental technology.

INTRODUCTION

The Earth is the common home of us all, and because of this, the great attention paid to environmental problems is both natural and urgent. The lack of economic value assigned to environmental goods often leads to the over-exploitation and degradation of these resources. It is extremely important to monitor and control the interactions between

production technologies and the environment. To preserve and conserve the natural environment, environmental technology is developed. Independently of the application areas of the environmental sciences, new approaches and methods, particularly of mathematical modeling, are greatly needed and welcome in this area. It is well known that mathematical modeling is the most efficient method for investigating different processes and for their simulation and prediction.


Data Envelopment Analysis (DEA) is a relatively new, data-oriented mathematical method for evaluating the performance of a set of peer entities, traditionally called Decision Making Units (DMUs), which convert multiple inputs into multiple outputs. Since DEA was first introduced in its present form in 1978 (Charnes et al., 1978), we have seen a great variety of applications of DEA in evaluating the performances of many different kinds of entities engaged in many different activities, in many different contexts, and in many different countries. These DEA applications have used DMUs of various forms, such as hospitals, schools, universities, cities, courts, and business firms, and have even assessed the performance of countries, regions, etc. DEA is frequently applied in many areas of the applied economic sciences, including agricultural economics, development economics, financial economics, public economics, and macroeconomic policy; beyond its traditional confines of productivity and efficiency analysis, it has also diffused into the fields of environmental economics and environmental technology. As it requires very few theoretical assumptions, DEA has opened up possibilities for its use in cases which have been resistant to other approaches because of the complex (often unknown) nature of the relations between the multiple inputs and multiple outputs involved in a DMU. There are several examples of areas of environmental science where DEA is used and where remarkable theoretical and practical results have been achieved. In the framework of problems raised by climate change, one of the major threats to the Earth's sustainability, DEA is applied to assess the relative performance of different climate policy scenarios as DMUs, accounting for their long-term economic, social, and environmental impacts as input and output parameters (Bosetti & Buchner, 2008). Quantitative techniques are crucial here in order to make adequate decisions quickly. In the context of climate change, measuring carbon emission

626

performance is also important. (Zhou et al., 2008) In another context, the paper by (Piot-Lepetit, 1997) considers the usefulness of DEA for estimating potential input reductions and assessing potential reductions of environmental impact on agricultural inputs. We can see the same in the paper (Madlener et al., 2006), where the assessment of the performance of biogas plants is realized. In the study (Rodrigues Diaz et al., 2004) DEA was used to select the most representative irrigation districts in Andalusia. One can find the use of DEA to assess corporate enactment of Environmentally Benign Manufacturing as work parts move from place to place in a company (Wu et al., 2006); this work touches green manufacturing problems. The DEA is used even in political decision making (Taniguchi et al., 2000), and to discuss a methodology to assess the performances of tourism management of local governments when economic and environmental aspects are considered as equally relevant. (Bosetti et al., 2004) The structure of the present chapter is the following. First an overview of DEA method is given. Today the DEA itself has developed and has several forms, versions and modifications, each of which has specific application features. Below we formulate only the basic version of it because there is a lot of literature available in libraries and on the internet. The main part of the paper deals with the case studies, the results of which have been achieved by using the DEA. There is a rising trend to apply DEA and naturally some selection of them is included. Finally the reader finds the conclusions and references. One important objective of this chapter is to emphasize that environmental technologies are very open to innovation, and using new methods of mathematical modeling is a part of this.


OVERVIEW OF THE DATA ENVELOPMENT ANALYSIS

In this section we give a short description of Data Envelopment Analysis (DEA). More profound treatments of the topic can be found in many books (e.g. Cooper et al., 2004; Cooper et al., 2007; Thanassoulis, 2003), and there are also edited volumes (Charnes et al., 1994) and papers dealing with the applications of DEA. Comprehensive information about DEA can be found on the web page of Ali Emrouznejad (Emrouznejad, 1995-2001) or in the paper by Emrouznejad et al. (2008). It is also worth mentioning that all the papers referred to in the section "Case Studies" contain a sufficient overview of the DEA versions or modifications used in the particular cases under consideration.

DEA and Benchmarking

Data Envelopment Analysis belongs to the wider class of efficiency measuring methods, the so-called frontier methods, and is a data-oriented approach for evaluating the performance of a set of peer entities called Decision Making Units (DMUs). DEA is a multi-factor productivity analysis model for measuring the relative efficiencies of DMUs, and it is receiving increasing attention as a tool for evaluating and improving the performance of manufacturing and service operations. DEA is a powerful methodology for identifying the best-practice frontier and has been extensively applied in the performance evaluation and benchmarking of schools, hospitals, bank branches, production plants, public-sector agencies, etc. Technically, DEA is a collection of non-parametric, linear programming techniques developed to estimate the relative efficiency of DMUs. Largely the result of multi-disciplinary research in economics, engineering and management, DEA is a theoretically sound framework that offers many advantages over traditional efficiency measures such as performance ratios and regression.

The most important feature of DEA is its ability to handle effectively the multidimensional nature of inputs and outputs in the production and management process. Efficiency as an economic category is widely used in profit-oriented organizations and enterprises; the general meaning of efficiency rests on the simple idea that greater input into a system brings greater output and increases profit. In theory and in practice, several types of efficiency are in use, and each of them offers a different possibility for interpretation and for making new decisions. The same is seen in DEA concepts. Here, for each DMU, the efficiency score in the presence of multiple input and output factors is defined as the weighted sum of outputs divided by the weighted sum of inputs. All scores lie between zero and one; DMUs whose efficiency equals one are certified as fully efficient, or simply efficient. For every inefficient DMU, i.e. one whose efficiency score is less than one, DEA identifies a set of corresponding efficient units, the so-called reference group, which can be utilized as benchmarks for the improvement of activities. The estimation of the efficiency of a single DMU on the basis of other organizations acting in the same economic environment is the main advantage of the DEA approach. The reference group is the set of real DMUs whose outputs and inputs enter the composite virtual DMU with nonzero weights. The fact that the efficiencies of reference DMUs are set equal to 1 (or 100%) does not mean that the quality of work of those base organizations could not be improved; the results of a DEA application only capture the situation at the moment when the data for the input and output parameters were collected.

Geometrically, for every DMU evaluated, the reference set determines a frontier in the input-output parameter space. This is called the best-practice production frontier, and the points which correspond to efficient DMUs are situated on it. If the group of DMUs under consideration has n members, with values for all of the required input and output parameters, then in the parameter space we also have n data points. The best-practice frontier is then constructed as a piecewise linear surface in this space which envelopes the set of data points and separates it from the origin of coordinates; fully efficient DMUs are vertices of this surface. Geometrically, efficiency is a radial measure, quantitatively equal to the ratio of the distances from the origin to the composite virtual DMU and to the DMU being evaluated. The evaluation of each non-efficient DMU gives, in general, a different efficiency coefficient and an individual reference set, because the data points are different. Efficient DMUs are the exception: as mentioned, the efficiency of a DMU which belongs to a reference set is set equal to one, but beyond these there may be other DMUs whose efficiency also equals one and which do not belong to any reference set. Exact explanations of these geometrical issues can be found in all textbooks (see e.g. Cooper et al., 2004).

The data for DEA is a set of parameters evaluated for each DMU. The parameters are divided into input and output parameters which represent different activities of the DMUs under consideration. For example, if we consider public libraries as DMUs, the input parameters could be fixed as the yearly expenditure on the acquisition of new books, the yearly expenditure on salaries, the size of the collections and the floor area of the library rooms, while as output parameters one may consider the number of readers and the number of loans. It is interesting to mention some conclusions of such a study of libraries. In the years 2002 to 2005, eight central libraries of Estonia, i.e. 40% of the whole selection, used their resources efficiently; the relative efficiency scores of the remaining libraries varied from 0.740 to 0.979. The data of the four years showed that the trend of the efficiency scores of the central public libraries of Estonia was falling, i.e. the average score decreased, and in 2005 six of the 20 central libraries were scale efficient, i.e. of optimal size (Miidla & Kikas, 2009). The list of parameters and the division into inputs and outputs can be chosen and fixed differently, depending on the particular goals of the research and, naturally, on the application area. One can say that DMUs convert multiple inputs into multiple outputs. Interestingly, the inputs and outputs do not need to have a common measure; they can be quantities of completely different units, such as meters, dollars, numbers of persons or numbers of lakes in regions. It is important that every input and output has a value for every DMU in the selection, although there are also approaches which allow the use of DEA in the case of missing data (Kao & Liu, 2000). In what follows, the words "selection" or "selection group" refer to the whole set of DMUs under consideration.

The DEA approach uses a linear programming model for measuring the relative efficiencies of the DMUs on the basis of the given data. First, a DMU is fixed and a hypothetical composite operating unit, a virtual DMU, based on all units in the selection group, is constructed. The inputs and outputs of this composite virtual DMU are determined by computing weighted averages of the inputs and outputs of all real DMUs in the selection group, and the efficiency score of the initially fixed DMU is defined through the ratio of the output and input of this constructed composite DMU. This procedure is repeated for each single DMU, and the output of the DEA application itself is an array of these relative efficiencies, which lie between zero and one. Thus the DEA approach is a kind of peer comparison method. Constraints in the linear programming model require all outputs of the composite virtual DMU to be greater than or equal to the outputs of the DMU being evaluated. So, if the selection group has n members, then for evaluating all of the members from the point of view of relative efficiency we need to set up and solve n linear programming problems. It should be mentioned that often more than one linear programming problem must be solved for each DMU: in advanced uses of DEA, additional linear programs are needed when a more detailed analysis is required (Thanassoulis, 2003). If the inputs of the virtual composite unit can be shown to be less than the inputs of the DMU being evaluated, the composite DMU achieves the same or more output for less input; in this case the model shows that the composite virtual DMU is more efficient than the DMU being evaluated. In other words, the DMU under evaluation is less efficient than the virtual DMU. Since the composite DMU is based on all DMUs in the selection group, the DMU being evaluated can be judged relatively inefficient when compared to the other units in the selection. The estimation of the efficiency of a single DMU on the basis of others acting in the same environment, the so-called reference group, is the main advantage of the DEA approach. This makes DEA very attractive, because in the case of environmental entities it is difficult to speak about efficiency in a single common sense, as is possible, for example, for profit organizations. The efficiency of reference DMUs is taken to equal 1 (or 100%). The results of a DEA application fix only the situation in the time frame in which the data for the input and output parameters were collected; the environment may have changed since, and applying the results to the following year can be misleading.

Geometrically, for every DMU evaluated, the reference set determines the frontier in the input-output parameter space. This is called the best-practice production frontier, although in the case of environmental technologies the term must be used with care, and the points which correspond to efficient DMUs are situated on it. As mentioned before, the frontier itself is a piecewise linear surface in the input-output parameter space, an envelope of the production possibility set. Points corresponding to the reference, fully efficient, DMUs are vertices of this frontier. Efficiency is geometrically a radial measure, quantitatively equal to the ratio of the distances from the origin to the composite virtual DMU and to the DMU being evaluated; this ratio is precisely the relative efficiency. The evaluation of every single DMU gives, in principle, a different frontier and an individual reference set. The efficiency of the DMUs in a reference set is set equal to one, and there may also be other DMUs whose efficiency equals 1 but which do not belong to any reference set; they enter the composite virtual DMU with zero weight. The best-practice frontier relies only on the initial data, i.e. on the inputs and outputs of all DMUs in the selection. The algorithm of efficiency estimation does not require the explicit construction of the best-practice frontier; the numerical method gives the answer without geometrical interpretation.

In the DEA methodology one has to make assumptions about the scaling properties of the input parameters, the returns to scale, that is, the influence of input variations on output changes. Assume that the input parameters are all changed in the same proportion. If the outputs change in the same proportion, we speak of constant returns to scale. Otherwise, when the outputs do not change at the same rate, we speak of variable returns to scale; more precisely, of increasing or decreasing returns to scale if the outputs change in a greater or smaller proportion, respectively. The scale issue deserves attention when different DEA models are in use, and the scaling property leads to the notion of the optimal size of a DMU: a DMU is shown to be of optimal size when it is efficient both in the sense of constant returns to scale and in the sense of variable returns to scale. A DMU smaller than optimal usually works under increasing returns to scale; conversely, a DMU oversized compared to the optimal size works under decreasing returns to scale. This may also be important in some aspects of application in environmental technology.

The DEA approach can be input-oriented or output-oriented. In the first case the main question is: by how much can the input parameters of inefficient DMUs be decreased while keeping the present output? In the output-oriented case the main question is: by how much can the outputs be increased while keeping the present input? Choosing between these two possibilities depends, again, on the context of the application; the two cases lead to different modifications of the DEA method.

Mathematical Formulation

In their initial study, Charnes, Cooper, and Rhodes (Charnes et al., 1978) described DEA as a "mathematical programming model applied to observational data that provides a new way of obtaining empirical estimates of relations, such as the production functions and/or efficient production possibility surfaces, that are cornerstones of modern economics." In that article they proposed the following fractional programming model, known as the CCR DEA model. Assume that there are n DMUs to be evaluated. Each DMU consumes varying amounts of k different inputs to produce m different outputs; specifically, DMU j consumes the amount xji of input i and produces the amount yjr of output r. We assume that xji ≥ 0 and yjr ≥ 0, and also, without loss of generality, that each DMU has at least one positive input and one positive output value.

The first approach gives us a fractional programming problem for evaluating the efficiency score of DMU number s. In this form, as introduced by Charnes, Cooper, and Rhodes, the ratio of outputs to inputs is used to measure the relative efficiency of the DMU being evaluated relative to the ratios of all of the n DMUs. We can interpret the CCR construction as the reduction of the multiple-output/multiple-input situation for each DMU to that of a single virtual output and a single virtual input. For a particular DMU, the ratio of this single virtual output to the single virtual input provides a measure of efficiency that is a function of the multipliers v1,…,vm, u1,…,uk. In mathematical programming parlance, this ratio, which is to be maximized, forms the objective function for the particular DMU being evaluated. Symbolically, the problem is the following:

Find max {(v1 y1s + … + vm yms) / (u1 x1s + … + uk xks)},

maximization over v1,…,vm, u1,…,uk, subject to:

(v1 y1i + … + vm ymi) / (u1 x1i + … + uk xki) ≤ 1, i = 1,…,n,
v1,…,vm, u1,…,uk ≥ 0,

where v1,…,vm, u1,…,uk are weights given to the real observed output values y1i,…,ymi and input values x1i,…,xki, correspondingly. In this form the fractional programming problem has an infinite number of solutions: if (v1*,…,vm*, u1*,…,uk*) is a solution, i.e. optimal, then (α v1*,…,α vm*, α u1*,…,α uk*) has the same property for every α > 0. The additional condition

u1 x1s + … + uk xks = 1

makes it possible to convert this fractional problem into the following linear programming problem for the estimation of the efficiency of DMU number s.

Find z* = max (μ1 y1s + … + μm yms),

maximization over μ1,…,μm, γ1,…,γk, subject to:

(μ1 y1i + … + μm ymi) - (γ1 x1i + … + γk xki) ≤ 0, i = 1,…,n,
γ1 x1s + … + γk xks = 1,
μ1,…,μm, γ1,…,γk ≥ 0.

This is the basic linear programming problem for obtaining the efficiency score z* of DMU number s. The problem must be run n times to identify the relative efficiency scores of all the DMUs. Each DMU selects the input and output weights that maximize its own efficiency score. As mentioned, a DMU is considered efficient if it obtains a score of 1; a score of less than 1 implies that it is inefficient. When working with the literature, a reader will find several versions of DEA computational schemes and should understand clearly which version, with which assumptions, is in use in every particular case. This chapter does not offer an exhaustive discussion but simply an example of a DEA formulation. Below we give the dual of the linear program:

Find Θ* = min Θ,

minimization over the weights λ1,…,λn, subject to:

λ1 + … + λn = 1,
λ1 x1,i + … + λn xn,i ≤ Θ xs,i, i = 1,…,k,
λ1 y1,j + … + λn yn,j ≥ ys,j, j = 1,…,m,
λ1 ≥ 0,…, λn ≥ 0.

As in the case of the main problem, for a complete realization of the DEA it is necessary to solve this linear programming problem for every single DMU in the selection. This gives us the array of relative efficiencies, the solutions Θ* for every single DMU, and only then is it possible to start the interpretation process. The dual problem is important because the units involved in the construction of the composite DMU, i.e. those for which the corresponding weight λi > 0, can be used as benchmarks for improving the inefficient DMUs. DEA allows the computation of the improvements required in an inefficient DMU's inputs and outputs in order to make it efficient. If a DMU is inefficient, the corresponding solution Θ* is less than one; the solution also gives us the parameters of the corresponding composite virtual DMU, which is of course efficient, i.e. the weights for its inputs and outputs. This virtual DMU also gives the corresponding point on the production efficiency frontier, and "dragging" the inefficient DMU's point to the frontier means a decrease in the inputs of this DMU. Sometimes it happens that the composite DMU is located on a part of the production efficiency frontier which is parallel to some coordinate plane. In this case it is possible to realize an additional shift of the virtual data point of the composite DMU along this parallel part towards the origin of coordinates, without decreasing outputs or increasing other inputs. This surplus is called a slack, and it gives additional information about the inefficiency of the DMU under consideration. Slacks show the real possibility of decreasing the corresponding input parameter, based on the existing real example of another DMU working under such conditions. The interpretation of inefficiency in the case of input-oriented DEA, represented by the dual linear programming problem, is easy: if we have k input parameters under consideration and the relative efficiency Θ* of a DMU is less than one, Θ* < 1, then to reach full relative efficiency (Θ* = 1) we must decrease the inputs (x1,…,xk) of this DMU by the factor Θ*, i.e. to the values (Θ*x1,…,Θ*xk). Any further discussion of these questions falls outside the goal of the present chapter. It is worth mentioning that in numerous applications in environmental technology, the versions and possibilities of Data Envelopment Analysis have been used in great variety. The trend is increasing; the full DEA bibliography contains more than 4000 journal articles (Emrouznejad, 1995-2001) spanning many scientific fields.
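To make the dual (envelopment) formulation above concrete, the following is a minimal computational sketch in Python using scipy.optimize.linprog. The function name, variable names and the toy data are ours for illustration; the chapter itself prescribes no particular software. The vrs flag switches the convexity constraint λ1 + … + λn = 1 on and off, i.e. between the variable and constant returns to scale models discussed earlier.

import numpy as np
from scipy.optimize import linprog

def dea_input_oriented(X, Y, vrs=True):
    """Input-oriented envelopment DEA sketch.

    X: (n, k) array of inputs, Y: (n, m) array of outputs.
    Returns the efficiency scores Theta* and the lambda weights;
    DMUs entering with lambda > 0 form the reference group."""
    n, k = X.shape
    m = Y.shape[1]
    scores = np.empty(n)
    lam = np.empty((n, n))
    for s in range(n):
        # Decision vector: [Theta, lambda_1, ..., lambda_n].
        c = np.zeros(1 + n)
        c[0] = 1.0                                  # minimize Theta
        # Inputs:  sum_j lambda_j x_ji - Theta x_si <= 0, i = 1..k
        A_in = np.hstack([-X[s].reshape(k, 1), X.T])
        # Outputs: sum_j lambda_j y_jr >= y_sr, written as <= form
        A_out = np.hstack([np.zeros((m, 1)), -Y.T])
        A_ub = np.vstack([A_in, A_out])
        b_ub = np.concatenate([np.zeros(k), -Y[s]])
        A_eq = b_eq = None
        if vrs:                                     # lambda_1 + ... + lambda_n = 1
            A_eq = np.concatenate([[0.0], np.ones(n)]).reshape(1, -1)
            b_eq = [1.0]
        bounds = [(0, None)] * (1 + n)              # Theta >= 0, lambda >= 0
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                      bounds=bounds, method="highs")
        scores[s], lam[s] = res.x[0], res.x[1:]
    return scores, lam

# Toy selection group: four DMUs, two inputs, one output (invented numbers).
X = np.array([[2.0, 3.0], [4.0, 2.0], [6.0, 5.0], [5.0, 4.0]])
Y = np.array([[1.0], [2.0], [2.0], [3.0]])
theta, lam = dea_input_oriented(X, Y, vrs=True)
print(theta)   # the third DMU scores about 0.67, with the second as its reference

Running the same data with vrs=False gives the constant returns to scale scores; a DMU that is efficient under both assumptions is of optimal size in the sense discussed above.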

CASE STUDIES

Next we explore the possibilities of DEA application in different areas of environmental technology. DEA, a quantitative, non-parametric performance measurement technique based on linear programming for assessing the relative efficiency of homogeneous peer entities called Decision Making Units (DMUs), has been successfully implemented in environmental research projects. DEA makes it possible to compare the efficiency of each DMU to that of an ideal operating unit, rather than to average performance. This ideal unit is constructed only on the basis of the data for the whole set of DMUs, and because of this several DMUs always turn out to be fully efficient. The definition of a DMU is generic and flexible, and this encourages the dissemination of DEA into new areas. We explain the use of DEA through selected examples, and certainly not all application areas are covered; the overview given here is in no way exhaustive of developments in the field of DEA. The goal is to show some details of the approach, possible choices of DMUs, the corresponding input and output parameters, and the conclusions made on the basis of DEA. The interpretations of environmental efficiency in the different cases are of particular interest. The order of the examples carries no significance, as it is impossible to rank urgent environmental protection problems.

Climate Policy Scenarios

In (Bosetti & Buchner, 2008), DEA is extended from its traditional application into a quantitative method for assessing the relative performance of different climate policy scenarios, accounting for their long-term economic, social and environmental impacts. Indeed, contemporary developments in the political, scientific and economic debate on climate change suggest that it is of critical importance to develop new approaches, particularly quantitative ones, for comparing policy scenarios in terms of their environmental effectiveness, their distributive effects, their enforceability, their costs and many other dimensions. As input parameters for the DEA application, the economic, environmental and social costs of every possible policy are considered, together with indicators of the instantaneous climate situation; the outputs include indicators for which lower values are preferred as well as indicators of benefits and welfare. The authors discuss eleven simulated climate policy scenarios, compute three indicators for each of them (cumulated discounted GDP over the century, temperature increase by 2100, and the Gini equity indicator by 2100), apply DEA and draw interesting conclusions.

Two alternative DEA approaches to comparing the sustainability of different policy scenarios are used: one based on the efficiency score defined as a relative ratio, and the other based on Competitive Advantage measured in terms of absolute prices. The first case fits the traditional DEA application described above. Relative efficiency estimates are computed for each policy, where efficiency is measured as the ratio of the weighted sum of outputs to the weighted sum of inputs, and are obtained by solving a series of linear programming problems. Both constant returns to scale and variable returns to scale assumptions are used. The interpretation of relative efficiency computed with DEA is instructive: a policy is 100% efficient if and only if (1) none of its outputs can be increased without either increasing one or more of its inputs or decreasing some of its other outputs, and (2) none of its inputs can be decreased without either decreasing some of its outputs or increasing some of its other inputs. In the second approach, DEA is applied in order to obtain weights: for each scenario, the net economic impact, expressed in monetary value, is aggregated through these weights with the social and environmental impacts, the scenarios acting as DMUs whose efficiencies are expressed in their own unitary measures on the basis of data from real activity. Three major findings are pointed out: (1) stringent climate policies can outperform less ambitious proposals if all sustainability dimensions are taken into account; (2) a carefully chosen burden-sharing rule is able to bring together climate stabilization and equity considerations; (3) the most inefficient strategy results from the failure to negotiate a global post-2012 climate agreement. In the conclusions, the simulated scenarios and the interpretational role of DEA are discussed in detail. It is remarkable that the political, scientific and economic debate on climate change can be supported using the DEA method.

Measuring the Environmental Performance Index

The Environmental Performance Index (EPI) is a method of quantifying and numerically benchmarking the environmental performance of a country's policies, and in recent years it has been widely adopted and quoted by policy analysts and decision makers. The construction of an aggregated EPI, which offers condensed information on environmental performance, has evolved into an important focus of systems analysis. Among the existing approaches to developing EPIs, some are data-driven while others are theory-driven. The article by Zhou et al. (2008) is an example of the direct approach, where an aggregated EPI is obtained directly from the observed quantities of the inputs and outputs of the environmental system studied, using Data Envelopment Analysis. This work is an example of the application of environmental DEA technology, in which undesirable outputs are assumed to be weakly disposable (Färe & Grosskopf, 2004), an approach that has been widely used to measure industrial productivity when undesirable outputs exist. In recent years this approach has gained popularity in environmental performance measurement due to its empirical applicability. The common procedure for applying DEA to measure environmental performance is first to incorporate the undesirable outputs into the traditional DEA framework, and then to calculate the undesirable-output-oriented (environmental) efficiencies. In fact, many studies have been devoted to modeling undesirable factors in DEA, e.g. the data translation approach (Seiford & Zhu, 2005) and the utilization of environmental DEA technology. In the article (Zhou et al., 2008), different DEA methods for environmental performance measurement are described (constant returns to scale, variable returns to scale and non-increasing returns to scale), and a study measuring the carbon emission performance of eight world regions is presented. The centre of attention is the growing concern about global climate change due to carbon dioxide (CO2) emissions worldwide. The single input, the desirable output and the undesirable output are: total energy consumption (Mtoe, megatonne of oil equivalent), GDP (gross domestic product, billion 1995 US$) and CO2 emissions (Mt), respectively.



The eight regions under consideration (i.e. the DMUs for the DEA application) are: OECD, Middle East, Former USSR, Non-OECD Europe, China, Asia (excluding China), Latin America and Africa. In this study all the proposed models are radial DEA-based models, in which the efficiency scores are computed from the ratio of the distances from the origin of coordinates to the DEA frontier and to the data point of the corresponding DMU. However, in some circumstances it may be difficult to discriminate between DMUs using the proposed environmental performance indexes alone, because radial DEA efficiency measures have weaker discriminating power than non-radial DEA models. The authors therefore propose to incorporate different environmental DEA technologies with non-radial DEA efficiency scores, and to combine different environmental DEA technologies with slacks-based efficiency measures. The results of the study are interesting. For instance, if the pure EPI is chosen and the reference technology exhibits variable returns to scale, the OECD has a better carbon emission performance than Africa even though it has a larger carbon intensity and carbon factor.
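As a concrete illustration of a radial index of this family, the sketch below computes a pure CO2 emission performance index under constant returns to scale, with the undesirable output constrained by an equality in the spirit of the weak-disposability technology. It is a simplified reading of this model class, not the exact formulation of Zhou et al., and the data are invented.

import numpy as np
from scipy.optimize import linprog

def carbon_performance(energy, gdp, co2, s):
    """Pure emission performance index of region s under CRS:
    min Theta s.t. sum_j z_j*energy_j <= energy_s,
                   sum_j z_j*gdp_j    >= gdp_s,
                   sum_j z_j*co2_j    == Theta*co2_s,   z >= 0."""
    n = len(energy)
    c = np.concatenate([[1.0], np.zeros(n)])              # minimize Theta
    A_ub = np.vstack([
        np.concatenate([[0.0], energy]),                  #  z.energy <= energy_s
        np.concatenate([[0.0], -np.asarray(gdp)]),        # -z.gdp   <= -gdp_s
    ])
    b_ub = [energy[s], -gdp[s]]
    A_eq = np.concatenate([[-co2[s]], co2]).reshape(1, -1)  # z.co2 - Theta*co2_s = 0
    b_eq = [0.0]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (1 + n), method="highs")
    return res.x[0]

# Invented data for three regions: energy (Mtoe), GDP (billion US$), CO2 (Mt).
energy = np.array([1500.0, 900.0, 1200.0])
gdp    = np.array([9000.0, 3000.0, 7000.0])
co2    = np.array([3500.0, 2800.0, 3000.0])
print([round(carbon_performance(energy, gdp, co2, s), 3) for s in range(3)])

A region with an index of 1 cannot reduce its CO2 emissions radially while staying within the benchmark technology spanned by the peer regions; lower values indicate poorer emission performance.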

Greenhouse Gas Technologies

In the paper (Lee et al., 2008), the authors use an Analytic Hierarchy Process (AHP) and Data Envelopment Analysis (DEA) hybrid model to weigh the relative preferences of greenhouse gas technologies in Korea. The Analytic Hierarchy Process is a subjective method used to analyze qualitative criteria in order to generate a weighting of the operating units, and it is known as a decision-making method suitable for unstructured problems. In general, decision making involves tasks such as planning, the generation of a set of alternatives, the setting of priorities, the selection of the best policy once a set of alternatives has been established, the allocation of resources, the determination of requirements, the prediction of outcomes, the design of systems, the measurement of performance, the ensuring of system stability, and the optimization and resolution of conflicts. The authors employed a long-term perspective when establishing the criteria for evaluating energy technology priorities in the greenhouse gas plan. They used the AHP to generate the relative weights of the criteria and alternatives in the greenhouse gas plan; thereafter, the relative weights were applied to the data used to measure efficiency with the DEA method. The study is thus an example in which an AHP/DEA hybrid model is used to determine the energy technology priorities for the greenhouse gas plan. The results obtained using this hybrid model provide the government with an effective decision-making tool and also represent a consensus of experts in the greenhouse gas planning sector.

Nine greenhouse gas technologies were considered as DMUs for the DEA application: CO2 capture, storage and conversion technology; non-CO2 gas technology; advanced combustion technology; next-generation clean coal technology; clean petroleum and conversion technology; DME (di-methyl ether) technology; GTL (gas-to-liquid) technology; gas hydrate technology; and greenhouse gas mitigation policy. The parameters of the DEA consist of a single input factor and multiple output factors. The input factor is the investment cost associated with the development of greenhouse gas technologies, measured in millions of US dollars in 2006. There are five output factors, namely the possibility of developing the technology, the potential quantity of energy savings, the market size, the investment benefit, and the ease of technology diffusion. All outputs are multiplied by the relative weights calculated using the AHP approach (these concern the United Nations Framework Convention on Climate Change (UNFCCC), economic spin-off, technical spin-off, urgency of technology development, and quantity of energy use) and are thus applied in conjunction with the output factors employed as part of the DEA approach. As a result of the application of the AHP/DEA approach, one greenhouse gas technology, namely non-CO2 gas technology with efficiency score 1,


was found to be more efficient than the other eight greenhouse gas technologies. In conclusion, this hybrid model can be used to compute the relative efficiency scores of greenhouse gas technologies efficiently. The paper also shows decision makers and policy makers in the energy sector that multi-criteria decision-making problems can be solved using scientific procedures.
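The AHP side of such a hybrid is easy to sketch: priority weights are commonly taken as the normalized principal eigenvector of a reciprocal pairwise-comparison matrix, and the weighted outputs are then passed to a DEA model such as the one sketched earlier. The code below is a generic illustration of that wiring, with invented matrix entries, and is not the authors' exact procedure.

import numpy as np

def ahp_weights(pairwise):
    """Priority weights as the normalized principal eigenvector of a
    reciprocal pairwise-comparison matrix (standard AHP practice)."""
    vals, vecs = np.linalg.eig(np.asarray(pairwise, dtype=float))
    w = np.abs(np.real(vecs[:, np.argmax(np.real(vals))]))
    return w / w.sum()

# Invented 3x3 comparison of output criteria (e.g. energy savings,
# market size, investment benefit): entry (i, j) states how much more
# important criterion i is judged to be than criterion j.
P = [[1.0,   3.0, 5.0],
     [1/3.0, 1.0, 2.0],
     [1/5.0, 1/2.0, 1.0]]
w = ahp_weights(P)

# raw_outputs: one row per technology; the AHP weights rescale the output
# columns before the DEA efficiency computation.
raw_outputs = np.array([[4.0, 2.0, 3.0],
                        [3.0, 5.0, 1.0]])
weighted_outputs = raw_outputs * w
print(w.round(3), weighted_outputs.round(2))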

Tourism Management

DEA is also used effectively for assessing the performance of tourism management by local governments when economic and environmental aspects are considered equally relevant. In (Bosetti et al., 2004) the focus is on the comparison of Italian municipalities located in coastal areas, and DEA is applied in order to assess the efficiency status of the management units considered. In this analysis a DMU represents a municipality producing the tourism good, for which two different inputs are given. The first is the cost of managing the tourism infrastructure. More precisely, the cost of producing tourism services is proxied by the total number of beds, which is considered an approximation for management expenses and is computed by adding up the number of all beds in hotels, camping sites, registered holiday houses and other receptive structures. The second input is the environmental cost deriving from the increased number of people depending on the same environmental endowment; this parameter is represented by the amount of solid waste in tons per year. As the output parameter, an indicator measuring the rate of use of existing beds is considered, serving as a general approximation for the profit derived from the tourist industry. The authors state that in this study output-oriented models have been preferred to input-oriented ones, as they better suit the issues considered relevant for management purposes and better help in addressing the germane questions. This choice also gives sense to the input and output indicators, because in order to augment the efficiency of an inefficient municipality, the most direct policy means is to introduce constraints on the uncontrolled deployment of environmental resources, rather than to restrict the dimension of the tourism business. Variable returns to scale models have mainly been considered, although an analysis using a constant returns to scale DEA model has also been conducted on the same data set. The main result of the DEA performed on this data set is a ranking of the municipalities considered. For each municipality, the analysis specifies not only the relative efficiency score, but also the potential improvements in the case of scores lower than one.

Consumer Durables

A wide-ranging program, "Measurement of Eco-efficiency using Data Envelopment Analysis", has been introduced in Finland by the Ministry of the Environment. Economic activity consumes natural resources as its input, and produces undesirable emissions and waste as its output. The empirical approach, with its starting point in classical economic theory, takes into account substitution possibilities by estimating so-called efficient production frontiers, and leads to the natural understanding that a production enterprise is called efficient if the consumption of any input cannot be decreased without a corresponding decrease of at least one output or an increase of undesirable outputs. The objective of the project is to investigate the applicability of the DEA method to the measurement of eco-efficiency, and to develop the method further towards a more comprehensive framework supporting management and incentive mechanisms. The paper by Kuosmanen and Kortelainen (2004) is a very good introduction to the use of DEA in environmental evaluation and in the comparative assessment of firm performance in this context.

In (Kortelainen & Kuosmanen, 2005) a method for the eco-efficiency analysis of consumer durables utilizing DEA was developed. The novel contribution of the paper is to measure efficiency in terms of absolute shadow prices that are optimized endogenously within the model to maximize the efficiency of the product and producer. The approach is illustrated by an application to the eco-efficiency evaluation of one sort of consumer durable, Sport Utility Vehicles. To assess the eco-efficiency of a product, one needs to account both for the private net economic benefits and for the external social costs that arise during the use phase of the product's life cycle. The authors note that DEA seeks endogenously determined optimal, so-called shadow prices that present every consumer durable in the most favorable light when compared to other products, and that the method does not require any prior arbitrary assumption on how to set the prices of environmental pressures. The key idea of the approach is to test whether there are any nonnegative efficiency prices at which a consumer durable is efficient. The definition of social efficiency requires a product to fulfill the conditions of inactivity and optimality. The first means that the value added of the consumer durable has to be nonnegative at the optimal prices; the rationale behind this condition is that consumers can remain inactive and not purchase any of the goods if the costs outweigh the benefits. The optimality condition demands that the consumer durable must be the optimal choice at some efficiency prices. The goods are eco-efficient if the shadow price of at least one environmental pressure is positive. Using the efficiency measures and the shadow prices, all goods are classified into the following categories: efficient goods; eco-efficient goods; weakly efficient, economical goods; inefficient goods; inefficient but environmentally friendly goods; and inefficient, environmentally harmful goods. The authors calculated efficiency scores for 88 different models of Sport Utility Vehicles using the absolute shadow price approach. For comparison, efficiency scores were also estimated with the environmental efficiency DEA approach, in which environmental pressures were modeled as negative outputs. The fuel costs and all environmental pressures were measured per kilometer, which was simultaneously the value of the (desirable) output, so the DEA model was invariant to the returns to scale (RTS) specification; all alternative RTS specifications yielded exactly the same results.

Irrigation Districts Management

The application of DEA has been proposed as a methodology for assigning the correct weightings in the calculation of indexes and for overcoming the subjectivity in the interpretation of results in the management of Andalusian irrigation districts (Spain). The case was presented and discussed by Rodríguez Díaz et al. (2004). The study was used to select the most representative irrigation districts in Andalusia, which were then studied in greater depth. Andalusia, a region of southern Spain, is a typical Mediterranean region where irrigation and wealth have been closely linked over time and where 815,000 hectares of irrigated area are divided into 156 irrigation districts. In addition to allowing the production of winter crops, irrigation makes it possible to produce a larger number of crops during the extremely dry summers that are characteristic of this Mediterranean climate, something that would otherwise not be possible under dry-land agriculture. The input-oriented DEA model was applied both to all the irrigation districts together and separately to the interior districts. The authors show that two types of clearly differentiated agricultural districts coexist in Andalusia: interior and littoral districts. In this research the input parameters for the DEA application were the irrigated surface area in hectares, the labor in annual working units and the total volume of water applied to an irrigation district; the total value of agricultural production in Euros was considered as the output parameter. According to the DEA, none of the districts in the interior achieved high efficiencies in the numerical experiments where all districts were considered together. This leads to the conclusion that the littoral districts serve as the reference region for the districts of the interior. For this reason the DEA model was then applied separately to the interior districts only, after which it was possible to draw important conclusions. In particular, the DEA study allowed the five most representative irrigation districts of Andalusia to be selected for a more detailed benchmarking study, and DEA turned out to be a useful tool for detecting local inefficiencies and determining possible improvements for irrigation.

Biogas Plants

Today it is widely recognized that the largest source of atmospheric pollution is fossil fuel combustion, on which current energy production and use patterns rely heavily; therefore, the most crucial environmental problems derive from the energy demand needed to sustain human needs and economic growth. In the paper (Madlener et al., 2006) we find an interesting study assessing the efficiency of 41 agricultural biogas plants in Austria. The two input parameters were the amount of organic dry substrate used and the labor time spent on plant operation. Among the three outputs, two were desirable: the net electricity produced and the external heat, which refer, respectively, to the amounts of electricity and heat delivered by the biogas plant for external consumption (i.e. net of what the biogas plant consumes itself), including farm operations not directly related to the biogas plant. The third output parameter, methane emissions to the atmosphere, was an undesirable output that contributes to the greenhouse gas problem. In the paper one can find a detailed discussion of the interpretation of DEA efficiency for the biogas plants under consideration.

National Park Management

Wilderness protection is another growing necessity for modern societies, particularly in areas where the population density is extremely high and where, during the twentieth century, the erosion of territory, and hence of ecosystems, due to human activities increased dramatically, as for example in Europe. Conservation, however, implies very high opportunity costs, and thus it is crucial to create incentives for efficient management practices, to promote benchmarking and to improve conservation management. A methodology based on DEA for assessing the relative efficiency of the management units of protected areas, and for indicating how it could be improved, is proposed in (Bosetti & Locatelli, 2005). In it, 17 Italian National Parks (National Park management offices) are considered as DMUs. Three different models were used to perform the DEA; they differ in the choice of input and output indicators. The set of input parameters contains the economic costs, computed by aggregating management costs, variable costs and extraordinary expenses. The area extension was considered as a proxy for fixed costs, which were assumed to be proportional to the area covered by the park. The output parameters are: the number of visitors to the park, as an indicator of its attractiveness providing potential indirect benefit to the local economy; the number of the park's employees, as an indicator of the direct and indirect social and economic benefits; the number of economic businesses which are directly linked to, empowered by or created thanks to the presence of the park, e.g. the farmers producing within the protected area; the number of protected species, as a good proxy for the environmental quality and biodiversity of the park; and the number of students who visit the park on environmental education trips, as a proxy for the social and educational benefits deriving from the park. In some models the inverse of the mentioned biodiversity indicator was included as an input.



Several different definitions of ecological efficiency are known. In the case of protected areas, the problem of efficiency becomes even more complicated, because management and financial features have to be considered as well. In this research the following interpretation emerges from the DEA application results: when a DMU scores maximum efficiency according to all three models, one can say that its management has attained the sustainable development goal in a very broad sense. In cases where DMUs are partially inefficient, the authors use DEA to obtain information concerning potential improvements in the management. In the final conclusions it is pointed out that DEA is a good benchmarking technique for monitoring multi-objective efficiency, and that it also provides the possibility of a detailed analysis of potential improvements in the management of national parks.

Measuring Residential Energy Efficiency

In the paper (Grösche, 2009), the energy efficiency improvements of US single-family homes between 1997 and 2001 are estimated using a two-stage procedure. In the first stage, an indicator of energy efficiency is derived by means of Data Envelopment Analysis, and the analogy between the DEA estimator and traditional measures of energy efficiency is demonstrated. The second stage employs a bootstrapped truncated regression technique to decompose the variation in the obtained efficiency estimates into a climatic component and factors attributable to efficiency improvements. The author notes that the improvement of residential energy efficiency is one major goal of energy policy makers. Put simply, DEA can be considered a generalization of energy efficiency defined as the ratio between the amount of a particular produced service s and the amount of energy e consumed for its production (s/e). The household's total energy consumption, measured in kWh, serves as the only input for the DEA. Concerning the outputs (the "produced" energy services), the author approximates the demand for space heating and cooling, and for lighting, by the size of the living space. The number of household members serves as a proxy for the amount of hot water preparation and cooked meals. To account for energy consumption due to the use of electric appliances, the joint number of TV sets, videos, DVDs and computers is incorporated; the overall number of refrigerators and freezers in the household is likewise included in the estimation. The results of the study are mixed: a substantial part of the variation in efficiency scores is due to climatic influences, but households have nevertheless improved their energy efficiency. In particular, households heating mainly with fuel oil or natural gas show significant improvements. A key advantage of the applied procedure is the ease with which it measures residential energy efficiency improvements.
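Read this way, the DEA estimator is simply the envelopment sketch given earlier with a single input column and several service outputs. A hedged illustration follows; the names and numbers are invented and are not Grösche's data, and dea_input_oriented is the function from the sketch above.

import numpy as np

# One input per household (total kWh) and three service outputs:
# living space (m2), household members, number of appliances.
kwh      = np.array([[12000.0], [18000.0], [9000.0]])
services = np.array([[120.0, 3.0, 5.0],
                     [150.0, 4.0, 8.0],
                     [80.0,  2.0, 3.0]])
theta, _ = dea_input_oriented(kwh, services, vrs=True)
print(theta)   # generalizes the single-ratio efficiency s/e to several services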

Pig Fattening Farms

In the paper (Lauwers & van Huylenbroeck, 2003), a method for analyzing the environmental efficiency of Belgian pig fattening farms, based on the farm materials balance and worked out with the DEA technique, is proposed. In its most fundamental sense, the materials balance is the mass flow equation of the raw materials used in the economic system and of the residuals disposed of in the natural environment. The environmental efficiency of a farm is defined similarly to the economic allocative efficiency of farms as DMUs in the DEA approach. Nutrient surplus in pig fattening, a typical balance indicator, is used to illustrate the concept in a two-input, one-output case. The input parameters are the feed (nutrients) and the piglets, or rotation. Because of juridical constraints on farm dimension, the farmer's profit maximization objective turns into the maximization of gross margin per pig place; thus the pig fattening process is simplified to one output, the marketable meat production. Several versions of DEA are applied and discussed: input-oriented and output-oriented. The main conclusion is that ignoring the balance feature of environmental issues, such as nutrient surplus, may be the main reason why traditional integral analyses of economic and environmental efficiency yield contradictory conclusions.

Environmentally Friendly Manufacturing

The essence of environmentally friendly manufacturing is to define sustainable development in terms of manufacturing: conserving nature's services (resource supply, water, energy) while development is centered on economics and trade, with a timeline of 20-100 years. Today the corporate environment is an evolving strategy, and it is not surprising that manufacturing operations are increasingly required to consider environmental impacts and sustainable development. In Wu et al. (2006) the application of DEA is discussed for measuring the efficiency, through material loss and environmental impact, of a closed-loop manufacturing process in the computer industry. Multi-stage DEA is utilized to measure each manufacturing phase, comparing the starting materials and environmental status with the materials available once the product has been recycled, and with the environmental damage (the substances added to the environment) once a product life cycle has been completed. Multi-stage DEA becomes important for multiple processes, for instance the closed-loop manufacturing process; in this case the outputs of one process can be the inputs of the next, as the sketch after this paragraph illustrates. The inputs and outputs varied; the whole set of parameters consisted of expenditure on research and development, years of experience, expenditure on raw materials, number of product parts, use of harmful materials, energy consumption, product recyclability as the percentage of product/components reusable, emission of pollutants, modification flexibility, amount of products returned, product recyclability and material recovered. The values of the various inputs/outputs were obtained from the literature. This work is a good example of measuring a company's environmental conduct in manufacturing by applying DEA. The conclusion is that if an organization has multiple locations or is responsible for multiple products, multi-stage DEA is a promising approach. In addition, on the basis of the model it is possible to analyze a company's environmental impact and motivate the company to improve the corresponding indicator values.
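One simple way to picture the multi-stage idea is to chain two envelopment runs, letting the products of the first stage act as the inputs of the second. The toy sketch below, with invented data and reusing dea_input_oriented from the earlier sketch, is only meant to show the chaining, not the authors' full model.

import numpy as np

# Stage 1: manufacturing, turning raw material and energy into product.
stage1_in = np.array([[10.0, 5.0], [12.0, 4.0], [9.0, 7.0]])
product   = np.array([[100.0], [110.0], [95.0]])
# Stage 2: recovery, turning the returned product into recovered material,
# so the stage-1 output column doubles as the stage-2 input.
recovered = np.array([[60.0], [75.0], [50.0]])

eff1, _ = dea_input_oriented(stage1_in, product, vrs=True)
eff2, _ = dea_input_oriented(product, recovered, vrs=True)
print(eff1, eff2)   # a unit can be efficient in one phase and not in the other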

Grain Farms

The aim of the study (Vasiliev et al., 2008) was to analyze the efficiency of Estonian grain farms after Estonia's transition to a market economy and during the accession period to the European Union, in 2000-2004. Here DEA was used with the following input parameters: variable expenditures, total investment capital, the surface of land used and the yearly working units. The output was fixed as the total production volume in monetary value. The results obtained were: the mean total technical efficiency varied from 0.70 to 0.78, and 62% of the grain farms operated under increasing returns to scale. The most purely technically efficient farms were the smallest and the largest ones, but the productivity of small farms was low compared to larger farms because of their small scale. However, solely on the basis of the DEA model it is not possible to determine the optimal farm scale or the range of Estonian farm sizes operating efficiently.

Final Remarks

Environmental technologies are cleaner and resource-efficient technologies which can decrease material inputs, reduce energy consumption and emissions, recover valuable by-products and minimize waste disposal problems, or achieve some combination of these. Data Envelopment Analysis is widely used in many areas for estimating the peer-based efficiency of Decision Making Units, which are defined in each single case. Although it is not the only method of quantitative analysis and efficiency modelling, DEA is highly recommended as an approach with interpretable output, and it has proved its effectiveness as a productivity analysis tool. The primary advantages of the technique are that it considers multiple input and output factors of DMUs and does not require the parametric assumptions of traditional multivariate methods. In general, the inputs can include any resources utilized by a DMU, and the outputs can range from actual products produced to a range of performance and activity measures. DEA has several versions and modifications, and each of these models and methods can be useful in a variety of manufacturing and service areas.

REFERENCES

Bosetti, V., & Buchner, B. (2008). Using DEA to assess the relative efficiency of different climate policy portfolios. Ecological Economics, 68(5), 1340-1354. doi:10.1016/j.ecolecon.2008.09.007

Bosetti, V., Lanza, A., & Cassinelli, M. (2004). Using Data Envelopment Analysis to evaluate environmentally conscious tourism management. FEEM Working Paper 59. Milan: Fondazione Eni Enrico Mattei.

Bosetti, V., & Locatelli, G. (2005). A Data Envelopment Analysis approach to the assessment of natural parks' economic efficiency and sustainability: The case of Italian national parks. FEEM Working Paper No. 63.05. Available at SSRN: http://ssrn.com/abstract=718621

Charnes, A., Cooper, W. W., Lewin, A. Y., & Seiford, L. M. (Eds.). (1994). Data envelopment analysis: Theory, methodology, and applications. Boston: Kluwer.

Charnes, A., Cooper, W. W., & Rhodes, E. (1978). Measuring the efficiency of decision making units. European Journal of Operational Research, 2(6), 429-444. doi:10.1016/0377-2217(78)90138-8

Cooper, W. W., Seiford, L. M., & Tone, K. (2007). Data Envelopment Analysis: A comprehensive text with models, applications, references and DEA-Solver software (2nd ed.). New York: Springer.

Cooper, W. W., Seiford, L. M., & Zhu, J. (Eds.). (2004). Handbook on Data Envelopment Analysis. Boston: Kluwer.

Emrouznejad, A. (1995-2001). Ali Emrouznejad's DEA HomePage. Warwick Business School, Coventry CV4 7AL, UK. http://www.deazone.com/

Emrouznejad, A., Parker, B., & Tavares, G. (2008). Evaluation of research in efficiency and productivity: A survey and analysis of the first 30 years of scholarly literature in DEA. Socio-Economic Planning Sciences, 42(3), 151-157. doi:10.1016/j.seps.2007.07.002

Färe, R., & Grosskopf, S. (2004). Modeling undesirable factors in efficiency evaluation: Comment. European Journal of Operational Research, 157, 242-245. doi:10.1016/S0377-2217(03)00191-7

Grösche, P. (2009). Measuring residential energy efficiency improvements with DEA. Journal of Productivity Analysis, 31(2), 87-94. doi:10.1007/s11123-008-0121-7

Kao, C., & Liu, S.-T. (2000). Data Envelopment Analysis with missing data: An application to university libraries in Taiwan. The Journal of the Operational Research Society, 51, 897-905.

Kortelainen, M., & Kuosmanen, T. (2005). Eco-efficiency analysis of consumer durables using absolute shadow prices. EconWPA Working Paper at WUSTL, No. 0511022.


Kuosmanen, T., & Kortelainen, M. (2004). Data Envelopment Analysis in environmental valuation: Environmental performance, eco-efficiency and cost-benefit analysis. Discussion Paper No. 21, Department of Business and Economics, University of Joensuu.

Lauwers, L. H., & van Huylenbroeck, G. (2003). Materials balance based modelling of environmental efficiency. In 2003 Annual Meeting, August 16-22, 2003, Durban, South Africa, No. 25916. International Association of Agricultural Economists.

Lee, S. K., Mogi, G., Shin, S. C., & Kim, J. W. (2008). Measuring the relative efficiency of greenhouse gas technologies: An AHP/DEA hybrid model approach. In Proceedings of the International MultiConference of Engineers and Computer Scientists 2008, Vol. II, IMECS 2008, 19-21 March 2008, Hong Kong (pp. 1615-1619).

Madlener, R., Antunes, C. H., & Dias, L. C. (2006). Multi-criteria versus Data Envelopment Analysis for assessing the performance of biogas plants. CEPE Working Paper No. 49, Centre for Energy Policy and Economics (CEPE), Zurich.

Miidla, P., & Kikas, K. (2009). The efficiency of Estonian central public libraries. Performance Measurement and Metrics, 10(1), 49-58. doi:10.1108/14678040910949684

Piot-Lepetit, I., Vermersch, D., & Weaver, R. D. (1997). Agriculture's environmental externalities: DEA evidence for French agriculture. Applied Economics, 29(3), 331-338. doi:10.1080/000368497327100

Rodríguez Díaz, J. A., Camacho Poyato, E., & López Luque, R. (2004). Applying benchmarking and Data Envelopment Analysis (DEA) techniques to irrigation districts in Spain. Irrigation and Drainage, 53, 135-143. doi:10.1002/ird.128

Seiford, L., M., Zhu, J. (2005). A response to comments on modeling undesirable factors in efficiency evaluation. European Journal of Operational Research, 161(2), 579–581. doi:10.1016/j. ejor.2003.09.018 Taniguchi, M., Akinaga, J., & Abe, H. (2000). Evaluation for Neighboring Environment considering Comparative Stduy and DEA Analysis. Infrastructure Planning Review, 17, 423–430. Thanassoulis, E. (2003). Introduction to the Theory and Application of Data Envelopment Analysis: a Foundation Text with Integrated Software. Norwell, MA: Kluwer Academic Publishers. Vasiliev, N., Astover, A., Mõtte, M., Noormets, M., Reintam, E., Roostalu, H., & Matveev, E. (2008). Efficiency of Estonian grain farms in 2000-2004. Agricultural and Food Science, 17(1), 31–40. doi:10.2137/145960608784182272 Wu, T., Fowler, J., Callerman, T., & Moorehead, A. (2006) Multi-stage DEA as a Measurement of Progress in Environmentally Benign Manufacturing. In The 16th International Conference on Flexible Automation and Intelligent Manufacturing, (pp. 221-228), Limerick, Ireland, June, 2006. Zhou, P., & Anga, B., W. & Poh, K.L. (2008). Measuring environmental performance under different environmental DEA technologies. Energy Economics, 30(1), 1–14. doi:10.1016/j. eneco.2006.05.001

ADDITIONAL READING A Data Envelopment Analysis (DEA) Home Page. (1996) http://www.etm.pdx.edu/dea/homedea. html. Beasley, J. E. (1996) OR-Notes. http://people. brunel.ac.uk/~mastjjb/jeb/or/dea.html.

641

Data Envelopment Analysis in Environmental Technologies

Coelli, T. J., & Rao, D. P. O’Donnell. C. J., Battese, G. E. (2005) An Introduction to Efficiency and Productivity Analysis, Springer. Molinero, C. M., & Woracker, D. (2008) Data Envelopment Analysis a Non-Mathematical Introduction. Available at SSRN: http://ssrn.com/ abstract=6317 Working paper. Thore, S. A. (Ed.). (2002) Technology Commercialization: DEA and Related Analytical Methods for Evaluating the Use and Implementation of Technical Innovation. Springer, 2002. Zhu, J. (Developer) DEAFrontier. http://www. deafrontier.net/.

KEY TERMS AND DEFINITIONS

Constant Returns to Scale: The outputs of a DMU change in the same proportion as its inputs; the producers are able to linearly scale the inputs and outputs without increasing or decreasing efficiency.

Decision Making Unit: Decision making units form a homogeneous set of peer entities which convert multiple inputs into multiple outputs and whose efficiency is under consideration in DEA.

Efficiency (Computed in Data Envelopment Analysis): The weighted sum of outputs divided by the weighted sum of inputs.

Environmental Technologies: Cleaner and resource-efficient technologies which can decrease material inputs, reduce energy consumption and emissions, discover valuable by-products, and minimize waste disposal problems, or some combination of these.

Relative Efficiency: A decision making unit is to be rated as fully efficient on the basis of the available evidence if and only if the performances of other decision making units do not show that some of its inputs or outputs can be improved without worsening some of its other inputs or outputs.

Variable Returns to Scale: The outputs of a decision making unit change in a different proportion from its inputs, increasingly or decreasingly.

This work was previously published in Environmental Modeling for Sustainable Regional Development: System Approaches and Advanced Methods, edited by Vladimír Olej, Ilona Obršálová and Jirí Krupka, pp. 242-259, copyright 2011 by Information Science Reference (an imprint of IGI Global).


Chapter 37

Constrained Optimization of JIT Manufacturing Systems with Hybrid Genetic Algorithm

Alexandros Xanthopoulos, Democritus University of Thrace, Greece
Dimitrios E. Koulouriotis, Democritus University of Thrace, Greece

ABSTRACT

This research explores the use of a hybrid genetic algorithm in a constrained optimization problem with a stochastic objective function. The underlying problem is the optimization of a class of JIT manufacturing systems. The approach investigated here is to interface a simulation model of the system with a hybrid optimization technique which combines a genetic algorithm with a local search procedure. As a constraint handling technique we use penalty functions, namely a "death penalty" function and an exponential penalty function. The performance of the proposed optimization scheme is illustrated via a simulation scenario involving a stochastic demand process satisfied by a five-stage production/inventory system with unreliable workstations and stochastic service times. The chapter concludes with a discussion on the sensitivity of the objective function with respect to the arrival rate, the service rates and the decision variable vector.

INTRODUCTION

This chapter addresses the problem of production coordination in serial manufacturing lines which consist of a number of unreliable machines linked with intermediate buffers. Production coordination in systems of this type is essentially

the control of the material flow that takes place within the system in order to resolve the trade-off between minimizing the holding costs and maintaining a high service rate. A time-honored approach to modeling serial manufacturing lines is to treat them as Markov processes (Gershwin, 1994; Veatch and Wein, 1992) and then solve the related Markov Decision Problem (MDP) by using standard iterative algorithms such as


policy iteration (Howard, 1960), value iteration (Bellman, 1957), etc. However, the classic dynamic programming (DP) approach entails two major drawbacks: Bellman's curse of dimensionality, i.e. the computational explosion that takes place as the system state space grows, and the need for a complete mathematical model of the underlying problem. The limitations of the DP approach gave rise to the development of sub-optimal yet efficient production control mechanisms. A class of production control mechanisms that implement the JIT (Just In Time) manufacturing philosophy, known as pull type control policies/mechanisms, has come to be widely recognized as capable of achieving quite satisfactory results in serial manufacturing line management. Pull type control policies coordinate the production activities in a serial line based only on actual occurrences of demand, rather than demand forecasts and production plans as is the case in MRP-based systems. In this chapter, six important pull control policies are examined, namely Kanban and Base Stock (Buzacott and Shanthikumar, 1993), Generalized Kanban (see Buzacott and Shanthikumar (1992), for example), Extended Kanban (Dallery and Liberopoulos, 2000), CONWIP (Spearman et al., 1990) and CONWIP/Kanban Hybrid (Paternina-Arboleda and Das, 2001). Pull production control policies are heuristics characterized by a small number of control parameters that assume integer values. Parameter selection significantly affects the performance of a system operating under a certain pull control policy and is therefore a fundamental issue in the design of a pull-type manufacturing system. In this chapter the performance of JIT manufacturing systems is evaluated by means of discrete-event simulation (Law and Kelton, 1991). In order to optimize the control parameters of the system, the simulation model is interfaced with a hybrid optimization technique which combines a genetic algorithm with a local search procedure. The application of simulation together with optimization meta-heuristics for the modeling and design of manufacturing systems is an approach that has attracted considerable attention over the past years. In Dengiz and Alabas (2000) simulation is used in conjunction with tabu search in order to determine the optimum parameters of a manufacturing system, while Bowden et al. (1996) utilize evolutionary programming techniques for the same task. Alabas et al. (2002) develop the simulation model of a Kanban system and explore the use of a genetic algorithm, simulated annealing and tabu search to determine the number of kanbans. Simulated annealing for optimizing the simulation model of a manufacturing system controlled with kanbans is applied in Shahabudeen et al. (2002), whereas Hurrion (1997) constructs a neural network meta-model of a Kanban system using data provided by simulation. Koulouriotis et al. (2008) apply Reinforcement Learning methods to derive near-optimal production control policies in a serial manufacturing system and compare the proposed approach to existing pull type policies. Some indicative applications of genetic algorithms (GAs) in manufacturing problems can be found in Yang et al. (2007), Yamamoto et al. (2008), Smith and Smith (2002), Shahabudeen and Krishnaiah (1999) and Koulouriotis et al. (2010). Panayiotou and Cassandras (1999) develop a simulation-based algorithm for optimizing the number of kanbans and carry out a sensitivity investigation by using finite perturbation analysis. It has been suggested in the literature that the results of a genetic algorithm can be enhanced by conducting a local search around the best solutions found by the GA (for related work see Yuan, He and Leng, 2008, and Vivo-Truyols, Torres-Lapasió and García-Alvarez-Coque, 2001). On that basis, this hybrid optimization scheme has been adopted in the present study. The main contributions of this work are the following. The performance of six important pull production control policies in a hypothetical scenario is investigated using discrete event simulation. In order to determine the control parameters of each policy the proposed hybrid GA is employed. The objective function to be optimized is a weighted sum of the mean Work In Process (WIP) inventories, subject to the constraint of maintaining the service level (SL) above a specified target. Due to the fact that the objective function is stochastic, we use resampling, i.e., performing multiple evaluations of the same parameter vector and using the mean of these evaluations as the fitness measurement of this individual, a practice discussed by Fitzpatrick and Grefenstette (1988) and Hammel and Bäck (1994). As a constraint handling technique two types of penalty functions are explored: a "death penalty" function and an exponential penalty function. The exponential penalty function is designed according to an empirical method which is based on locating points which lie on the boundaries between the feasible and infeasible regions from the output of the genetic algorithm with the "death penalty" function. Our numerical results support the intuitive perception that the "death penalty" approach will, most of the time, yield worse results than the exponential penalty function, which penalizes solutions according to the level of the constraint violation. The chapter concludes with a discussion on how the objective function behaves for different levels of arrival rate and service rates, as well as on its sensitivity to the decision variable vector. The remaining material of this chapter is structured as follows. Sections "Base Stock Control Policy" to "CONWIP/Kanban Hybrid Control Policy" give a brief description of six important pull production control policies for serial manufacturing lines. In sections "Optimization Problem: Objective Function" and "Hybrid Genetic Algorithm" we discuss the main aspects of the simulation optimization methodology that we followed, namely the formal definition of the parameter optimization problem and issues concerning the genetic algorithm and local search procedure that was used. We report our findings from the simulation experiments that we conducted for one serial line starting from

section “Experimental Results: Simulation Case” and thereafter. Finally, in the last section we state our concluding remarks and point to possible directions for future research.

SYSTEM DESCRIPTION: JIT PRODUCTION CONTROL POLICIES

We examined manufacturing serial lines that produce a single product type and consist of a number of workstations/machines with intermediate buffers. We assume that the first machine is never starved. Customer demands arrive at random time intervals and request the release of one finished part from the finished goods buffer. Demands are satisfied immediately from the finished parts inventory, while in the case where there are no parts available in the last buffer the demand is backordered. We do not consider customer impatience in our model, so no demand is ultimately lost to the system. Manufacturing facilities have the ability to work on only one part at a time during a manufacturing cycle. All machines have random production times, times between failures and repair times. As soon as a stage i part is manufactured, it is placed in the output buffer of that station. A control policy coordinates the release of parts from the output buffer of that station to the next machine. The unreliability of the manufacturing operations, along with the stochastic demand for final products, dictates the use of safety buffers of intermediate and finished parts in order to attain the target service rate. However, the use of safety stocks incurs significant holding costs that could bring the manufacturer to a position of competitive disadvantage, and therefore it is essential to balance the trade-off between minimizing WIP inventories and maintaining a high service level. Figure 1 shows a manufacturing system with three stations in tandem. The following sections briefly explain the way that the Kanban, Base Stock, CONWIP, CONWIP/Kanban Hybrid, Extended Kanban and Generalized Kanban control policies for serial lines operate.


Figure 1. A three station manufacturing line

BASE STOCK CONTROL POLICY

A Base Stock manufacturing line (see Buzacott and Shanthikumar, 1993) is completely described by $N$ parameters, the base stock levels $S_i$ of each production station, $i = 1, 2, \ldots, N$, where $N$ is the number of the system's workstations. The $S_i$ parameters correspond to the number of parts that exist in the system's buffers at the time the system is in its initial state, that is, before any demands have arrived to the system. This control policy operates as follows. When a demand arrives to the system it is immediately transmitted to every manufacturing station, authorizing it to start working on a new station $i$ part. Base Stock has the advantage of reacting rapidly to incoming demand, with the drawback of providing no control at all on the system's inventories.

KANBAN CONTROL POLICY

The Kanban control policy was originally developed by the Toyota Motor company and became the topic of considerable research thereafter (Sugimori et al., 1977; Buzacott and Shanthikumar, 1993; Berkley, 1992; Karaesmen and Dallery, 2000). A Kanban manufacturing line's control parameters are the production authorizations $K_i$ of each station, $i = 1, 2, \ldots, N$. The $K_i$ parameter corresponds to the maximum number of parts that are allowed in station $i$ (manufacturing facility plus output buffer). Workstation $i$ is authorized to start working on a new part as soon as a finished station $i$ part is released from its output buffer. The information of a demand arrival is transmitted from the last manufacturing station to the first one, station by station. If there is a buffer with no parts in it then this transmission is interrupted. The Kanban policy offers very tight synchronization between the various production stations of the system at the expense of a relatively slow response to demand fluctuations.

CONWIP CONTROL POLICY

CONWIP is an abbreviation for CONstant Work In Process (Spearman et al., 1990). According to this policy the total number of parts that exist in the system (the Work In Process) can never exceed a certain level, which is the $C$ control parameter of the policy. Parameter $C$ is equal to the sum of the system's base stocks $S_i$, $i = 1, 2, \ldots, N$. All machines in a CONWIP line are authorized to produce whenever they have the ability to do so (they are operational and have a raw part to work on), except the first one. The first machine of the system is authorized to start working on a new part as soon as a unit from the finished parts buffer is released to a customer.

GENERALIZED KANBAN AND EXTENDED KANBAN CONTROL POLICIES

These two control policies combine the merits of Base Stock and Kanban, as they react rapidly to the arrival of demands and effectively control the WIP at the same time. They are described by two parameters per station, the base stocks $S_i$ and the production authorizations $K_i$ ($K_i \ge S_i$), which are borrowed from the Base Stock and Kanban policies respectively. The finite number of production authorizations guarantees that the system's inventories will not exceed the pre-defined levels, but the station coordination here is not as tight as in Kanban. A station can be granted a production authorization even if a part is not released from its output buffer. For a detailed description of the way Generalized Kanban and Extended Kanban operate the reader is referred to Liberopoulos and Dallery (2000) and Buzacott and Shanthikumar (1992).

CONWIP/KANBAN HYBRID CONTROL POLICY

A CONWIP/Kanban Hybrid system (see Paternina-Arboleda and Das (2001), for example), as implied, operates under a combination of the CONWIP and Kanban control policies. Departure of a finished part from the system authorizes the first station to allow a new raw part to enter the system. All workstations except the last one have a finite number of production authorizations $K_i$, $i = 1, 2, \ldots, N - 1$. The station production authorizations $K_i$, the base stock $S_N$ of the last workstation and the total WIP allowed in the system (parameter $C$) are the CONWIP/Kanban Hybrid's control parameters.

OPTIMIZATION PROBLEM: OBJECTIVE FUNCTION

The mathematical formulation of the parameter optimization problem for serial lines controlled by pull production control policies is given below. Let $\mathbf{x} = [x_1\ x_2\ \cdots\ x_n]$, $x_i \in \mathbb{Z}$, be the control parameter vector of some pull production control policy, i.e. the station $i$ production authorizations (kanbans) in a Kanban system, the initial buffer levels in a Base Stock system, etc. The objective is to find the control parameter values $\mathbf{x}$ that maximize the expected value of the stochastic objective function $f(\mathbf{x}, \omega)$, subject to the constraint of maintaining the service level (SL) equal to or above a specified target $t$:

maximize: $E[f(\mathbf{x}, \omega)]$, $x_i \in \mathbb{Z}$, $i = 1, 2, \ldots, n$ (1)

subject to: $E[SL(\mathbf{x}, \omega)] \ge t$ (2)

Here $\omega$ denotes the stochastic nature of $f$ and $SL$; $SL$ is an unknown function of $\mathbf{x}$, and $t \in \Re^{+}$. The evaluation of the functions $f$ and $SL$ is the result of a simulation experiment. The value of $SL$ is the number of demands satisfied by on-hand inventory divided by the total number of demands which arrived to the system. The value of $f$ is a weighted sum of the mean Work-In-Process inventories, calculated according to Equation (3):

$$f = -\sum_{i=1}^{N} h_i \bar{H}_i \qquad (3)$$

where $h_i$ stands for the cost of storing one item in output buffer $i$ per time unit, and $\bar{H}_i$ is the average inventory level in buffer $i$. We know that the optimal solution in this type of problem is located very close to the boundaries between the feasible and infeasible regions. Additional difficulty in obtaining the optimal solution emanates from the fact that fitness measurements contain random "noise" caused by the simulation model.
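As an illustration, the following minimal Python sketch shows how one noisy observation of $f$ and $SL$ could be obtained; simulate_line is a hypothetical stand-in for a discrete-event replication (the study's actual simulators were written in C++), and the holding costs are those of the base scenario introduced later in the chapter.

import random

HOLDING_COSTS = [1.0, 1.2, 1.44, 1.73, 2.07]   # h_i of the base scenario
TARGET_SL = 90.0                               # service-level target t (%)

def simulate_line(x, seed):
    """Hypothetical stand-in for one discrete-event replication.

    A real implementation would simulate the serial line under the chosen
    pull policy with parameter vector x; dummy values are returned here so
    that the sketch runs on its own.
    """
    rng = random.Random(seed)
    avg_buffers = [max(0.0, xi - rng.random()) for xi in x]        # mean H_i
    service_level = min(100.0, 80.0 + 0.8 * sum(x) + rng.gauss(0.0, 1.0))
    return avg_buffers, service_level

def objective_and_sl(x, seed):
    """One noisy observation of f (Equation (3)) and SL for vector x."""
    avg_buffers, sl = simulate_line(x, seed)
    f = -sum(h * H for h, H in zip(HOLDING_COSTS, avg_buffers))
    return f, sl

if __name__ == "__main__":
    f, sl = objective_and_sl([1, 2, 1, 8, 12], seed=42)
    print(f, sl, "feasible" if sl >= TARGET_SL else "infeasible")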


HYBRID GENETIC ALGORITHM

In order to solve the optimization problem stated in the previous section we propose a hybrid optimization technique which combines a genetic algorithm with a local search procedure. The genetic algorithm evolves a population of candidate solutions (individuals), where each solution is evaluated with the use of a simulation model, and the individual with the highest fitness value found by the GA is used to initialize the local search procedure. The fitness $v(\mathbf{x})$ of each individual is given by Equation (4):

$$v(\mathbf{x}) = \frac{1}{m}\sum_{i=1}^{m}\left[f(\mathbf{x}, \omega) + p(SL(\mathbf{x}, \omega))\right] \qquad (4)$$

where $f(\mathbf{x}, \omega)$ is calculated according to (3), $p(\cdot)$ is a properly defined penalty function of the service level, and $m$ is a positive integer (sample size). The parameters of the GA are the chromosome length $l$, the population size $s$, the sample size $m$, a positive integer $e$ called the elite count, the crossover probability $P_{cross}$, the mutation probability $P_{mut}$ and the maximum number of generations $g$. The individuals which constitute the genetic algorithm population are encoded as binary bit-strings, therefore parameter $l$ controls the size of the search space. Parameter $e$ determines the number of individuals that pass deterministically to the next generation. The local search procedure is characterized by a single parameter $\delta \in \Re^{+}$. Let $\mathbf{x}_{cur}$ be the current solution of the local search algorithm, $v_{cur}$ its fitness value and $v_{best}$ the best fitness value found so far. If we denote the search space by $S$ and a distance function (e.g. Euclidean distance) by $dist(\cdot)$, then the neighborhood of $\mathbf{x}_{cur}$ is written as $N(\mathbf{x}_{cur}) = \{\mathbf{y} \in S : dist(\mathbf{x}_{cur}, \mathbf{y}) \le \delta\}$.

The pseudocode of the hybrid genetic algorithm is presented below.

1. Input GA parameters: chromosome length $l$, population size $s$, sample size $m$, elite count $e$, crossover probability $P_{cross}$, mutation probability $P_{mut}$, max number of generations $g$
2. Initialize population randomly, set generation_counter ← 0
3. WHILE (generation_counter < $g$)
   a. evaluate population, set generation_counter ← generation_counter + 1
   b. scale fitness values proportionally to raw fitness measurements
   c. apply selection operator:
      - select $e$ individuals with highest fitness values
      - select the remaining $s - e$ individuals using stochastic uniform selection
   d. apply crossover operator
   e. apply mutation operator
4. return individual $\mathbf{x}_{best}$ with highest fitness
5. Initialize local search algorithm: $\mathbf{x}_{cur}$ ← $\mathbf{x}_{best}$, define neighborhood parameter $\delta$
6. evaluate $\mathbf{x}_{cur}$; set $v_{best}$ ← $v_{cur}$, flag ← TRUE
7. WHILE (flag = TRUE)
   a. evaluate all points in $N(\mathbf{x}_{cur})$
   b. select $\mathbf{x}_{new} \in N(\mathbf{x}_{cur})$ with best fitness value $v_{new}$
   c. IF ($v_{new} > v_{best}$) THEN set $v_{best}$ ← $v_{new}$, $\mathbf{x}_{cur}$ ← $\mathbf{x}_{new}$ ELSE flag ← FALSE
8. return $\mathbf{x}_{cur}$; terminate

The selection operator (Step 3.c) determines which individuals will be chosen to create the next generation. The first $e$ individuals in terms of fitness value pass to the next generation by default. The remaining $s - e$ individuals are selected with the use of a stochastic uniform selection routine. This technique can be visualized as a line in which each individual corresponds to a section of the line of length proportional to its scaled fitness value. The algorithm moves along the line in equal-sized steps. At each step, the algorithm selects an individual from the section it finds itself on. In the crossover stage (Step 3.d), pairs of individuals are selected at random with probability $P_{cross} \in (0, 1)$ in order to be recombined. In the implementations of the GA for the one-parameter-per-workstation manufacturing systems we used the single-point crossover method. For the remaining systems (Extended and Generalized Kanban) uniform crossover was used. According to this technique, two individuals exchange bits on the basis of a randomly generated binary vector of equal length called a crossover mask. The mutation operator (Step 3.e) modifies the value of a bit in the population with probability $P_{mut} \in (0, 1)$. The genetic algorithm terminates when it completes a predefined number of iterations $g$ and returns the individual with the highest fitness value, which is used to initialize the local search algorithm. The complexity of the hill-climbing procedure is $O(t \times k)$, where $t$ is the number of iterations and $k$ the neighborhood size. The complexity of the genetic algorithm depends on the number of generations, the size of the population and the genetic operators/parameters used.
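A compact Python rendering of the above pseudocode is sketched below under simplifying assumptions of ours, not the chapter's: each of the $n$ integer parameters is encoded with $b$ bits, raw fitness values are shifted to be positive before stochastic uniform selection, only single-point crossover is shown, and demo_eval is a toy stand-in for the simulation-based, penalized evaluation of Equation (4).

import random

def decode(bits, n, b):
    """Map a binary chromosome to n integer parameters of b bits each."""
    return [int("".join(map(str, bits[i*b:(i+1)*b])), 2) for i in range(n)]

def fitness(x, evaluate, m, rng):
    """Equation (4): mean penalized objective over m noisy evaluations."""
    return sum(evaluate(x, rng.random()) for _ in range(m)) / m

def sus(pop, fits, k, rng):
    """Stochastic uniform selection: k equally spaced pointers, one spin."""
    lo = min(fits)
    shifted = [f - lo + 1e-9 for f in fits]    # proportional fitness scaling
    step = sum(shifted) / k
    start, out, cum, i = rng.uniform(0.0, step), [], shifted[0], 0
    for j in range(k):
        while cum < start + j * step:
            i += 1
            cum += shifted[i]
        out.append(pop[i][:])
    return out

def hill_climb(x, evaluate, m, rng):
    """Local search over integer neighbors within Euclidean distance 1."""
    v = fitness(x, evaluate, m, rng)
    improved = True
    while improved:
        improved = False
        for i in range(len(x)):
            for d in (-1, 1):
                y = x[:]
                y[i] = max(0, y[i] + d)
                if y == x:
                    continue
                vy = fitness(y, evaluate, m, rng)
                if vy > v:
                    x, v, improved = y, vy, True
    return x, v

def hybrid_ga(evaluate, n=5, b=5, s=30, m=5, e=1,
              p_cross=0.5, p_mut=0.05, gens=40, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n * b)] for _ in range(s)]
    for _ in range(gens):
        fits = [fitness(decode(c, n, b), evaluate, m, rng) for c in pop]
        ranked = sorted(zip(fits, pop), reverse=True)
        elite = [c[:] for _, c in ranked[:e]]          # elitism
        mates = sus(pop, fits, s - e, rng)
        children, i = [], 0
        while len(children) < s - e:
            a = mates[i % len(mates)][:]
            c2 = mates[(i + 1) % len(mates)][:]
            i += 2
            if rng.random() < p_cross:                 # single-point crossover
                cut = rng.randrange(1, n * b)
                a[cut:], c2[cut:] = c2[cut:], a[cut:]
            children += [a, c2]
        children = children[:s - e]
        for c in children:                             # bit-flip mutation
            for j in range(len(c)):
                if rng.random() < p_mut:
                    c[j] ^= 1
        pop = elite + children
    best = max(pop, key=lambda c: fitness(decode(c, n, b), evaluate, m, rng))
    return hill_climb(decode(best, n, b), evaluate, m, rng)

if __name__ == "__main__":
    def demo_eval(x, seed):
        # Toy stand-in for Equation (4)'s inner term: noisy objective plus
        # a "death" penalty when a fictitious constraint sum(x) >= 10 fails.
        rng = random.Random(seed)
        return -sum(x) + rng.gauss(0.0, 0.1) - (0.0 if sum(x) >= 10 else 1000.0)
    print(hybrid_ga(demo_eval))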

EXPERIMENTAL RESULTS: SIMULATION CASE

We examined a five-machine manufacturing line with equal operation times. The base simulation scenario consists of the following parameters. Machines operate with service rates which are normally distributed random variables with mean 1.1 parts/time unit and standard deviation 0.01. Times between failures are exponentially distributed with mean 1,000 time units. Failures are operation dependent. Repair times are also assumed exponential with a mean time to repair (MTTR) of 10 time units. Times between two successive customer arrivals are exponential random variables with mean 1.11 time units, i.e. the arrival rate is $R_a = 0.9$. Since the service rates are all equal to 1.1 parts/time unit and the machines are failure-prone, the maximum attainable throughput rate under any control policy will be $T_{max} < 1.1$. Consequently, the arrival rate of 0.9 parts/time unit corresponds to a heavy-loading simulation case. The inventory costs for storing one part per time unit in buffer $i$ are $h = [h_1\ h_2\ \cdots\ h_5] = [1.0\ 1.2\ 1.44\ 1.73\ 2.07]$. Note that the holding costs increase at a rate of 20% when moving downstream from buffer to buffer. This increase reflects the value which is added to a part as it is progressively converted into a final product. The system operates under a complete backordering policy, which means that no customer demand is ultimately lost to the system. The justification for selecting the aforementioned probability distributions to model the arrival process, the service rates etc. can be found in queueing theory and in the manufacturing systems literature. Some indicative references, among others, are the influential works of Law and Kelton (2000) and Bhat (2008). The input parameters of the simulation model were selected so as to mimic a situation where the system is under heavy loading conditions. This is a case of primary interest since the differences in performance between the various pull type control policies are most clearly illustrated when the manufacturing line is pushed towards its maximum throughput rate. In order to investigate the sensitivity of the production/inventory system under examination for different levels of arrival rates and service rates, as well as the robustness of the solutions obtained by the proposed optimization methodology, four variants of the base simulation scenario are also considered. In the first two variants, all inputs to the simulation model are kept constant except for the arrival rates, which are set to $R_a = 1.0$ and $R_a = 0.8$, respectively; i.e. we examine the system's behaviour for increased and decreased demand for final products.


Table 1. Simulation scenario parameters

             Ra    Rp    st.d.   MTBF   MTTR   inventory costs
base case    0.9   1.1   0.01    1000   10     h
variant 1    1.0   1.1   0.01    1000   10     h
variant 2    0.8   1.1   0.01    1000   10     h
variant 3    0.9   1.1   0.1     1000   10     h
variant 4    0.9   1.1   0.001   1000   10     h

In the remaining two variants of the base simulation case we vary the standard deviation of the service rates. In one case the system's performance is evaluated for service rates that vary significantly around the mean (st.d. = 0.1), and in the other case we examine what would happen if the "randomness" of the service rates decreased (st.d. = 0.001). The system configuration for the five simulation cases is presented in Table 1. The goal is to maximize the expected value of the objective function $f = -\sum_{i=1}^{5} h_i \bar{H}_i$, i.e. the negative weighted sum of the mean Work-In-Process inventories, subject to the constraint $E[SL(\mathbf{x})] \ge 90.0\%$.
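For reference, the scenario inputs of Table 1 can be collected in a small configuration structure; the sketch below records the data only (the field names are illustrative) and is not the simulation model itself.

BASE_SCENARIO = {
    "stations": 5,
    "arrival_rate": 0.9,            # Ra: exponential interarrivals, mean 1/Ra
    "service_rate_mean": 1.1,       # Rp: normally distributed service rates
    "service_rate_std": 0.01,
    "mtbf": 1000.0,                 # exponential times between failures
    "mttr": 10.0,                   # exponential repair times
    "holding_costs": [1.0, 1.2, 1.44, 1.73, 2.07],   # h_1 .. h_5
    "target_service_level": 90.0,   # constraint E[SL(x)] >= 90.0 (%)
}

# The four variants of Table 1 each change a single input of the base case:
VARIANTS = [
    dict(BASE_SCENARIO, arrival_rate=1.0),        # variant 1
    dict(BASE_SCENARIO, arrival_rate=0.8),        # variant 2
    dict(BASE_SCENARIO, service_rate_std=0.1),    # variant 3
    dict(BASE_SCENARIO, service_rate_std=0.001),  # variant 4
]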

HYBRID GENETIC ALGORITHM PARAMETERS

The dimension of the related optimization problem for the Kanban, Base Stock, CONWIP and Kanban/CONWIP Hybrid systems is dim = 5. For the Extended and Generalized Kanban systems the dimensionality of the problem rises to dim' = 10. The authors conducted a series of pilot experiments in order to come up with the most suitable hybrid genetic algorithm parameters for this particular problem. An important issue was to resolve the trade-off between quality of the final solution and computational cost, as the evaluation of the fitness value of the candidate solutions is computationally expensive. We experimented with population sizes in the range [20, 50], crossover probabilities in the range [0.3, 0.8] and mutation probabilities in the range [0.001, 0.1]. For the one-parameter-per-stage policies the single-point crossover operator was implemented, whereas for the two-parameter-per-stage policies we applied uniform crossover. The reason for making this distinction is that offspring produced with the latter crossover technique are generally more diverse compared to their parents than offspring generated by single-point crossover. This is a desirable property due to the fact that the search space for the two-parameter-per-stage policies is by orders of magnitude larger than the search space for the one-parameter-per-stage policies, and therefore an intense exploration strategy is required. The neighborhood of the local search algorithm was set to include all data points around the current point $\mathbf{x}$ with Euclidean distance equal to or less than 1: $N(\mathbf{x}) = \{\mathbf{y} \in S : \sqrt{\sum_{i=1}^{n}(x_i - y_i)^2} \le 1\}$. Given that the decision variables are integers this is obviously the minimum neighborhood size one could select, but since the major part of the search is carried out by the genetic algorithm and the local search procedure is used merely to fine-tune the already obtained solutions, it is acceptable to use a small neighborhood. Admittedly, the parameters of the optimization algorithm were initialized heuristically and one cannot discard the possibility that different values for the parameters could yield better results, but a full factorial experiment for the design of the optimization scheme would fall beyond the scope of this chapter. The genetic algorithm's parameters that were ultimately selected are: population size = 30, and crossover and mutation probabilities $P_{cross} = 0.5$ and $P_{mut} = 0.05$ respectively. The individual which scored the highest fitness value passes to the next generation with probability 1, i.e. the elite count parameter was set to 1. Each individual was evaluated 50 times (m = 50), where each replicate was executed for 80,000 time units. The GA produced 100 generations for the problems with dimensionality dim = 5 (optimizing the Kanban, Base Stock, CONWIP and CONWIP/Kanban Hybrid systems). For the problems with dimensionality dim' = 10 (optimization of the Extended Kanban and Generalized Kanban systems) the GA produced 240 generations of individuals.

COMPUTATIONAL COST

The simulators for the six pull type manufacturing systems as well as the proposed optimization algorithm were coded in C++, and the experiments were conducted on a PC with an AMD Athlon processor at 1.8 GHz and 512 MB RAM. The factor that primarily affects the execution time of the hybrid GA is the control parameter evaluation, i.e. the computational cost of the simulation model. Every solution evaluation, that is, 50 independently seeded executions of the simulation model, lasts approximately 5 seconds, and therefore the evaluation of a generation of candidate solutions (30 individuals) takes about 2.5 minutes to complete. The execution of the hybrid GA for a one-parameter-per-stage policy (100 generations) lasts approximately 4.7 hours, of which 4.2 hours are consumed by the simulation model. On the other hand, the execution of the hybrid GA for a two-parameter-per-stage policy (240 generations) lasts approximately 11 hours, of which 10 hours are devoted to the solution evaluation phase.

DEATH PENALTY RESULTS

For the implementation of the hybrid genetic algorithm with the "death" penalty we used the following penalty function:

$$p(\mathbf{x}) = \begin{cases} 0.0, & \text{if } E[SL(\mathbf{x})] \ge 90.0\% \\ -1000.0, & \text{if } E[SL(\mathbf{x})] < 90.0\% \end{cases} \qquad (5)$$

We reiterate that the expected value $E[SL(\mathbf{x})]$ is the arithmetic mean of $m$ measurements of $SL$. This is a very straightforward implementation: every individual that does not satisfy the service level constraint is penalized heavily and will probably be discarded in the next iteration of the algorithm. The results from the hybrid genetic algorithm runs for each control policy are displayed in Table 2. The rows containing the results of the standard genetic algorithm (without the local search component) are labeled with the initials GA followed by the control policy's name, while the results of the hybrid algorithm are labeled with the initials GAL and the control policy's name. We made this distinction in order to clarify whether the local search offers some significant improvement or not. The last column of Table 2 contains the fitness values, calculated according to Equation (4), of the corresponding parameter sets. The CONWIP system scored the highest fitness value $v_b = -25.29$, followed by the Generalized Kanban, the Extended Kanban, the Hybrid CONWIP/Kanban, the Kanban and the Base Stock systems in decreasing fitness value order. With the exception of the CONWIP system, the local search algorithm enhanced the best fitness values found by the standard genetic algorithm by between 1.35% and 9.34%. It is important to stress here that this improvement refers to the fitness values and not the actual objective function values of the optimization problem.


Table 2. Best parameter sets and fitness values for the "death" penalty function (n.i. stands for "no improvement")

Policies                            x1(/x1')  x2(/x2')  x3(/x3')  x4(/x4')  x5(/x5')  C     v(x)
GA_Kanban (Ki)                      2         2         1         8         13        -     -32.11
GAL_Kanban (Ki)                     1         2         1         8         12        -     -29.12
GA_BaseStock (Si)                   2         7         2         0         14        -     -31.70
GAL_BaseStock (Si)                  1         6         2         0         14        -     -29.49
GA_CONWIP (Si, C)                   0         5         3         5         6         19    -25.29
GAL_CONWIP (Si, C)                  n.i.      n.i.      n.i.      n.i.      n.i.      n.i.  n.i.
GA_Hybrid (Ki, i=1,2,3,4, S5, C)    1         1         3         7         10        22    -28.75
GAL_Hybrid (Ki, i=1,2,3,4, S5, C)   1         1         2         7         10        21    -26.71
GA_E. Kanban (Ki/Si)                10/0      11/2      9/1       2/2       23/15     -     -26.54
GAL_E. Kanban (Ki/Si)               6/0       11/2      6/1       2/2       23/15     -     -26.18
GA_G. Kanban (Ki/Si)                8/4       2/0       15/3      16/2      14/14     -     -1028.14
GAL_G. Kanban (Ki/Si)               6/2       2/0       15/3      15/2      14/14     -     -26.15

In the case of the Generalized Kanban system the local search algorithm appears to have "repaired" the infeasible solution found by the standard genetic algorithm. The average percentage of infeasible solutions in the final generations of the genetic algorithm runs was equal to 7.2%. Table 3 contains the objective function values $E[f(\mathbf{x})]^*$ and service levels $E[SL(\mathbf{x})]^*\%$, with 95% confidence bounds, of the best parameters found by both the standard genetic algorithm and the hybrid genetic algorithm with the "death" penalty function. These data were produced by running 50 replicates of each of the six simulation models for $t_{sim} = 1{,}500{,}000$ time units and then averaging the corresponding variables.

Table 3. Objective function values $E[f(\mathbf{x})]^*$ and % service levels $E[SL(\mathbf{x})]^*\%$ for the best parameter sets found by the standard GA and the hybrid GA with the "death penalty" (95% confidence). (K stands for Kanban, BS for Base Stock, etc.; n.i. stands for "no improvement")

           GA results                        Hybrid GA (with local search)
Policies   E[SL(x)]*%      E[f(x)]*          E[SL(x)]*%      E[f(x)]*
K          91.17 ± 0.09    -32.07 ± 0.04     90.08 ± 0.09    -29.13 ± 0.03
BS         90.37 ± 0.09    -31.73 ± 0.02     90.03 ± 0.11    -29.53 ± 0.02
C          89.87 ± 0.08    -25.26 ± 0.01     n.i.            n.i.
C/K H      91.34 ± 0.08    -28.83 ± 0.03     90.43 ± 0.11    -26.79 ± 0.04
EK         90.29 ± 0.07    -26.57 ± 0.0      90.21 ± 0.09    -26.23 ± 0.03
GK         90.23 ± 0.09    -28.18 ± 0.02     89.91 ± 0.01    -26.12 ± 0.02


This is an 18.75-times longer simulation than that used to evaluate the fitness of the individuals in the genetic algorithm. By using exhaustively long simulation times we can compute far more accurate estimators (indicated by the superscript *), which can be considered to approximate the true expected values of these performance measures. This way, relatively safe conclusions can be drawn regarding both the quality of the solutions found by the hybrid genetic algorithms and the performance of each of the competing pull type control policies. Of course, by increasing the simulation time and/or the resampling, the optimization algorithm is less likely to be misled by "lucky" candidate solutions which score well once by chance, but the consequent computational cost is prohibitive. Apart from that, we are interested in establishing whether the algorithm is capable of locating good and hopefully optimal solutions in the presence of a relatively low signal-to-"noise" ratio. By observing the data in Table 3 we see that the local search algorithm produced actual improvements in the objective function values while preserving the feasibility of the solutions in all cases except the CONWIP and Generalized Kanban systems. The results regarding the Generalized Kanban system are somewhat contradictory. In Table 2 the local search algorithm appears to have repaired the infeasible solution, while in Table 3 the original solution is now found to be actually feasible and the local search solution is the one which violates the constraint. Actually, these results merely demonstrate an inherent weakness of search algorithms that generate a single point per iteration when compared to genetic algorithms in "noisy" environments. A hill-climbing method, like the one used here, compares candidate solutions in each iteration only with the best solution found so far, and therefore it is easily misled by a "lucky" solution. In genetic algorithms, on the contrary, for a solution to be maintained it must outweigh an entire collection of solutions, not just a previously best one. As an overall assessment, we could argue that the results presented in this section support the conclusion that even with this simple static penalty function a genetic algorithm can produce quite good solutions. Two typical plots of the best solution found by the genetic algorithm with the "death" penalty function versus the number of generations can be found in Figure 2. These two plots exhibit a somewhat similar pattern. For illustration purposes we use a time window of 140 generations and divide the plot areas into two regions with a perpendicular dotted line.

Figure 2. Typical plots of best fitness value found by GA with “death” penalty function versus number of iterations (140 iterations window)


Notice that on the left side of the plots, if we disregard random fluctuations caused by the simulation model, the two curves increase almost monotonically. In this area, the best solution found by the algorithm does not lie near the boundaries between the feasible and infeasible regions. At the intersection point of the curve with the dotted perpendicular line, the curve suddenly "dives". This indicates that the currently best individual is marginally feasible (or infeasible) and that it failed to satisfy the constraint in this evaluation. As a consequence, it was penalized heavily and substituted by another individual which happened to have a lower fitness value. From this point on, the curve displays similar abrupt fluctuations, indicating that the population evolves towards the boundaries between the feasible and infeasible regions and the optimal solution.

DESIGNING EXPONENTIAL PENALTY FUNCTION

Intuitively, the "death" penalty approach, as attractive as it can be due to its simplicity, does not seem to be the best approach to handling constraints. Even the slightest violation of the imposed constraints results in heavily penalizing a good solution. This way an individual which scores excellently for a series of consecutive generations may be discarded by the algorithm. This is an undesired property in the kind of optimization problem that we are dealing with, where fitness measurements are distorted by random fluctuations caused by the stochasticity of the simulation model. Another weakness of this approach is that it damages the diversity of the population, as the majority of the individuals are crowded in the feasible region. Given that the optimal solution lies on the feasibility boundaries, the search would probably be more efficient if the population evolved towards the boundaries from both the feasible and infeasible regions. For example, it is unclear why a slightly infeasible individual which is located very close to the optimal solution should be assigned a worse fitness value than a feasible individual that scores poorly. For all of the above-mentioned reasons, the idea of penalizing infeasible solutions according to the level of the constraint violation seems more appealing (see Venkatraman and Yen (2005) for guidelines on designing penalty functions). The problem that needs to be addressed now is how to design such a "soft-limit" penalty function. A reasonable choice is to use an exponential penalty function $p(\mathbf{x}) = c^{u}$, where $c = const \in \Re$ and $u = t - E[SL(\mathbf{x})]$ is the difference between the target service level and the measured expected service level. The intuitive minimal penalty rule (Le Riche et al., 1995) suggests that the penalty for infeasible individuals should be just above the threshold below which infeasible solutions score better than their feasible, possibly optimal, neighbors. In practice, however, it is quite difficult to achieve this. The procedure we followed in order to implement this intuition, at least to some extent, is the following. Using the output of the executions of the genetic algorithm with the "death" penalty function we created plots like the ones in Figure 2. By examining these plots it was easy to locate solutions that were very close to the feasibility boundaries (these points are indicated by the characteristic "dive" of the curve). The next step was to examine the neighborhood of such a point in order to determine how a small change in parameters affected the service level $SL$ as well as the objective function $f(\mathbf{x})$. The value of $SL$ is affected primarily by the control parameters of the last three machines, so we could limit ourselves to a relatively small neighborhood. Having collected this data, we were able to select the parameter $c$ of the penalty function $p(\mathbf{x}) = c^{u}$ in the spirit of the "minimal penalty rule". This is an empirical technique that may not be easy or even possible to apply to other problems; nevertheless it provides the means to design a penalty function that will work well and, most of the time, outperform the "death" penalty approach, as supported by our experimental results presented in the following section.


EXPONENTIAL PENALTY FUNCTION RESULTS

After following the procedure outlined in the previous section we were able to construct the following penalty function (6):

$$p(\mathbf{x}) = \begin{cases} 0.0, & u \le 0.0 \\ 9.0^{u}, & 0.0 < u < 3.0 \\ 9.0^{3}, & 3.0 \le u \end{cases} \qquad (6)$$

where $u = t - E[SL(\mathbf{x})]$ is the difference between the target service level $t = 90.0\%$ and the measured service level $E[SL(\mathbf{x})]$. Note that for service levels equal to or lower than 87.0% we hold the penalty at a constant value. The reason we do this is that we want the raw fitness values $v(\mathbf{x})$ to remain within a range for which the selection operator of the genetic algorithm works properly. We reiterate that in our implementation the fitness values of the individuals are scaled proportionally to their raw fitness measurements prior to selection.
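Equation (6) is equally short in code. In the sketch below (function name ours) the penalty is returned with a negative sign so that, as in Equation (5), a violation lowers the fitness of Equation (4); the text prints magnitudes only, so this sign convention is our assumption.

def exponential_penalty(sl_percent, target=90.0, c=9.0, cap=3.0):
    """Equation (6): the penalty magnitude grows as c**u with the
    violation u = t - SL and is held constant once u >= cap (i.e. for
    service levels at or below 87%), keeping raw fitness values in a
    range the selection operator can handle. The negative sign is our
    assumption (cf. Equation (5)) so that violations reduce fitness."""
    u = target - sl_percent
    if u <= 0.0:
        return 0.0
    return -(c ** min(u, cap))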

The results from the hybrid genetic algorithm runs for each control policy are displayed in Table 4. As before, the rows containing the results of the standard genetic algorithm (without the local search component) are labeled with the initials GA followed by the control policy's name, while the results of the hybrid algorithm are labeled with the initials GAL and the control policy's name. The last column of Table 4 contains the fitness values of the corresponding parameter sets. The CONWIP system scored the highest fitness value $v_b = -25.31$, followed by the Extended Kanban, the Hybrid CONWIP/Kanban, the Generalized Kanban, the Base Stock and the Kanban systems in decreasing fitness value order. The hybrid optimization algorithm outperformed the standard genetic algorithm in the cases of the Base Stock, the Extended Kanban and the Generalized Kanban systems. For the three remaining systems the local search failed to offer an improvement in $v(\mathbf{x})$. Table 5 contains the objective function values $E[f(\mathbf{x})]^*$ and service levels $E[SL(\mathbf{x})]^*\%$, with 95% confidence bounds, of the best parameters found by both the standard genetic algorithm and the hybrid genetic algorithm with the exponential penalty function.

Table 4. Best parameter sets and fitness values for the exponential penalty function (n.i. stands for "no improvement")

Policies                            x1(/x1')  x2(/x2')  x3(/x3')  x4(/x4')  x5(/x5')  C     v(x)
GA_Kanban (Ki)                      1         1         1         7         13        -     -28.05
GAL_Kanban (Ki)                     n.i.      n.i.      n.i.      n.i.      n.i.      -     n.i.
GA_BaseStock (Si)                   0         3         0         0         17        -     -27.19
GAL_BaseStock (Si)                  0         2         0         0         17        -     -25.95
GA_CONWIP (Si, C)                   0         6         1         6         6         19    -25.31
GAL_CONWIP (Si, C)                  n.i.      n.i.      n.i.      n.i.      n.i.      n.i.  n.i.
GA_Hybrid (Ki, i=1,2,3,4, S5, C)    1         4         5         9         1         20    -25.92
GAL_Hybrid (Ki, i=1,2,3,4, S5, C)   n.i.      n.i.      n.i.      n.i.      n.i.      n.i.  n.i.
GA_E. Kanban (Ki/Si)                4/1       10/0      2/2       4/2       22/15     -     -25.82
GAL_E. Kanban (Ki/Si)               3/1       10/0      2/2       4/2       22/15     -     -25.72
GA_G. Kanban (Ki/Si)                12/5      7/0       14/4      13/1      16/14     -     -29.34
GAL_G. Kanban (Ki/Si)               9/2       3/0       13/4      13/1      15/14     -     -25.95


Table 5. Objective function values $E[f(\mathbf{x})]^*$ and % service levels $E[SL(\mathbf{x})]^*\%$ for the best parameter sets found by the standard GA and the hybrid GA with the exponential penalty (95% confidence). (K stands for Kanban, BS for Base Stock, etc.; n.i. stands for "no improvement")

           GA results                        Hybrid GA (with local search)
Policies   E[SL(x)]*%      E[f(x)]*          E[SL(x)]*%      E[f(x)]*
K          90.10 ± 0.08    -28.01 ± 0.03     n.i.            n.i.
BS         90.34 ± 0.12    -27.20 ± 0.03     89.95 ± 0.13    -25.97 ± 0.03
C          89.94 ± 0.10    -25.27 ± 0.02     n.i.            n.i.
C/K H      90.21 ± 0.09    -25.87 ± 0.02     n.i.            n.i.
EK         90.07 ± 0.11    -25.84 ± 0.02     89.98 ± 0.10    -25.71 ± 0.02
GK         90.31 ± 0.09    -29.34 ± 0.02     89.73 ± 0.09    -25.89 ± 0.02

These data were produced by running 50 replicates of each of the six simulation models for $t_{sim} = 1{,}500{,}000$ time units and then averaging the corresponding variables. All three solutions found by the local search algorithm when initialized with the solutions of the genetic algorithm were marginally infeasible. The local search algorithm falsely interpreted the effect of random noise as an actual improvement and thus substituted the feasible solutions with infeasible ones. Of course, we cannot rule out that this was caused in part by the penalty function itself. However, we must mention that the amount of the constraint violation was rather trivial. The average percentage of infeasible solutions in the final generations of the genetic algorithm runs with the exponential penalty function was equal to 8.5%.

DISCUSSION ON THE PERFORMANCE OF THE TWO PENALTY FUNCTIONS

By comparing the data in Tables 3 and 5 we notice that the standard genetic algorithm with the exponential penalty function outperforms both the standard genetic algorithm and the hybrid algorithm with the "death penalty" function for all systems except the CONWIP and the Generalized Kanban. In terms of objective function value, the use of the exponential penalty function rather than the "death penalty" improved the solution by 3.84% for the Kanban system, by 7.89% for the Base Stock system and by 3.43% for the CONWIP/Kanban Hybrid system. For the Extended Kanban system we monitored a 1.5% lower value of $E[f(\mathbf{x})]^*$,

while for the CONWIP system the results were practically the same. Only for the Generalized Kanban system did the "death" penalty approach succeed in producing a 4% better solution than the exponential penalty approach. The superiority of the exponential penalty function over the "death" penalty function can be explained qualitatively as follows. Figure 3 shows typical plots of the genetic algorithm's convergence with the "death" penalty and the exponential penalty function. Notice that at some point near the 60th generation both curves have approximately the same height. The best solutions found by the two implementations of the algorithm at these points probably belong to the same level set and lie somewhere close to the feasibility boundaries. In some subsequent iteration of the algorithm with the "death" penalty, this solution apparently violates the constraint and is therefore discarded. The height of the curve shows that the individual which replaced it has a significantly lower fitness value. This is not the case in the algorithm with the exponential penalty, where the properly designed penalty function prevents the good solution from being discarded, at least not in favor of a much worse candidate solution. Concluding the discussion on the performance of the hybrid GA, we summarize our major findings: i) the incorporation of a local search element can enhance the genetic algorithm's performance, with the caveat that the local search algorithm is more susceptible than the genetic algorithm to falsely interpreting random noise as actual objective function improvements; ii) the "death penalty" approach will most of the time yield worse results than a function which, like the exponential penalty function used here, penalizes solutions according to the level of the constraint violation.

COMPARISON OF PULL TYPE PRODUCTION CONTROL POLICIES - SENSITIVITY ANALYSIS

Table 6 presents the objective function values and the corresponding service levels for the six JIT

control policies with the best parameters found by the proposed optimization strategy. Note that the Base Stock, CONWIP and Extended Kanban solutions attain a service level below 90%, but since the constraint is within the 95% confidence half-width we consider them to be feasible. The CONWIP policy ranks first, followed at close distance by the Extended Kanban, Hybrid and Base Stock policies. The Kanban and Generalized Kanban policies occupy the last two positions of the objective function value ranking. Since in this simulation scenario the demand process pushes the manufacturing system towards its maximum throughput rate, the poor performance of the Kanban mechanism is anticipated: this policy offers tight coordination between the manufacturing stages but does not respond rapidly to incoming orders. On the other hand, the performance of the Generalized Kanban system is somewhat unexpected, since it is supposed to be an enhancement of the original Kanban policy. However, this is not the case for the control policy that is most closely related to the Generalized Kanban, the Extended Kanban mechanism, which ranks second. The main characteristic of the Base Stock policy, that is, fast reaction to demand, is supported by the experimental output.

Figure 3. Typical plots of best fitness value found by GA with “death” and exponential penalty functions versus number of iterations


Table 6. Objective function values – service levels of pull control policies with best parameters for the base simulation case

              Kanban        Base Stock    CONWIP        CONWIP/Kanban Hybrid   Extended Kanban   Generalized Kanban
E[f(x)]*      -28.01±0.03   -25.97±0.03   -25.27±0.02   -25.87±0.02            -25.71±0.02       -28.18±0.02
E[SL(x)]*%    90.10±0.08    89.95±0.13    89.94±0.10    90.21±0.09             89.98±0.10        90.23±0.09

Finally, the fact that in a CONWIP or CONWIP/Kanban Hybrid system the WIP tends to accumulate in the last buffer allows these two policies to achieve a high service level while operating in a lean manufacturing mode. Table 7 shows the statistics of the system's performance measures for the four variants of the basic simulation case. In the case where the demand rate increases (first column of Table 7) we notice that the service level as well as the average WIP decreases for all policies, but some control mechanisms are more sensitive to this change than others. Specifically, the service rate in the Kanban and Hybrid systems decreases dramatically, whereas the Base Stock and Generalized Kanban policies seem to be more robust with regard to the increase of the demand rate. In the second variant (decreased arrival rate) of the basic simulation case one can see that all six control policies practically achieve the same service level. This is an indication that when the demand can be easily satisfied by the manufacturing system, the role of the production control policy diminishes. In this case the distribution of the objective function values over the control mechanisms also tends to level out. The increase of the standard deviation of the processing times (variant 3) has an effect similar to that of the increase of the demand rate. This can be attributed to the resulting decreased coordination among the various production stages, which increases the frequency with which machine starvation or blockage events occur. Again, the Kanban mechanism is the most affected by this parameter due to its tight production coordination scheme.

Table 7. Objective function values – service levels of pull control policies with best parameters for variants of the base simulation case

        Ra=1.0                    Ra=0.8                    st.d.=0.1                 st.d.=0.001
        E[f(x)]*   E[SL(x)]*      E[f(x)]*   E[SL(x)]*      E[f(x)]*   E[SL(x)]*      E[f(x)]*   E[SL(x)]*
K       -15.21     55.61          -32.77     96.52          -20.92     76.2           -28.32     90.43
BS      -23.75     64.2           -28.57     96.05          -25.48     88             -26.01     90.07
C       -19.15     62.89          -28.68     96.18          -24.54     87.91          -25.33     90.12
H       -14.71     57.91          -30.06     96.38          -21.23     79.43          -26.03     90.42
EK      -18.05     61.89          -29.33     96.21          -24.96     88.05          -25.74     90.07
GK      -20.57     62.92          -31.74     96.29          -27.56     88.72          -28.17     90.27




By contrast, the Generalized Kanban mechanism seems to react rather robustly. Finally, the decrease of the standard deviation of the processing times (variant 4) seems to have a negligible effect on the system's behavior, as indicated by the experimental data presented in the last column of Table 7. Tables 8 and 9 contain data regarding the sensitivity of the system's behavior with respect to the parameters of the controlling policy. For example, in Table 8, the cells in the i-th row that belong to the columns labeled "Base Stock" show the objective function value and service level that result when the i-th component (parameter $S_i$) of the corresponding decision variable vector is increased by the minimum possible value, i.e. by one. In general, the service level (objective function) is an increasing (decreasing) function of the control parameters. However, the rate at which the service level/objective function changes depends on the type of the control policy and the index (position) of the parameter in the parameter vector. For instance, in the five-station Kanban system, adding an additional kanban in the last stage will result in a larger decrease of the objective function value than adding an extra kanban in any of the upstream stages. It is interesting to observe the cases of the CONWIP and CONWIP/Kanban Hybrid systems, where the unitary increase of a control parameter in any of the stages 2, 3, 4, 5 seems to have the same effect. This can be explained by the fact that since

the last workstation is authorized to produce whenever it is able to, all parts in upstream buffers are continuously "pushed" towards the finished goods buffer, and therefore WIP in intermediate stages is scarce. By increasing the initial stock in the first buffer, the average WIP in intermediate stages increases, and thus this change has a greater impact on the objective value and service level. Generalized Kanban and Extended Kanban are characterized by two parameters per stage, and therefore the sensitivity analysis must consider both of these parameters. The system's performance for a unitary change in the base stock of the i-th stage is shown in the columns labeled "base stocks," whereas the cells under the label "free kanbans" contain system performance information for the case where the total number of kanbans of the i-th stage is increased by one but the base stock remains unaltered. The effect of adding an extra unit of base stock to a stage of a Generalized/Extended Kanban system is similar to that of adding a kanban to a stage of a Kanban system. The mean WIP is also an increasing function of the control parameters (Ki - Si), but as one can see from Table 9, rather large changes in the number of free kanbans are needed for a significant change in the objective function value to occur.
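This one-at-a-time perturbation procedure is simple to express in code. The sketch below is illustrative only: `evaluate` stands in for a simulation-based estimator of E[f(x)] and E[SL(x)] and is an assumed placeholder, not the chapter's actual implementation.

```python
import copy

def one_at_a_time_sensitivity(x_best, evaluate):
    """For each stage parameter, apply the minimum unitary increase and
    re-estimate the objective and service level via simulation.
    `evaluate(x)` is assumed to return (E[f(x)], E[SL(x)])."""
    base_f, base_sl = evaluate(x_best)
    rows = []
    for i in range(len(x_best)):
        x = copy.deepcopy(x_best)
        x[i] += 1                        # unitary increase of parameter i
        f, sl = evaluate(x)
        rows.append((i + 1, f, sl, f - base_f, sl - base_sl))
    return rows                          # one row per stage, as in Tables 8-9
```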

Table 8. Objective function value – service level sensitivity to parameter vector (Kanban, Base Stock, CONWIP, CONWIP/Kanban Hybrid)

Stage i   Kanban                  Base Stock              CONWIP                  CONWIP/Kanban Hybrid
          E[f(x*)]   E[SL(x*)]    E[f(x*)]   E[SL(x*)]    E[f(x*)]   E[SL(x*)]    E[f(x*)]   E[SL(x*)]
1         -29.1      90.32        -27.15     90.35        -27.53     90.92        -28.03     91.13
2         -29.34     90.51        -27.19     90.26        -27.24     90.82        -27.84     91.02
3         -29.52     90.65        -27.77     90.80        -27.24     90.77        -27.83     91.02
4         -29.61     90.51        -27.83     90.74        -27.24     90.81        -27.79     91.0
5         -29.86     90.94        -27.86     90.78        -27.24     90.83        -27.76     90.99




Table 9. Objective function value – service level sensitivity to parameter vector (Extended Kanban, Generalized Kanban)

          Extended Kanban                                    Generalized Kanban
          base stocks (Si)        free kanbans (Ki-Si)       base stocks (Si)        free kanbans (Ki-Si)
Stage i   E[f(x*)]   E[SL(x*)]    E[f(x*)]   E[SL(x*)]       E[f(x*)]   E[SL(x*)]    E[f(x*)]   E[SL(x*)]
1         -26.75     90.21        -25.83     90.07           -29.18     90.35        -28.20     90.27
2         -27.12     90.47        -25.79     90.01           -29.65     90.63        -28.27     90.29
3         -27.16     90.55        -25.8      90.1            -29.58     90.69        -28.29     90.24
4         -27.38     90.66        -25.77     90.06           -29.81     90.78        -28.26     90.28
5         -27.61     90.86        -25.73     90.06           -30.08     91.13        -28.21     90.32

CONCLUSION AND FUTURE RESEARCH

We implemented a hybrid optimization technique that combines a genetic algorithm with a local search procedure to find optimal decision variables for a family of JIT manufacturing systems. The goal was to minimize a weighted sum of the mean Work-In-Process inventories subject to the constraint of maintaining a target service level. Our numerical results indicate that the performance of a genetic algorithm can be easily enhanced by incorporating a local search component; however, the local search algorithm is more susceptible than the genetic algorithm to falsely interpreting random noise as actual objective function improvements. Moreover, our results support the intuitive perception that penalizing candidate solutions according to the level of constraint violation will, in most cases, yield better results than the "death penalty" approach. The performance of the JIT control policies with optimized parameters is presented analytically and commented upon. Finally, we conducted a sensitivity analysis with respect to the variation of the demand rate, the standard deviation of the service rates, and the control parameter vector. The results of the analysis offer considerable insight into the




underlying mechanics of the JIT control policies under consideration. Constraint handling and “noisy” or dynamic environments in the context of genetic optimization of manufacturing systems are currently active research fields. Indicatively, a relatively recent and interesting direction is to use evolutionary multi-objective techniques to handle constraints as additional objectives.
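As a concrete (and deliberately simplified) illustration of this scheme, the skeleton below couples a genetic algorithm with a greedy local search on the incumbent and handles the service-level constraint with a graded penalty rather than a "death penalty." It is a minimal sketch under stated assumptions: `simulate(x)` returning noisy estimates of (f(x), SL(x)), the penalty weight, and the operators are all placeholders, not the exact settings used in this chapter.

```python
import random

def penalized_fitness(x, simulate, target_sl=90.0, weight=1.0):
    """f(x) minus a penalty proportional to the service-level shortfall."""
    f, sl = simulate(x)                  # noisy estimates from simulation
    return f - weight * max(0.0, target_sl - sl)

def local_search(x, simulate):
    """Greedy unitary moves on each coordinate; accept improvements only.
    Because the fitness is noisy, apparent improvements may be spurious."""
    best, best_fit = x[:], penalized_fitness(x, simulate)
    for i in range(len(x)):
        for step in (+1, -1):
            y = best[:]
            y[i] = max(0, y[i] + step)
            fit = penalized_fitness(y, simulate)
            if fit > best_fit:
                best, best_fit = y, fit
    return best

def hybrid_ga(simulate, n=30, dim=5, gens=50, lo=0, hi=20):
    pop = [[random.randint(lo, hi) for _ in range(dim)] for _ in range(n)]
    for _ in range(gens):
        pop.sort(key=lambda x: penalized_fitness(x, simulate), reverse=True)
        pop[0] = local_search(pop[0], simulate)     # hybridization step
        elite, children = pop[: n // 2], []
        while len(elite) + len(children) < n:
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, dim)
            child = a[:cut] + b[cut:]               # one-point crossover
            i = random.randrange(dim)               # unitary mutation
            child[i] = max(lo, min(hi, child[i] + random.choice((-1, 1))))
            children.append(child)
        pop = elite + children
    return max(pop, key=lambda x: penalized_fitness(x, simulate))
```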

REFERENCES

Alabas, C., Altiparmak, F., & Dengiz, B. (2002). A comparison of the performance of artificial intelligence techniques for optimizing the number of kanbans. The Journal of the Operational Research Society, 53(8), 907–914. doi:10.1057/palgrave.jors.2601395

Bellman, R. E. (1957). Dynamic Programming. Princeton: Princeton University Press.

Berkley, B. J. (1992). A review of the kanban production control research literature. Production and Operations Management, 1(4), 393–411. doi:10.1111/j.1937-5956.1992.tb00004.x


Bhat, U. N. (2008). An Introduction to Queueing Theory: Modelling and Analysis in Applications. Boston: Birkhäuser.

Bowden, R. O., Hall, J. D., & Usher, J. M. (1996). Integration of evolutionary programming and simulation to optimize a pull production system. Computers & Industrial Engineering, 31(1-2), 217–220. doi:10.1016/0360-8352(96)00115-5

Buzacott, J. A., & Shanthikumar, J. G. (1992). A general approach for coordinating production in multiple cell manufacturing systems. Production and Operations Management, 1(1), 34–52. doi:10.1111/j.1937-5956.1992.tb00338.x

Buzacott, J. A., & Shanthikumar, J. G. (1993). Stochastic Models of Manufacturing Systems. New York: Prentice Hall.

Dallery, Y., & Liberopoulos, G. (2000). Extended kanban control system: Combining kanban and base stock. IIE Transactions, 32(4), 369–386. doi:10.1080/07408170008963914

Fitzpatrick, J. M., & Grefenstette, J. J. (1988). Genetic algorithms in noisy environments. Machine Learning, 3, 101–120.

Gershwin, S. B. (1994). Manufacturing Systems Engineering. New York: Prentice Hall.

Hammel, U., & Bäck, T. (1994). Evolution strategies on noisy functions: How to improve convergence properties. Parallel Problem Solving from Nature, 3, 159–168.

Howard, R. (1960). Dynamic Programming and Markov Processes. Massachusetts: MIT Press.

Hurrion, R. D. (1997). An example of simulation optimization using a neural network metamodel: Finding the optimum number of kanbans in a manufacturing system. The Journal of the Operational Research Society, 48(11), 1105–1112.

Karaesmen, F., & Dallery, Y. (2000). A performance comparison of pull type control mechanisms for multi-stage manufacturing. International Journal of Production Economics, 68, 59–71. doi:10.1016/S0925-5273(98)00246-1

Koulouriotis, D. E., Xanthopoulos, A. S., & Gasteratos, A. (2008). A reinforcement learning approach for production control in manufacturing systems. In 1st International Workshop on Evolutionary and Reinforcement Learning for Autonomous Robot Systems (pp. 24-31). Patras, Greece.

Koulouriotis, D. E., Xanthopoulos, A. S., & Tourassis, V. D. (2010). Simulation optimisation of pull control policies for serial manufacturing lines and assembly manufacturing systems using genetic algorithms. International Journal of Production Research, 48(10), 2887–2912. doi:10.1080/00207540802603759

Law, A., & Kelton, D. (2000). Simulation Modelling and Analysis. New York: McGraw Hill.

Le Riche, R., Knopf-Lenoir, C., & Haftka, R. T. (1995). A segregated genetic algorithm for constrained structural optimization. In Sixth International Conference on Genetic Algorithms (pp. 558-565).

Liberopoulos, G., & Dallery, Y. (2000). A unified framework for pull control mechanisms in multi-stage manufacturing systems. Annals of Operations Research, 93, 325–355. doi:10.1023/A:1018980024795

Panayiotou, C. G., & Cassandras, C. G. (1999). Optimization of kanban-based manufacturing systems. Automatica, 35(3), 1521–1533. doi:10.1016/S0005-1098(99)00074-6

Paternina-Arboleda, C. D., & Das, T. K. (2001). Intelligent dynamic control policies for serial production lines. IIE Transactions, 33, 65–77. doi:10.1080/07408170108936807


Shahabudeen, P., Gopinath, R., & Krishnaiah, K. (2002). Design of bi-criteria kanban system using simulated annealing technique. Computers & Industrial Engineering, 41(4), 355–370. doi:10.1016/S0360-8352(01)00060-2

Shahabudeen, P., & Krishnaiah, K. (1999). Design of a bi-criteria kanban system using genetic algorithm. International Journal of Management and System, 15(3), 257–274.

Smith, G. C., & Smith, S. S. F. (2002). An enhanced genetic algorithm for automated assembly planning. Robotics and Computer-Integrated Manufacturing, 18(5-6), 355–364. doi:10.1016/S0736-5845(02)00029-7

Spearman, M. L., Woodruff, D. L., & Hopp, W. J. (1990). CONWIP: A pull alternative to kanban. International Journal of Production Research, 28, 879–894. doi:10.1080/00207549008942761

Sugimori, Y., Kusunoki, K., Cho, F., & Uchikawa, S. (1977). Toyota production system and kanban system: Materialization of just-in-time and respect-for-humans systems. International Journal of Production Research, 15(6), 553–564. doi:10.1080/00207547708943149

Veatch, M. H., & Wein, L. M. (1992). Monotone control of queueing networks. Queueing Systems, 12, 391–408. doi:10.1007/BF01158810

Venkatraman, S., & Yen, G. G. (2005). A generic framework for constrained optimization using genetic algorithms. IEEE Transactions on Evolutionary Computation, 9(4), 424–435. doi:10.1109/TEVC.2005.846817

Vivo-Truyols, G., Torres-Lapasio, J. R., & García-Alvarez-Coque, M. C. (2001). A hybrid genetic algorithm with local search: I. Discrete variables: optimisation of complementary mobile phases. Chemometrics and Intelligent Laboratory Systems, 59, 89–106. doi:10.1016/S0169-7439(01)00148-4

Yamamoto, H., Qudeiri, J. A., & Marui, E. (2008). Definition of FTL with bypass lines and its simulator for buffer size decision. International Journal of Production Economics, 112(1), 18–25. doi:10.1016/j.ijpe.2007.03.007

Yang, T., Kuo, Y., & Cho, C. (2007). A genetic algorithms simulation approach for the multi-attribute combinatorial dispatching decision problem. European Journal of Operational Research, 176(3), 1859–1873. doi:10.1016/j.ejor.2005.10.048

Yuan, Q., He, Z., & Leng, H. (2008). A hybrid genetic algorithm for a class of global optimization problems with box constraints. Applied Mathematics and Computation, 197, 924–929. doi:10.1016/j.amc.2007.08.081

This work was previously published in Supply Chain Optimization, Design, and Management: Advances and Intelligent Methods, edited by Ioannis Minis, Vasileios Zeimpekis, Georgios Dounias and Nicholas Ampazis, pp. 212-231, copyright 2011 by Business Science Reference (an imprint of IGI Global).



Chapter 38

Comparison of Connected vs. Disconnected Cellular Systems: A Case Study

Gürsel A. Süer, Ohio University, USA
Royston Lobo, S.S. White Technologies Inc., USA

ABSTRACT

In this chapter, two cellular manufacturing systems, namely connected cells and disconnected cells, have been studied, and their performance was compared with respect to average flowtime and work-in-process inventory under a make-to-order demand strategy. The study was performed in a medical device manufacturing company considering (a) their existing system and (b) variations from the existing system obtained by considering different process routings. Simulation models for each of the systems and each of the options were developed in the ARENA 7.0 simulation software. The data used to model each of these systems were obtained from the company and covered a period of nineteen months. Considering the existing system, no dominance was established between connected and disconnected cells, as mixed results were obtained for different families. On the other hand, when different process routings were used, the connected system outperformed the disconnected system. It is suspected that one additional operation required in the disconnected system, as well as a batching requirement at the end of packaging, led to the poorer performance of the disconnected cells. Finally, increased routing flexibility improved the performance of the connected cells, whereas it had adverse effects in the disconnected cells configuration.

DOI: 10.4018/978-1-4666-1945-6.ch038



INTRODUCTION

Cellular Manufacturing is a well-known application of Group Technology (GT). Cellular design typically involves determining appropriate part families and corresponding manufacturing cells. This can be done by grouping parts into families and then forming machine cells based on the part families; by determining machine cells first and forming the part families based on these cells; or by carrying out both formations simultaneously. In a cellular manufacturing system, there may be a manufacturing cell for each part family, or some of the manufacturing cells can process more than one part family, based on the flexibility of the cells. The factors affecting the formation of cells can differ under various circumstances; some of them are the volume of work to be performed by the machine cell, variations in the routing sequences of the part families, processing times, etc. A manufacturing system in which the goods or products are manufactured only after customer orders are received is called a make-to-order system. This type of system helps reduce inventory levels since no finished goods inventory is kept on hand. In this chapter, two types of cellular layouts are analyzed, namely connected cells (single-stage cellular system) and disconnected cells (multi-stage cellular system), and their performance is compared under various circumstances for a make-to-order company. This problem has been observed in a medical device manufacturing company. The management was interested in such a comparison to finalize the cellular design. It was also important to research the impact of flexibility within each system for different combinations of family routings. A similar situation of connected vs. disconnected cellular design was also observed in a shoe manufacturing company and in a jewelry manufacturing company. The authors believe that this problem has not been addressed in the literature


before, even though it has been observed in more than one company, and it is therefore worthy of study.

BACKGROUND

The connected cells represent a continuous flow where the products enter the cells in the manufacturing area, complete the machining operations, and exit through the corresponding assembly and packaging area after completion of the assembly and packaging operations. In other words, the output of a cell in the manufacturing area becomes the input to the corresponding cell in the assembly and packaging area. The biggest advantage of connected cells is that material flow is smoother and hence flowtime is expected to be shorter. This is also expected to result in lower WIP inventory. This chapter focuses on a cellular manufacturing system similar to the system shown in Figure 1. There are three cells in the manufacturing area and three cells in the assembly and packaging area. In these cells, M1 through M3 represent the machines in the manufacturing area, and A1, A2 and P1 through P3 represent the machines in the assembly and packaging area. The products essentially follow a unidirectional flow. The three cells in the manufacturing area are similar since they have similar machines, and all the products can be manufactured in any of the cells. However, the situation gets complicated in the assembly and packaging area. The three cells there have restrictions in terms of the products that they can process. Therefore, the manufacturing cell to which a product is assigned is dictated by the packaging cell(s) in which it can be processed later on. This constraint makes the manufacturing system less flexible.

Figure 1. Connected cells

In the disconnected cell layout, the products enter the manufacturing area, complete the machining operations, and exit this area. On exiting the manufacturing area, the products can go to more than one of the assembly and packaging cells. In other words, the output from the cells in the manufacturing area can become an input for some of the cells in the assembly and packaging area (partially flexible disconnected cells) or all of them (completely flexible disconnected cells). Figure 2 shows a partially flexible disconnected cells case where the parts from cell 1 in the manufacturing area can go to any of the cells in the assembly and packaging area. Parts from cell 2 can only go to cell 2 and cell 3 of the assembly and packaging area. Parts from cell 3 of the manufacturing area can only go to cell 3 of the assembly and packaging area. The disconnected system design allows more flexibility. On the other hand, due to interruptions in the flow, some delays may occur, which may eventually lead to higher flowtimes and WIP inventory levels.

Figure 2. Disconnected cells with partial flexibility

LITERATURE REVIEW

A group of researchers compared the performance of cellular layouts with process layouts. Flynn and Jacobs (1987) developed a simulation model using SLAM for an actual shop to compare the performance of a group technology layout against a process layout. Morris and Tersine (1990) developed simulation models for a process layout and a cellular layout using SIMAN. The two performance measures used were throughput time and

work-in-process inventory (WIP). Yazici (2005) developed a simulation model using Promodel, based on data collected from a screen-printing company, to ascertain the influence of volume, product mix, routing and labor flexibilities in the presence of fluctuating demand. A comparison between one-cell and two-cell configurations versus a job shop is made to determine the shortest delivery and highest utilization. Agarwal and Sarkis (1998) reviewed the conflicting results from the literature in regard to the superiority of cellular layouts vs. functional layouts. They attempted to identify and compile the existing studies and understand the conflicting findings. Johnson and Wemmerlov (1996) analyzed twenty-four model-based studies and concluded that the results of these works cannot assist practitioners in making choices between existing layouts and alternative cell systems. Shafer and Charnes (1993) studied cellular manufacturing under a variety of operating conditions. Queueing theoretic and simulation models of cellular and functional layouts are developed for various shop operating environments to investigate several factors believed to influence the benefits associated with a cellular manufacturing layout. Another group of researchers focused on analyzing cellular systems. Selen and Ashayeri (2001) used a simulation approach to identify improvements in the average daily output through



management of buffer sizes, reduced repair time, and cycle time in an automotive company. Albino and Garavelli (1998) simulated a cellular manufacturing system using Matlab to study the effects of resource dependability and routing flexibility on the performance of the system. Based on the simulation results, the authors concluded that as resource dependability decreases, flexible routings for part families can increase productivity. On the contrary, from an economic standpoint, they concluded that the benefits are greatly reduced by the costs of increased routing flexibility and resource dependability. Caprihan and Wadhwa (1997) studied the impact of fluctuating levels of routing flexibility on the performance of a Flexible Manufacturing System (FMS). Based on the results obtained, the authors concluded that there is an optimal flexibility level beyond which the system performance tends to decline. Also, an increase in routing flexibility, when made available at an associated cost, seldom tends to be beneficial. Süer, Huang, and Maddisetty (2010) discussed layered cellular design to deal with demand variability. They proposed a methodology to design a cellular system that consisted of dedicated cells, shared cells and a remainder cell. Other researchers have studied make-to-order and make-to-stock production strategies. Among them, DeCroix and Arreola-Risa (1998) studied the


optimality of a Make-to-Order (MTO) versus a Make-to-Stock (MTS) policy for a manufacturing setup producing various heterogeneous products facing random demands. Federgruen and Katalan (1999) investigated a hybrid system comprising MTO and MTS items and presented a host of alternatives to prioritize the production of the MTO and MTS items. Van Donk (2000) used the concept of the decoupling point (DP) to develop a framework to help managers in the food processing industries decide which of their products should be MTO and which ones should be MTS. Gupta and Benjaafar (2004) presented a hybrid strategy which is a combination of the MTO and MTS modes of production. Nandi and Rogers (2003) simulated a manufacturing system to study its behavior in a make-to-order environment under a control policy involving an order release component and an order acceptance/rejection component. The authors are not aware of any other study that focuses on comparing the performance of connected cells with disconnected cells, and we therefore believe this chapter is an important contribution to the literature.


DESCRIPTION OF THE SYSTEM STUDIED: THE CASE STUDY

This section describes the medical device manufacturing company where the experimentation was carried out. The products essentially follow a unidirectional flow. The manufacturing process is mainly divided into two areas, namely fabrication and packaging. Each area consists of three cells, and the cells are not identical. The one-piece flow strategy is adopted in all cells. The company has well-defined families, which are determined based on packaging requirements. Furthermore, the cells have already been formed. The average flowtime and the work-in-process inventory are the performance measures used to evaluate the performance of connected cells and disconnected cells.

Product Families

The products are grouped under three families: Family 1 (F1), Family 2 (F2), and Family 3 (F3). The finished products are vials containing blood sugar strips; each vial contains 25 strips. The numbers of products in families 1, 2 and 3 are 11, 21 and 4, respectively. The families were already formed by the manufacturer based on the number of vials (subfamilies) included in the box. Family 1 requires only one subassembly (S), one box (B1), one label (L), and one insert for instructions (I); family 2 (F2) requires 2 subassemblies, one box (B2), one label and one insert; and family 3 (F3) requires 4 subassemblies, one box (B3), one label and one insert to become a finished product, as shown in Table 1. Obviously, this family classification is strictly from a manufacturing perspective; the marketing department uses its own family definition based on product-function-related characteristics. The family definition has been made based on the limitations of the packaging machines. Not all packaging machines can insert 4 vials into a box. This seemingly simple issue becomes an obstacle in assigning products to packaging cells

and furthermore becomes a restriction in assigning products even to manufacturing cells in the connected cellular design.

Fabrication Cells

The fabrication area is where the subassemblies are manufactured. This area contains three cells which manufacture a single common subassembly, and hence all three families can be manufactured in any of the three cells. The fabrication area has a conveyor system which transfers the products from one machine to another based on the one-piece flow principle.

Operations in Fabrication Cells

There are three operations associated with the fabrication area:

• Lamination
• Slicing and Bottling
• Capping

The machines used for operation 1 in all three cells are similar and operate at the same speed (120 vials/min), but the number of machines within each cell varies. Operation 2 has machines that process 17 vials/min and 40 vials/min. Similarly, operation 3 has machines that process 78 vials/min and 123 vials/min. Table 2 shows the distribution of machines and velocities among the three cells.

Table 1. Product structures of families

Family   S   L   I   B1   B2   B3
F1       1   1   1   1
F2       2   1   1        1
F3       4   1   1             1


Table 2. Number of machines and their production rates in fabrication cells

                              Op. 1     Op. 2                 Op. 3                 Output of Bottleneck
                              Type I    Type I    Type II     Type I    Type II     (vials/min)
Production rate (vials/min)   120       17        40          78        123
Cell 1                        1         2         2           0         1           114
Cell 2                        1         4         0           1         0           68
Cell 3                        2         3         2           0         2           131
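The rightmost column of Table 2 follows directly from the machine counts and speeds: each operation's capacity is the summed rate of its parallel machines, and a cell's output is limited by its slowest (bottleneck) operation. A few lines of Python, with the data transcribed from Table 2, confirm the arithmetic:

```python
# Machines per cell as (count, rate in vials/min) pairs, per operation.
cells = {
    "Cell 1": [[(1, 120)], [(2, 17), (2, 40)], [(0, 78), (1, 123)]],
    "Cell 2": [[(1, 120)], [(4, 17), (0, 40)], [(1, 78), (0, 123)]],
    "Cell 3": [[(2, 120)], [(3, 17), (2, 40)], [(0, 78), (2, 123)]],
}

for name, ops in cells.items():
    # An operation's capacity is the summed rate of its parallel machines;
    # the cell's output is limited by its slowest (bottleneck) operation.
    rates = [sum(n * r for n, r in op) for op in ops]
    print(name, "bottleneck =", min(rates), "vials/min")
# -> 114, 68 and 131, matching the last column of Table 2.
```

The same min() logic yields the combined rates reported later in Tables 5 and 6, where the output of a fabrication-packaging pairing is the minimum of the two bottleneck rates.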

Packaging Cells

The packaging area also has a conveyor system similar to the fabrication area, which transfers products within packaging cells and also from the fabrication cells to the packaging cells. In the packaging area, the subassemblies produced in the fabrication area are used to produce the various finished products. Packaging cell 1 is semi-automatic, while cells 2 and 3 are automatic. This difference in the types of machines results in constraints that do not allow the packaging of certain products in certain cells. There are a total of 36 finished products, which differ in the quantity of vials they contain, the type of raw material the vials are made of, and the country to which they are shipped. The original cell feasibility matrix for the families is given in Table 3; the restrictions are due to constraints in the packaging of the vials.

Operations in Packaging Cells

There are five operations performed in the packaging area, and each operation requires one machine. The operations are described as follows:

• Feeding (this operation is only performed in the case of disconnected cells)
• Labeling
• Assembly (automatic in cells 2 and 3, semi-automatic in cell 1)
• Sealing
• Bar Coding

Table 4 shows the production rates of the machines in all cells.

ALTERNATE DESIGNS CONSIDERED

In this section, the current product-cell feasibility restrictions are discussed for both connected and disconnected cellular systems.

Table 3. Feasibility matrix of families and packaging cells

Family   Packaging Cell 1   Packaging Cell 2   Packaging Cell 3
F1       X                  X
F2       X                  X                  X
F3       X                                     X

Connected Cells

In this system, the cells are set up such that the packaging cells form an extension or continuation of the respective fabrication cells. In other words, the output of a cell in the fabrication area becomes the input for the corresponding packaging cell. Hence, it is referred to as a connected system. The connected system for the current product-cell feasibility is shown in Figure 3.


Table 4. Production rates for assembly-packaging machines in vials/minute

Cell     Family     Op. 4   Op. 5   Op. 6   Op. 7   Op. 8
Cell 1   Family 1   160     135     80      150     150
Cell 1   Family 2   160     135     80      150     150
Cell 1   Family 3   160     135     80      150     150
Cell 2   Family 1   160     135     100     150     150
Cell 2   Family 2   160     135     180     150     150
Cell 2   Family 3   NA      NA      NA      NA      NA
Cell 3   Family 1   NA      NA      NA      NA      NA
Cell 3   Family 2   160     135     150     150     150
Cell 3   Family 3   160     135     280     150     150

The output rates of families 1, 2 and 3 are essentially determined by the bottleneck, i.e., the slowest machine, in each cell of the fabrication or packaging area, and they are shown in Table 5.

Figure 3. Cell routing of families for the connected system

Disconnected Cells

In this case, the output of a cell in the fabrication area can become an input for more than one cell in the packaging area, depending upon the constraints in the packaging area. This can be considered a partially flexible disconnected cells type of system. The cell routing for each family is shown in Figure 4. In this figure, solid lines indicate that all the products processed in that particular fabrication cell can be processed in the assembly and


packaging cell that they are connected to. On the other hand, the dashed lines show that only some of the products processed in the fabrication cell can be processed in the corresponding assembly and packaging cell. This provides a greater amount of flexibility with respect to the routing of the parts in the cellular system. The output rates of family 1, family 2, and family 3 depend on the fabrication-packaging cell combination and they are determined by the slowest machine as shown in Table 6.

Cases Considered

The experimentation discussed in this chapter can be grouped into the following cases:

• Original Family-Cell Feasibility Matrix: Production orders are based on customer orders.
• Various Family-Cell Feasibility Options: Seven different family-cell feasibility options have been considered, as given in Table 7. In this case too, production orders are based on customer orders.


Table 5. Output rates for cells in the connected system

Cell #   Family #   Fabrication Area Bottleneck   Packaging Area Bottleneck   Output Rate
                    (vials/min)                   (vials/min)                 (vials/min)
Cell 1   Family 1   114                           80                          80
         Family 2   114                           80                          80
         Family 3   114                           80                          80
Cell 2   Family 1   68                            100                         68
         Family 2   68                            135                         68
Cell 3   Family 2   131                           135                         131
         Family 3   131                           135                         131

Figure 4. Cell routing of families for disconnected system

METHODOLOGY USED

This section describes the methodology used to develop the different simulation models in Arena 7.0.

Input Data Analysis

Input data such as customer order distributions, their respective inter-arrival times, processing times, and routings were all obtained from the data provided by the company. The data provided were basically the total sales volumes in vials for each part belonging to one of the three families over a period of nineteen months. Table 8 shows the customer order size and inter-arrival time distributions for each product.

Simulation Models

The models were run 24 hours a day, which represented 3 shifts around the clock. Setup times and material handling times were negligible. Preemption was not allowed due to material control restrictions imposed by the FDA. Vials move based on one-piece flow between machines. The simulation models are discussed separately for the different cases in the following paragraphs.

Case 1: Connected Cells: After the entities are created, they are routed to cells 1, 2 or 3 based on the type of family they belong to. The entities enter the fabrication area as a batch equivalent to the customer order size. Once a batch of entities enters the cell, the batch is split and there is one-piece flow within the cell. Entities belonging to a family go to one of its feasible cells based on the shorter queue length at the 2nd operation. This is done


Table 6. Output rate of each routing combination for the disconnected system

Family     Fabrication Area Cell      Packaging Area Cell        Output Rate of Routing
           (bottleneck, vials/min)    (bottleneck, vials/min)    Combination (vials/min)
Family 1   Cell 1 (114)               Cell 1 (80)                80
           Cell 1 (114)               Cell 2 (100)               100
           Cell 2 (68)                Cell 1 (80)                68
           Cell 2 (68)                Cell 2 (100)               68
           Cell 3 (131)               Cell 1 (80)                80
           Cell 3 (131)               Cell 2 (100)               100
Family 2   Cell 1 (114)               Cell 1 (80)                80
           Cell 1 (114)               Cell 2 (135)               114
           Cell 1 (114)               Cell 3 (135)               114
           Cell 2 (68)                Cell 1 (80)                68
           Cell 2 (68)                Cell 2 (135)               68
           Cell 2 (68)                Cell 3 (135)               68
           Cell 3 (131)               Cell 1 (80)                80
           Cell 3 (131)               Cell 2 (135)               131
Family 3   Cell 1 (114)               Cell 1 (80)                80
           Cell 1 (114)               Cell 3 (135)               114
           Cell 2 (68)                Cell 1 (80)                68
           Cell 2 (68)                Cell 3 (135)               68
           Cell 3 (131)               Cell 1 (80)                80
           Cell 3 (131)               Cell 3 (135)               131

Table 7. Different family-cell feasibility options (entries give the families each cell can process under each option)

Cellular System      Cell Type     Cell   O1      O2      O3      O4      O5   O6      O7
Connected Cells      Fab. Cells    C1     1       1,2,3   1,2     1,2,3   1    1,2     1,2
                                   C2     2       1,2,3   2,3     1,2     2    2,3     2,3
                                   C3     3       1,2,3   1,3     2,3     3    1,3     1,3
                     Pack. Cells   C1     1       1,2,3   1,2     1,2,3   1    1,2     1,2
                                   C2     2       1,2,3   2,3     1,2     2    2,3     2,3
                                   C3     3       1,2,3   1,3     2,3     3    1,3     1,3
Disconnected Cells   Fab. Cells    C1     1       1,2,3   1,2     1,2,3   1    1,2,3   1,2
                                   C2     2       1,2,3   2,3     1,2     2    1,2,3   2,3
                                   C3     3       1,2,3   1,3     2,3     3    1,2,3   1,3
                     Pack. Cells   C1     1,2,3   1,2,3   1,2,3   1,2,3   1    1,2     1,2
                                   C2     1,2,3   1,2,3   1,2,3   1,2,3   2    2,3     2,3
                                   C3     1,2,3   1,2,3   1,2,3   1,2,3   3    1,3     1,3


Table 8. Inter-arrival time and customer order size distributions for products

Family     Product #   Inter-arrival Time Distribution     Customer Order Size Distribution
Family 1   1           0.999 + WEIB(0.115, 0.54)           1.09 + LOGN(1.56, 1.06)
           2           0.999 + WEIB(0.0448, 0.512)         TRIA(18, 23.7, 52)
           3           1.11 + EXPO(1.87)                   9 + WEIB(7.66, 1.27)
           4           2 + LOGN(3.19, 3.68)                2 + 17 * BETA(0.387, 0.651)
           5           4 + LOGN(5.05, 14)                  207 + LOGN(86.5, 139)
           6           UNIF(0, 26)                         TRIA(6, 12.5, 71)
           7           -0.001 + 26 * BETA(0.564, 0.304)    UNIF(9, 80)
           8           TRIA(0, 6.9, 23)                    EXPO(25.3)
           9           NORM(13.7, 7.49)                    NORM(108, 30.8)
           10          6 + WEIB(3.78, 0.738)               TRIA(98, 120, 187)
           11          UNIF(0, 26)                         UNIF(14, 34)
Family 2   12          0.999 + WEIB(0.0126, 0.405)         5 + WEIB(7.51, 0.678)
           13          1 + LOGN(0.99, 2.62)                2 + 11 * BETA(0.412, 0.527)
           14          1.24 + EXPO(1.46)                   30 + 26 * BETA(0.643, 1.08)
           15          EXPO(7.06)                          2 + 34 * BETA(0.321, 0.519)
           16          0.999 + WEIB(0.0313, 0.503)         NORM(149, 57.1)
           17          0.999 + WEIB(0.195, 1.12)           NORM(23, 14.2)
           18          TRIA(0, 11.2, 25)                   101 * BETA(0.822, 0.714)
           19          26 * BETA(0.649, 0.42)              EXPO(154)
           20          EXPO(7.4)                           UNIF(0, 90)
           21          UNIF(0, 26)                         TRIA(0, 231, 330)
           22          28 * BETA(1.11, 0.547)              TRIA(0, 224, 325)
           23          27 * BETA(0.679, 0.429)             EXPO(119)
           24          28 * BETA(0.468, 0.255)             TRIA(425, 1.05e+003, 2.5e+003)
           25          1.16 + LOGN(2.48, 1.76)             NORM(867, 534)
           26          EXPO(7.03)                          NORM(68, 32.8)
           27          TRIA(0, 4.44, 25)                   EXPO(13.8)
           28          9 + 17 * BETA(0.559, 0.0833)        24 * BETA(0.67, 0.969)
           29          28 * BETA(0.466, 0.301)             NORM(420, 168)
           30          28 * BETA(0.932, 0.479)             NORM(267, 110)
           31          2 + 26 * BETA(0.314, 0.458)         TRIA(0, 274, 381)
           32          UNIF(0, 26)                         TRIA(0, 297, 368)
Family 3   33          0.999 + WEIB(0.0117, 0.424)         TRIA(843, 1.19e+003, 2e+003)
           34          1.33 + 1.96 * BETA(0.3, 0.636)      WEIB(6.83, 0.613)
           35          1 + LOGN(5.23, 7.03)                37 + LOGN(147, 1.51e+003)
           36          4 + 22 * BETA(0.305, 0.197)         TRIA(0, 543, 591)
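The expressions in Table 8 are in Arena's distribution notation: WEIB(β, α) is a Weibull variate with scale β and shape α, TRIA(a, m, b) is triangular with mode m, LOGN(μ, σ) specifies the mean and standard deviation of the lognormal variate itself, and a leading constant is an additive shift. A hedged sketch of how a few of these entries could be sampled outside Arena follows; the parameter conventions stated here are assumptions about Arena's notation, not taken from the chapter.

```python
import math
import random

def logn(mean, std):
    """Arena-style LOGN(mean, std): convert the lognormal's own mean and
    standard deviation to the underlying normal parameters."""
    sigma2 = math.log(1.0 + (std / mean) ** 2)
    mu = math.log(mean) - 0.5 * sigma2
    return random.lognormvariate(mu, math.sqrt(sigma2))

# Product 1: inter-arrival 0.999 + WEIB(0.115, 0.54); order size 1.09 + LOGN(1.56, 1.06)
interarrival_1 = 0.999 + random.weibullvariate(0.115, 0.54)   # (scale, shape)
order_size_1 = 1.09 + logn(1.56, 1.06)

# Product 2: order size TRIA(18, 23.7, 52); note Python's argument order.
order_size_2 = random.triangular(18, 52, 23.7)                # (low, high, mode)

# Product 4: order size 2 + 17 * BETA(0.387, 0.651), a shifted/scaled beta.
order_size_4 = 2 + 17 * random.betavariate(0.387, 0.651)
```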


because the second operation in each cell was identified as the bottleneck operation based on trial runs. In cell 1 and cell 3, the entities undergo operation 1 and go to operation 2, where two types of machines, namely the slow (Type I) and fast (Type II) machines, are available for processing. The entities are routed to either type of machine based on a percentage which was decided after a number of simulation runs in order to minimize the queue lengths and hence the waiting time. In cell 1, 30% of the entities were routed to the Type I machines and the rest were routed to the Type II machine. In cell 3, 40% of the entities were routed to the Type I machines and the rest were routed to the Type II machines. Each of the entities leaving a fabrication cell enters the corresponding packaging cell. For example, entities from cell 1 in the fabrication area will enter cell 1 of the packaging area. The entities entering the packaging area undergo processing through operation 4. In the fifth operation, the vials are grouped based on the type of family they belong to. Family 1 consists of only 1 vial, family 2 consists of 2 vials, and family 3 consists of 4 vials. Thus, the vials that are batched in Arena after operation 5 are processed in operations 6, 7 and 8, where they are boxed, sealed and coded. In the final batching, the vials are batched together in a box based on the final customer order sizes. The final batch sizes are the same as the input batch sizes. There is a waiting time associated with this step, since the entities might have to wait till the required batch size is reached and only then get disposed. The warm-up time for the model was determined to be 2000 hours based on steady-state analysis. The simulation was run for 2500 hours after the end of the warm-up period.

Case 1: Disconnected Cells: The entities enter the fabrication area in batches as explained for the connected system. The batches of entities in the disconnected system are routed differently compared to the connected system. Here, the batches of entities are routed to cell 1, cell 2, or cell 3 of the fabrication area based on the shortest

queue length at the bottleneck operation, which is operation 2, as explained earlier. The flexibility of routing the families to any of the cells in this type of system is the only major difference between the connected and disconnected systems in the fabrication area. The processing times of the machines and the sequence of operations for the entities are the same for both systems. Since the flow is disconnected in this system, the entities are batched again to the same customer order sizes at the end of the fabrication area. The batches of entities entering the packaging area are routed to specific packaging cells based on the shortest queue length, as shown earlier in Table 4. These batches are then split, and the entities follow a one-piece flow. Also, there is an extra feeding operation at the start of the packaging cells in order to accommodate the transfer of entities from fabrication to packaging. The method by which the entities are transferred from fabrication to packaging and the extra feeding operation are the only major differences between the connected and disconnected systems in the packaging area. The processing times of the machines and the sequence of operations for the entities are the same for both systems.

Case 2: This case is very similar to case 1, except that the routings for products are varied as given in Table 7. In this table, Option 5 (O5) is the least flexible arrangement, where each cell can process only one product family for both connected and disconnected cells. Option 2 (O2) is the most flexible arrangement, with three cells capable of running all three product families in both connected and disconnected cells. The remaining options vary in flexibility between O5 and O2. In Option 1, the system is highly inflexible in the connected cells arrangement, whereas the packaging cells of the disconnected arrangement are very flexible (three product families for each cell). In options 3, 4, 6 and 7, each product family can be run in at least two cells. In option 3, the packaging cells of the disconnected arrangement are more flexible (once again, three product families for each cell). In


option 4, a little more flexibility is added to both connected and disconnected cells (cell 1 can run all three families). In option 6, more flexibility is added to the fabrication cells of the disconnected system (three product families for each cell). In option 7, each family can be run in two cells. However, the models for options 1 and 5 did not stabilize, and therefore these options were not included in the comparisons. Production order quantities for products 33 and 36 were reduced by 40% and 50%, respectively, to fit into the existing capacity for case 1. Validation and verification are an inherent part of any computer simulation analysis. The models were verified and validated before statistical analysis was performed for all scenarios.
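The shortest-queue routing rule used in both systems reduces to a one-line selection. In the minimal sketch below, the feasibility map follows Table 3, the queue lengths would come from the running model's state, and all names are illustrative rather than Arena constructs:

```python
# Family-to-cell feasibility, as in Table 3 (packaging side).
FEASIBLE_CELLS = {"F1": (1, 2), "F2": (1, 2, 3), "F3": (1, 3)}

def route(family, queue_len):
    """Pick the feasible cell whose bottleneck operation currently has
    the shortest queue; queue_len maps cell id -> queue length."""
    return min(FEASIBLE_CELLS[family], key=lambda cell: queue_len[cell])

# Example: with queues {1: 40, 2: 12, 3: 25}, an F3 order goes to cell 3.
print(route("F3", {1: 40, 2: 12, 3: 25}))  # -> 3
```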

comparisons for the families for the same performance measures but the comparisons are made between different connected systems from cases 1 and 2. Table 13 also displays comparisons for the families for the same performance measures but the comparisons are made between different disconnected systems from cases 1 and 2. Results are denoted as significant (S) or not significant (NS) based on the conclusions reached. Also whenever significant, better option was denoted in a parenthesis. The significance of the results was based on the p-value obtained from the T-test conducted for an alpha level of 0.05. As mentioned earlier, no results for options 1 and 5 were obtained as the system did not stabilize. As observed in Table 11, for case 1, the flowtimes and work-in-process were observed to be different and the disconnected system had lower flowtimes and WIP for families F1 and F3 while the difference was significant for F1. On the other hand, WIP was significantly lower for F2 in the connected system. For case 2 with all the options considered, when there was a significant difference, this was always in favor of connected systems. For option 2, the flowtime for family 2 and the WIP for all three families for the connected system were significantly lower than those of in the disconnected system. For options 3, 6, and 7 which were the same for the connected system, the flowtimes and WIP for families 1 and 2 were significantly lower than the disconnected

RESULTS OBTAINED The results obtained from simulation analysis for average flowtime and average work-in-process inventory are summarized in Tables 9 and 10, respectively. The results are based on 100 replications. The statistical analysis was conducted using the statistical functions available in Excel. A t-test assuming unequal variances for two samples was conducted for a 95% confidence interval for each family under each system. Table 11 displays the comparison for each family with respect to flowtimes and work-in-process between connected and disconnected systems. Table 12 displays Table 9. Average flowtime results for all cases Cases and Options

Connected Cells Configuration F1

Disconnected Cells Configuration

F2

F3

F1

F2

F3

C1

42.66

50.52

87.53

31.19

54.39

71.61

C2-02

31.08

45.98

66.61

32.55

51.62

73.79

C2-03

24.91

39.84

67.06

27.24

46.93

83.48

C2-04

41.26

51.15

78.25

35.14

49.66

79.49

C2-06

Same as C2-03

31.88

51.17

73.80

C2-07

Same as C2-03

70.67

45.91

78.06

674


Table 10. Average work-in-process results for all cases

                Connected Cells Configuration        Disconnected Cells Configuration
Cases/Options   F1        F2         F3              F1        F2         F3
C1              128.59    1403.77    1381.40         100.15    1622.52    1182.29
C2-O2           90.00     1184.19    1052.06         99.70     1563.94    1267.13
C2-O3           70.67     1046.90    1246.10         86.36     1425.42    1442.67
C2-O4           126.10    1425.71    1269.42         111.27    1667.29    1409.31
C2-O6           same as C2-O3                        97.46     1555.34    1273.61
C2-O7           same as C2-O3                        80.34     1380.77    1532.79

Table 11. Connected vs. disconnected configuration for each family

                Flowtime                     WIP
Cases/Options   F1       F2       F3         F1       F2       F3
C1              S (D)    NS       NS         S (D)    S (C)    NS
C2-O2           NS       S (C)    NS         S (C)    S (C)    S (C)
C2-O3           S (C)    S (C)    S (C)      S (C)    S (C)    NS
C2-O4           NS       NS       NS         NS       S (C)    NS
C2-O6           S (C)    S (C)    NS         S (C)    S (C)    NS
C2-O7           S (C)    S (C)    NS         S (C)    S (C)    NS

Table 12. Comparison between connected systems

                   Flowtime                     WIP
Cases/Options      F1       F2       F3         F1       F2       F3
O2 vs O3           S (O2)   S (O2)   NS         S (O2)   S (O2)   NS
O2 vs O4           S (O2)   NS       NS         S (O2)   S (O2)   NS
O3 vs O4           S (O3)   S (O3)   NS         S (O3)   S (O3)   NS
C1 vs O2           S (O2)   NS       NS         S (O2)   S (O2)   NS
C1 vs O3, O6, O7   S (O3)   S (O3)   NS         S (O3)   S (O3)   NS
C1 vs O4           NS       NS       NS         NS       NS       NS

system. For option 4, the WIP for family 2 in the connected system was the only significant result. From Table 12, it can be observed that option 2 (O2) provided the best results, with lower flowtimes and WIP, when compared to the rest of the options within the connected system, followed by option 3 (O3). From Table 13, it can be observed that the flowtimes and WIP for options 3 and 7 (O3, O7)

were consistently and significantly better when compared to the rest of the options in the disconnected cells configuration. Also, when these two options were compared against each other, no significant difference was observed for any of the families or performance measures. A comparison between models C1 and O2 did not yield any significant results either; both were clearly


Table 13. Summary table of results for disconnected system: cases 1 and 2

                Flowtime                     WIP
Cases/Options   F1       F2       F3         F1       F2       F3
O2 vs O3        S (O3)   S (O3)   NS         S (O3)   S (O3)   NS
O2 vs O4        S (O4)   S (O4)   NS         S (O4)   S (O4)   NS
O2 vs O6        S (O6)   S (O6)   NS         S (O6)   S (O6)   NS
O2 vs O7        S (O7)   S (O7)   NS         S (O7)   S (O7)   NS
O3 vs O4        S (O3)   NS       NS         S (O3)   S (O3)   NS
O3 vs O6        S (O3)   S (O3)   NS         S (O3)   S (O3)   NS
O3 vs O7        NS       NS       NS         NS       NS       NS
O4 vs O6        NS       NS       NS         S (O6)   NS       NS
O4 vs O7        S (O7)   S (O7)   NS         S (O7)   S (O7)   NS
C1 vs O2        NS       NS       NS         NS       NS       NS
C1 vs O3        S (O3)   S (O3)   NS         S (O3)   S (O3)   S (C1)
C1 vs O4        NS       NS       NS         NS       NS       NS
C1 vs O6        NS       NS       NS         NS       NS       NS
C1 vs O7        S (O7)   S (O7)   NS         S (O7)   S (O7)   NS

inferior in performance when compared with the rest of the options.
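For reference, the statistical machinery behind the S/NS entries in Tables 11-13 — a two-sample t-test with unequal variances (Welch's test) at α = 0.05 over 100 replications — can be sketched as follows, here with synthetic placeholder data rather than the study's actual replication outputs:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Placeholder stand-ins for 100 replication averages of family-1 flowtime
# under each configuration (not the study's actual outputs).
connected = rng.normal(42.7, 4.0, size=100)
disconnected = rng.normal(31.2, 4.0, size=100)

# Welch's t-test: two samples, unequal variances, alpha = 0.05.
t_stat, p_value = stats.ttest_ind(connected, disconnected, equal_var=False)
if p_value < 0.05:
    better = "D" if disconnected.mean() < connected.mean() else "C"
    print(f"S ({better}), p = {p_value:.4f}")   # significant; lower mean wins
else:
    print(f"NS, p = {p_value:.4f}")             # not significant
```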

CONCLUSION

In this chapter, the performance of connected and disconnected cellular systems was compared under a make-to-order strategy in a real cellular setting. In the existing system (case 1), it was observed that neither cellular manufacturing design dominated the other, i.e., mixed results were obtained as to which system did better for each family. The flowtime and work-in-process for family 1 were lower for the disconnected system. On the other hand, the WIP for family 2 was lower in the connected system. The other comparisons did not yield any significant results, and hence dominance could not be established in terms of the better cellular system. In case 2, which is basically an extension of case 1, the impact of considering alternate cell routings for each part family was studied for both connected cells and disconnected cells. In most cases, connected cells outperformed disconnected


cells with respect to both average flowtime and WIP, especially for family 1 and family 2. This leads to the conclusion that the connected system is the better system in this situation, since family 1 and family 2 make up 32 of the 36 products and comprise about 85% of the production orders in the system by volume. The average flowtime and WIP conclusions are similar but not identical, i.e., there were instances where flowtime was significantly better but the corresponding WIP was not, and vice versa. If one wanted to choose the best connected cell configuration, that would be option 2. This is possibly due to option 2 having the highest flexibility among all options, as each family could be routed to any of the fabrication and packaging cells. Options 3, 4 and case 1 followed in order of performance, leading to the conclusion that an increase in the routing flexibility of the families resulted in significantly lower flowtimes and WIP. A similar comparison among all options developed for the disconnected system showed that options 3 and 7 performed better than the rest of the options. Option 3 had complete flexibility in


the packaging area but limited flexibility in the fabrication area, and option 7 had limited flexibility in both areas. Limited flexibility, as applicable to these two options, means that each family could go to at least two specified cells. On the other hand, option 2 was the worst performing system among the options for case 2, even though it had the highest flexibility. This can be attributed to the fact that routing decisions are made based on queue sizes only. Family 3 products have the highest processing times, and it is possible that the queues in all cells may contain products from family 3, thus leading to higher lead times for the parts that join those queues. For case 1 and also option 2 from case 2, the disconnected system was modified to delete the extra feeding operation and the batching at the end of the fabrication area. This was done in order to determine the reason why the connected system performed better than the disconnected system in most of the comparisons made. The two modified simulation models were run and the results were statistically analyzed. In case 1, the flowtime for family 1 and the WIP for family 2 were significantly better for the disconnected system. In the original comparison, the WIP and flowtime for family 1 were better in the disconnected system, and the WIP for family 2 was significantly better in the connected system. The rest of the comparisons did not yield any significant results. For option 2, none of the comparisons yielded significant results, as opposed to the original comparison, where the connected system clearly performed better than the disconnected system. From these results it can be concluded that the extra operation and the extra batching increase the average WIP and flowtimes for each of the families and could be responsible for the disconnected system not performing as well as or better than the connected system.

REFERENCES

Agarwal, A., & Sarkis, J. (1998). A review and analysis of comparative performance studies on functional and cellular layouts. Computers & Industrial Engineering, 34(1), 77–89. doi:10.1016/S0360-8352(97)00152-6

Albino, V., & Garavelli, C. A. (1998). Some effects of flexibility and dependability on cellular manufacturing system performance. Computers & Industrial Engineering, 35(3-4), 491–494. doi:10.1016/S0360-8352(98)00141-7

Caprihan, R., & Wadhwa, S. (1997). Impact of routing flexibility on the performance of an FMS – A simulation study. International Journal of Flexible Manufacturing Systems, 9, 273–278. doi:10.1023/A:1007917429815

DeCroix, G. A., & Arreola-Risa, A. (1998). Make-to-order versus make-to-stock in a production inventory system with general production times. IIE Transactions, 30, 705–713. doi:10.1023/A:1007591722985

Federgruen, A., & Katalan, Z. (1999). The impact of adding a make-to-order item to a make-to-stock production system. Management Science, 45(7), 980–994. doi:10.1287/mnsc.45.7.980

Flynn, B. B., & Jacobs, F. R. (1987). A comparison of group technology and process layout using a model of an actual shop. Decision Sciences, 18, 289–293.

Gupta, D., & Benjaafar, S. (2004). Make-to-order, make-to-stock, or delay product differentiation? A common framework for modeling and analysis. IIE Transactions, 36, 529–546. doi:10.1080/07408170490438519


Johnson, J., & Wemmerlov, U. (1996). On the relative performance of functional and cellular layouts – An analysis of the model-based comparative studies literature. Production and Operations Management, 5(4), 309–334. doi:10.1111/j.1937-5956.1996.tb00403.x

Morris, S. J., & Tersine, J. R. (1990). A simulation analysis of factors influencing the attractiveness of group technology cellular layouts. Management Science, 36(12), 1567–1578. doi:10.1287/mnsc.36.12.1567

Nandi, A., & Rogers, P. (2003). Behavior of an order release mechanism in a make-to-order manufacturing system with selected order acceptance. Proceedings of the 2003 Winter Simulation Conference.

Selen, J. W., & Ashayeri, J. (2001). Manufacturing cell performance improvement: A simulation study. Robotics and Computer-Integrated Manufacturing, 17, 169–176. doi:10.1016/S0736-5845(00)00051-X

Shafer, S. M., & Charnes, J. M. (1993). Cellular versus functional layouts under a variety of shop operating conditions. Decision Sciences, 24(3), 665–681. doi:10.1111/j.1540-5915.1993.tb01297.x

Süer, G. A., Huang, J., & Maddisetty, S. (2010). Design of dedicated, shared and remainder cells in a probabilistic environment. International Journal of Production Research, 48(19), 5613–5646. doi:10.1080/00207540903117865

Van Donk, D. P. (2000). Make to stock or make to order: The decoupling point in the food processing industries. International Journal of Production Economics, 69, 297–306. doi:10.1016/S0925-5273(00)00035-9

Yazici, J. H. (2005). Influence of flexibilities on manufacturing cells for faster delivery using simulation. Journal of Manufacturing Technology Management, 16(8), 825–841. doi:10.1108/17410380510627843

This work was previously published in Operations Management Research and Cellular Manufacturing Systems: Innovative Methods and Approaches, edited by Vladimir Modrák and R. Sudhakara Pandian, pp. 37-52, copyright 2012 by Business Science Reference (an imprint of IGI Global).


Chapter 39

AutomatL@bs Consortium:

A Spanish Network of Web-based Labs for Control Engineering Education

Sebastián Dormido, Universidad Nacional de Educación a Distancia, Spain
Héctor Vargas, Pontificia Universidad Católica de Valparaíso, Chile
José Sánchez, Universidad Nacional de Educación a Distancia, Spain

ABSTRACT

This chapter describes the effort of a group of Spanish universities to unify recent work on the use of Web-based technologies in teaching and learning engineering topics. The network was intended to be a space where students and educators could interact and collaborate with each other, as well as a meeting space for the different research groups working on these subjects. The solution adopted in this chapter goes one step beyond the typical scenario of Web-based labs in engineering education (where research groups demonstrate their engineering designs in an isolated fashion) by sharing the experimentation resources provided by the different research groups that participated in this network. Finally, this work highlights the key points of this project and provides some remarks about the future use of Web-based technologies in school environments.

DOI: 10.4018/978-1-4666-1945-6.ch039

INTRODUCTION

The evolution of the Internet has changed the education landscape drastically (Bourne et al. 2005, Rosen 2007). What was once considered distance education is now called online education.

In other words, the method of teaching and learning is based on the use of the Internet to complete educational activities. A specific example of this new teaching model is the Spanish University for Distance Education (UNED). Compared to other Spanish universities, this institution has the largest number of students because distance education allows students to obtain a degree or improve



their professional skills without having to change their lifestyles. UNED is not a unique institution; there are many universities around the world with an online presence, such as the Open University in Colombia, Open Universities in Australia, the Open University in the UK, the Open University of Catalonia, the FernUniversität in Germany, and many more. The existence of these institutions confirms the viability and importance of computer-assisted teaching and learning through the Internet.



Understand the underlying scientific model of the phenomenon that was studied. Become acquainted with the limits of the model (i.e., how does the model accurately reflects real behavior and to what extent it remains a basic approximation). Learn how to manipulate the parameters of the model in order to fine-tune the behavior of the real system. (Dormido 2004)

To achieve these goals, the implementation of an effective Web-based educational environment for any engineering topic should cover three aspects of the technical education: concept, interpretation, and operation. The student should be provided with an opportunity to become an active player in the learning process (Dormido et al. 2005). In this context, the potential for Webbased experimental applications such as virtual laboratories (Valera et al. 2005), remote laboratories (Casini et al. 2004, Brito et al. 2009) and

680

games (Eikaas et al. 2006) as pedagogical support tools in the learning/teaching of control engineering has been presented in many works. In fact, in the last decade, several academic institutions have explored the World Wide Web (WWW) to develop their courses and experimental activities in a distributed context. However, most of these developments have focused only on the technical issues related to building Web-enabled applications for performing practical activities through the Internet (e.g., how to start up remote monitoring of a real device or how to build sophisticated virtual interfaces). At most, these implementations may include a set of Web pages with a list of activities that need to be carried out by the users. Some examples of these implementations are provided in the additional reading section at the end of the chapter. In general, these developments do not take into account the social context of the interactions and the collaboration that is typically generated in traditional hands-on laboratories (Nguyen 2007). Indeed, direct contact with teachers and interactions with classmates are valuable resources that may be reduced or even disappear when hands-on experimental sessions are conducted via Web-based laboratories. New trends in the use of Web-based resources for teaching and learning in the engineering disciplines include the use of Web 2.0 technologies, such as social software, in building virtual representations of face-to-face (f2f for short) laboratories in a networked, distributed environment (Gillet et al., 2009). This objective is grounded in the idea that educational institutions and many workplaces are equipped with tools that connect people, contents and learning activities and can thus transfer information and knowledge. Learning to learn is the new challenge for the new generation of students. In other words, they have to learn to use Web resources to improve their teaching and learning. Commonly, a mix of Web-based technologies and software agents (Salzmann & Gillet 2008) is used to develop remote experimentation systems


For this reason, most remote experimentation systems are custom-made solutions, which means that the selection of software tools and the global system architecture are not simple tasks, given the wide variety of software frameworks that are available. This chapter describes the structure of the remote experimentation system used in this study, which is based on three software tools: Easy Java Simulations (Easy Java 2010), LabVIEW (LabVIEW 2010), and eMersion (eMersion 2010).

BACKGROUND

In a typical scenario for remote experimentation, universities provide the overall infrastructure required for the remote experimentation services offered to students, including a set of didactic setups specially designed for hands-on laboratories, a set of server computers used to interface these processes, and a main server computer providing the complementary Web-based resources necessary to use the remote labs. The system users (clients) can access the experimentation services from any Internet connection. However, developing a complete environment for experimentation services is not an easy task; for this reason, this section presents a systematic approach for developing such systems. Although the development of an application with the previously described features can be structured in multiple ways, we have divided the problem into two levels or "layers." The first layer is the experimentation layer, which includes all the software and hardware components needed to develop the experimental applications for the Web-based virtual or remote laboratories. Because Web-based labs do not supply all the elements needed to provide remote experimentation services, complementary Web-based resources are needed to manage students' learning. Thus, the e-learning layer incorporates the development of the functionalities required to support teaching and learning through the Internet (Vargas et al., 2008).

Layer 1: The Experimentation Layer

The experimentation layer includes the design methodology and the construction of a graphical user interface for clients as well as of the server application. This layer is built on a client-server structure. The first step in the development process is the analysis of the requirements and specifications for implementing the interface. The following subsections include some recommendations for developing this layer.

Requirements and Specifications for the Client

• The software should be multiplatform. For example, Java has the required characteristics for designing this kind of application; the user only needs a Web browser with Java support to access the Web-based lab.
• The protocol used to communicate with the server should rely on low-level protocols for streaming data through the Internet, such as TCP (Transmission Control Protocol) or UDP (User Datagram Protocol). These protocols allow for better control of data packet transmissions in networks.
• The graphical user interface must be simple and intuitive. It should be user-friendly and useful in different environments.
• Either virtual or remote access to the laboratory should be enabled with the same graphical interface. In simulation mode, the state of the system and its associated variables must be updated based on the evolution of a mathematical model of the process. In remote mode, these variables should instead be updated according to the changes of the real plant at the remote location.
• Video feedback should also be included to provide distant users with a sense of presence in the laboratory.
• Event scheduling for programmed faults in the system should be included. The system could thus be used to analyze processes in the presence of noise or disturbance measurements, and its robustness could also be evaluated under anomalous operating situations.
• Finally, it is recommended that users be able to define experiments in an easy manner. For example, a programmed change of a setpoint value could be required to observe the process response at different operating points.







Requirements and Specifications for the Server

From a software design point of view, the server is composed of a set of modules (some of them optional) that are described below:

• A data exchange module: This software remains in a listening state while waiting for remote connections from users. It receives commands and queries from clients and makes these inputs effective over the physical system. The responses are retrieved from the real plant through the instrumentation hardware and sent back to the client. The link between the server and clients would be established via the TCP/IP protocol suite.
• An access management module: This module would manage all of the information related to users and timeslot bookings for the use of the real plants. A database manager would handle the users' reservations and physical resources; each record in this database would correspond to a booking scheduled for a specific time and date.
• An instrumentation module: This module would incorporate all of the hardware needed to connect the physical system with the server.
• A remote visualization module: This module would allow users to examine what is happening with the physical system during its remote manipulation by a client. Video cameras would facilitate this feature. The system must be able to transmit a sense of realism to encourage usage and increase the motivation of users while completing tasks.

Each module in the server has a counterpart on the client side. For example, to read the video stream captured by the server, the client application must implement a software module that retrieves and decodes the video stream from the remote camera and then renders it in the interface.
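As a rough illustration of the data exchange module described above, the following Java sketch shows a server loop that accepts a client connection, reads a small control vector, applies it to the plant, and replies with the current state. Everything in it, including the port number, the two-value message format, and the readSensor()/writeActuator() stubs, is a hypothetical placeholder rather than the actual implementation described in this chapter.

import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class DataExchangeModule {
    public static void main(String[] args) throws Exception {
        // Remain in a listening state, waiting for remote connections
        try (ServerSocket server = new ServerSocket(2055)) {   // example port
            while (true) {
                try (Socket client = server.accept();
                     DataInputStream in = new DataInputStream(client.getInputStream());
                     DataOutputStream out = new DataOutputStream(client.getOutputStream())) {
                    // Read a (hypothetical) control vector sent by the client
                    float setpoint = in.readFloat();
                    float kp = in.readFloat();
                    // Make the inputs effective over the physical system
                    writeActuator(setpoint, kp);
                    // Retrieve the response from the plant and send it back
                    out.writeFloat(readSensor());
                    out.flush();
                }
            }
        }
    }

    private static void writeActuator(float setpoint, float kp) { /* instrumentation stub */ }

    private static float readSensor() { return 0.0f; /* instrumentation stub */ }
}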

Layer 2: The E-Learning Layer

The previous section presented the main features that need to be taken into account when analyzing, designing, and building the experimentation layer. A second key aspect is the development and/or use of a Web-based learning management system (LMS) to support the students' learning process. This platform should organize user access to the available experimentation modules and allow students and teachers to interact and collaborate with one another. The implementation phase would require the following:

• Simplification of the organization of user groups.
• Notification services by email, instant messaging, news, and other methods.
• Documentation such as practical guides, task protocols, instruction manuals, and any other information needed to perform a remote experimentation session autonomously.
• A sequence of activities that students must carry out during an experimental session. There can be two types of tasks: 1) tasks in simulation mode and 2) tasks in remote mode. Tasks in simulation mode must be carried out prior to performing the experiments on the real plant. These tasks should be completed with a graphical user interface that allows students to work in a simulated environment; the objective is to gain adequate insight into the procedures involved in the experiment, thereby reducing the time students spend on activities using the real plant. Remote access should not be allowed if the student has not satisfactorily completed the required tasks in simulation mode; if the student's work is evaluated positively by the teaching staff, then access to remote mode is granted.
• A method for managing students, assessing their work, and uploading reports.
• An automatic booking system to schedule access to the physical resources.

At the end of the development process, the experimentation and e-learning layers have to be integrated to produce the final Web application for virtual and remote labs. This integration requires establishing certain links and channels between the Web modules of both layers. For example, in our particular framework, we made it possible to save data collected from an experimentation applet (experimentation layer) in a shared Web space that was part of the e-learning layer. The data stored in this space could be retrieved later for analysis.

IMPLEMENTATION

The implementation process of the remote experimentation system described in this chapter can be divided into two independent developments that were combined to create the final environment:

• Building hybrid laboratories for pedagogical purposes (the experimentation layer).
• Integrating the hybrid laboratories into a Learning Management System (LMS) to publish resources and provide mechanisms for accessing the real plants (the e-learning layer).

Building Hybrid Laboratories for Pedagogical Purposes

A hybrid laboratory provides remote software simulations and real experiments in a single environment that can be accessed over the Internet. The client/server approach is commonly used for the technical implementation of both features (Callaghan et al. 2006, Zutin et al. 2008). Specifically, when a student conducts an experiment in a virtual manner, he or she works with a mathematical model of the process. When developing the simulated portion of a hybrid laboratory, developers not only need to create a technology that covers all the aspects related to the use of simulations in local mode; the applications must also work well in a distributed environment. The graphical user interface, for instance, could be a pure HTML/JavaScript application, or it could require a plug-in such as Flash, Java, or ActiveX that runs in a Web browser. Although one of the most relevant features of Java is the simplicity of the language, creating a graphical simulation in this programming language is not a straightforward task; conceiving relatively complex Web-based applications requires advanced knowledge of object-oriented programming and other features of Java (Esquembre 2005). For this reason, the following subsection presents Easy Java Simulations (EJS), a software tool that was used to create the client interfaces for the hybrid laboratories described in this chapter.



EJS as a Development Tool for Hybrid Laboratories



EJS is a freeware, open-source tool developed in Java and specially designed for the creation of discrete computer simulations (Christian & Esquembre 2007). EJS was originally designed for users with little programming experience; however, users do need to know in detail the analytical model of the process and the design of the graphical interface. The architecture of EJS is derived from the model-view-controller (MVC) paradigm, a philosophy based on the idea that interactive simulations must include three parts:

• The model, which describes the process under study in terms of (1) variables, which define the different possible states of the process, and (2) the relationships between these variables, which are expressed by computer algorithms.
• The control, which defines certain actions that a user can perform on the simulation.
• The view, which shows a graphical representation (either realistic or schematic) of the process states.

Figure 1. MVC paradigm abstraction of EJS

EJS makes programming simple by eliminating the control element of the MVC paradigm, fusing one part of it into the view and the other part into the model, as shown in Figure 1. Thus, applications can be created in two steps: (1) defining the model to simulate with the built-in simulation mechanism of EJS and (2) building the view, which shows the model state and incorporates the changes made by users. Figure 1 shows a simple virtual lab created by EJS for teaching basic control concepts based on the well-known single-tank process.

Although EJS was initially conceived as a software tool to create interactive simulations for teaching physics, it has been successfully applied to many other areas, including physical systems, mechanical systems, control systems (as in this chapter), and medical systems. Thus, EJS can be classified as a general-purpose tool intended to create interactive simulations of scientific phenomena based on models. Finally, one of the most important features of EJS is that its applications can be easily distributed through the Internet in applet form. Applets are Java programs that can be executed in the context of a Web browser, in a way similar to Flash or HTML/JavaScript applications. More information about the EJS mechanism, the creation of simulations, and additional features can be found on the EJS homepage (http://www.um.es/fem/EjsWiki/).
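To make the "model" part of this decomposition concrete, the following sketch shows the kind of model an EJS virtual lab wraps for the single-tank process mentioned above: a level variable advanced by a simple Euler step. The mass-balance form and all numerical values are illustrative assumptions, not parameters of the actual laboratory.

// Minimal single-tank model (hypothetical parameters):
// A * dh/dt = qin - k * sqrt(h)
public class SingleTankModel {
    double h = 0.2;            // liquid level (m), the state variable
    double qin = 0.001;        // input flow (m^3/s), set by the user or controller
    final double A = 0.01;     // tank cross-section (m^2), assumed value
    final double k = 0.0005;   // outflow coefficient, assumed value

    // One Euler integration step; EJS would call this from its evolution page
    void step(double dt) {
        h += dt * (qin - k * Math.sqrt(Math.max(h, 0))) / A;
    }

    public static void main(String[] args) {
        SingleTankModel tank = new SingleTankModel();
        for (int i = 0; i < 100; i++) {
            tank.step(0.1);    // simulate 10 s in 0.1 s steps
        }
        System.out.println("Level after 10 s: " + tank.h);
    }
}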

LabVIEW Server for Developing Hybrid Laboratories

Carrying out the remote operation of any physical device is challenging given the number of technical aspects that must be solved. In most cases, technical issues such as performance, interaction level, visual feedback, real-time control, user perception, safety, and fault tolerance require extensive research (Salzmann & Gillet 2002). In general, remote experimentation through the Internet requires an awareness of the current state of the distant real plant, the ability to change the value of any input parameter of the remote system, and the ability to perceive the effect of this change with a minimal transmission delay.

Figure 2 shows the process by which a client application maintains a connection with a remote server to control a real plant remotely. The server side sends a continuous flow of information, represented by the information blocks s(k), which reflect the current state of the plant, and receives information blocks c(k) containing the changes in system input parameters made by a remote user. The client side receives the state information sent by the server (contained in the s(k) blocks) while simultaneously waiting for user interactions, which it reports to the server side in new information blocks c(k).

Figure 2. Stream of information between client and server

Regarding software solutions for the server side, there are many options for programming the real-time control loop (Matlab/Simulink, C++, Scicos, etc.). In this chapter, we propose a working scheme for LabVIEW developers. The set of tasks that should be executed in the LabVIEW server to enable remote experimentation is described below:





• Control task: This loop involves the execution of three sub-tasks: 1) recover the control parameters from the communication task, 2) acquire data and perform closed-loop control, and 3) transmit the system state to the communication task.
• Video task: This loop involves the execution of two sub-tasks: 1) acquire images from the video camera and 2) transmit the images to the communication task.
• Communication task: This loop involves the execution of three sub-tasks: 1) receive control data from the clients and write the data to the control task, 2) read the system state from the control task and the images from the video task, and 3) link the state and images and then send them to the client.
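Although the server described here is implemented as LabVIEW block diagrams, the structure of the time-critical control task can be sketched in plain Java for readers who do not use LabVIEW. The 20 ms period and the queue-based exchange of control and state vectors match the description of Figure 3 given below; the queue sizes, the PID form, and the I/O stubs are hypothetical.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Sketch of the control task: a fixed-rate loop that recovers the latest
// control vector, performs closed-loop PID control, and publishes the state.
public class ControlTask implements Runnable {
    // Bounded queues playing the role of the RT FIFO blocks (assumed sizes)
    final BlockingQueue<float[]> controlQueue = new ArrayBlockingQueue<>(8);
    final BlockingQueue<float[]> stateQueue = new ArrayBlockingQueue<>(8);

    public void run() {
        double setpoint = 0, kp = 1, ti = 10, td = 0;   // defaults, hypothetical
        double integral = 0, previousError = 0;
        final double dt = 0.020;                        // 20 ms sampling period
        while (!Thread.currentThread().isInterrupted()) {
            float[] c = controlQueue.poll();            // newest control vector, if any
            if (c != null) { setpoint = c[0]; kp = c[1]; ti = c[2]; td = c[3]; }

            double y = readAnalogInput();               // sensor measurement (stub)
            double e = setpoint - y;
            integral += e * dt;
            double derivative = (e - previousError) / dt;
            previousError = e;
            double u = kp * (e + integral / ti + td * derivative);  // textbook PID
            writeAnalogOutput(u);                       // command to the actuator (stub)

            stateQueue.offer(new float[] {(float) y, (float) u});   // state vector
            try { Thread.sleep(20); } catch (InterruptedException ex) { return; }
        }
    }

    double readAnalogInput() { return 0; }              // instrumentation stub
    void writeAnalogOutput(double u) { }                // instrumentation stub
}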

Figure 3 shows a LabVIEW block diagram that corresponds to this communication architecture. The three loops in the diagram run concurrently to perform the main tasks: control, communication, and video acquisition.

Figure 3. Three loops running concurrently in the LabVIEW server

The control task is a time-critical activity running at a sampling period of 20 ms with a higher priority than the other two threads. The Analog Input Block reads the analog input signal from the sensor; its output is compared to the setpoint input of the PID Block, and the result is fed into the Analog Output Block. The resulting value is then sent to the actuator, which completes the control task. The data structure composed of the setpoint value, the PID control parameters, the command to the actuator, and other variables is known as the control vector. This vector is sent from the communication task to the control task through RT FIFO queue blocks (an RT FIFO acts as a fixed-size queue, so writing data to it does not overwrite previous elements). These variables are produced when users interact with the client interface. The data array formed by the values sent to the actuator, the measurement from the sensor, the current time, and other variables is known as the state vector; these values are transferred from the control task to the communication task, again through RT FIFO queue blocks.

The video task is a non-time-critical activity because the loss of some video frames is generally acceptable to the user. For most applications, sending five images per second is enough to obtain adequate visual feedback of the remote system (Salzmann et al. 1999). The communication task concatenates the current measurements (state vector) and the video frame into a new vector, which is sent to the client using a TCP Write Block. In parallel, the control vector is received from the clients through the TCP Read Block and passed to the control task through RT FIFO queues. The TCP protocol is used in both implementations because it guarantees packet delivery and bandwidth adaptation, although at the cost of extra transmission delays (Lim 2006). A possible alternative would be the UDP protocol, which provides better control of the transmission delay; however, UDP guarantees neither packet delivery nor bandwidth adaptation, so the designer is responsible for implementing these features.

Once the server side is completed, the EJS application on the client side must be modified to exchange information with the LabVIEW server. In other words, the virtual lab must be transformed into a remote lab by receiving data from the real system instead of the simulated one. The steps that allow a virtual lab to connect to this server architecture are explained next. First, a set of Java methods was programmed in EJS to control the connection with the LabVIEW server. Table 1 shows an example of the implementation of the methods connect(), disconnect(), sender(), and receiver(). Specifically, the upper part of Table 1 shows the excerpts of Java code used for establishing and releasing the connection with the server computer. TCP sockets are used to access the network layer; in Java, socket programming creates a socket object and invokes methods on that object.

In the upper-left part of Table 1, the connection is established in Line 4. To create a socket object, the domain name (or IP address) and the service port of the remote server are needed. Then, in Lines 5 and 6, the input/output stream buffers are created. These buffers act as FIFO queues whose filling and emptying depend on possible delays in the network communication. The disconnection from the server is made by invoking the close() method on the socket and on the input/output stream buffers (Lines 5, 6, and 7 in the upper-right part of Table 1). The receiver() and sender() methods, in turn, should be launched on independent Java threads when the connect() method is started. The sender() method is used to report changes in the user view that affect the operation of the remote equipment (for example, a change in a controller parameter). The receiver() method recovers the incoming data sent by the LabVIEW server. As shown in both pieces of code, the format of the exchanged variables must be defined. In the receiver() method, the current time (t), the liquid level (h), and the input flow in automatic mode (qautomatic) are received; these values constitute the states of the system (measurements). In the sender() method, the control mode (m/a), the input flow in manual mode (qman), the PID parameters (Kp, Ti, and Td), and the setpoint value (ref) are sent to the server side whenever a user interaction is detected. These data are rendered to the client with an EJS view. The interface contains the same graphical elements as the virtual lab, but the dynamic behavior of the elements is now updated using the measurements obtained from the server. Once the methods have been added to the EJS client, the programming logic that discriminates between working in simulation mode and working in remote mode has to be created. This logic determines whether the variables of the hybrid lab are updated based on the evolution of the mathematical model (virtual lab) or based on the real measurements obtained from the server when the remote working mode is active (remote lab).
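The simulation/remote discrimination logic can be summarized in a few lines of Java. This is only a sketch of the idea with assumed names (remoteMode, the model constants, and the receiver() update of Table 1 below); it is not the actual EJS code of the laboratories.

// Sketch of the evolution step of a hybrid lab (hypothetical names)
public class HybridLabLogic {
    boolean remoteMode = false, connected = false;
    double time = 0, level = 0.2, qin = 0.001;
    final double A = 0.01, k = 0.0005;   // assumed single-tank constants

    void receiver() { /* reads time and level from the server, as in Table 1 */ }

    void evolutionStep(double dt) {
        if (remoteMode && connected) {
            receiver();   // remote lab: the state comes from the real plant
        } else {
            // virtual lab: the state comes from the mathematical model
            level += dt * (qin - k * Math.sqrt(Math.max(level, 0))) / A;
            time += dt;
        }
        // the EJS view is bound to 'time' and 'level' and refreshes in both modes
    }
}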



Table 1. Excerpt of Java code to communicate with the LabVIEW server from the EJS client

Connect with the server

1  public boolean connect(){
2    connected = false;
3    try{
4      javaSocket = new Socket("onetank.dia.uned.es", 2055);
5      in = new DataInputStream(javaSocket.getInputStream());
6      out = new DataOutputStream(javaSocket.getOutputStream());
7      if (javaSocket != null) {   // if connected...
8        connected = true;         // ...the connection is ok
9        _play();                  // start executing the evolution
10     }
11   } catch (java.io.IOException io) {
12     System.out.println("Problems connecting to host.");
13   }
14   return connected;
15 }

Disconnect from the server

1  public void disconnect(){
2    if (connected) {
3      if (javaSocket != null){
4        try {
5          in.close();          // close input stream
6          out.close();         // close output stream
7          javaSocket.close();  // close connection
8          javaSocket = null;
9          in = null;
10         out = null;
11         connected = false;
12       } catch (java.io.IOException e){
13         System.out.println("Close socket error.");
14       }
15     }
16   }
17 }

Receive data from the server

1  public void receiver(){
2    if (connected) {
3      try {
4        time = in.readFloat();        // read time from server
5        level = in.readFloat();       // read level from server
6        qautomatic = in.readFloat();  // read input flow from server
7      } catch (java.io.IOException e) {
8        System.out.println("Error receiving data.");
9      }
10   }
11 }

Send data to the server

1  public void sender(){
2    if (connected) {
3      try {
4        out.writeBoolean(ma);    // write control mode (the manual/automatic "m/a" flag)
5        out.writeFloat(qman);    // write input flow in manual mode
6        out.writeFloat(Kp);      // write proportional gain
7        out.writeFloat(Ti);      // write integral time
8        out.writeFloat(Td);      // write derivative time
9        out.writeFloat(ref);     // write setpoint
10       out.flush();             // flush data to the server
11     } catch (java.io.IOException e) {
12       System.out.println("Error sending data.");
13     }
14   }
15 }
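Since the receiver() method must run on its own thread, the following fragment sketches how the EJS client might launch it after a successful connect(). The class and field names are hypothetical; only the threading pattern is the point.

public class ClientThreads {
    volatile boolean connected = true;

    void receiver() { /* reads one state vector, as in Table 1 */ }

    void startReceiver() {
        Thread t = new Thread(() -> {
            while (connected) {
                receiver();   // blocks until the next state vector arrives
            }
        });
        t.setDaemon(true);    // do not keep the client alive on exit
        t.start();
    }
}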

Integrating Hybrid Laboratories into a Learning Management System

Remote and virtual control laboratories alone do not provide all the resources necessary to teach students in a distributed scenario. This section describes the Web infrastructure used to support the students' learning process in such a scenario. eMersion (Gillet et al. 2005) is the LMS tool we chose for publishing the virtual and remote laboratories on the Internet. This environment was implemented by emulating the social behavior of the interactions and collaborations that exist in an f2f laboratory.


eMersion Description

Figure 4 shows a complete view of the eMersion experimentation environment during a practical session with a DC motor system in remote mode. From a structural point of view, the environment is composed of five independent Web applications: the navigation bar, the eJournal, the experimentation console, the online information, and the external applications. The navigation bar provides access to the other Web resources of the environment. From the link labeled "Access Protocol," users can obtain a complete user's guide for the environment.


Figure 4. Learning Management System to publish web-based labs

The eJournal resource provides a shared workspace for users to communicate and collaborate during the learning process. The eJournal allows students to save, retrieve, and share their experimental results and documents. Furthermore, the presentation of results and discussions with teaching staff can be performed using the options that are provided. The users can also organize the information collected during the experimental sessions and through online repositories. Work tracking and awareness can be implemented based on this information. The experimentation console corresponds to the EJS interfaces in which students carry out their experimental activities. These interfaces can interchange data with the eJournal space (see Figure 4). Thus, students can use the results they obtained (through images of the system’s evolution or data registers) in the experimentation sessions to prepare their reports for the final assessment. Online information is a collection of HTML pages and PDF files that allow students to visualize all the documentation necessary to solve the laboratory assignments.

Finally, eMersion offers the ability to integrate external Web applications. In this context, the following subsection describes the automatic booking and authentication system developed to organize students' access to the physical resources. This application was successfully integrated into eMersion.

A Flexible Scheme for Authentication and Booking of Physical Resources

A flexible scheme that lets students book the use of a physical resource located in the laboratory was added to the LMS. Essentially, students can fill a reservations database automatically from the client through a Web interface. The system includes three main modules. For the first module, a Java applet was developed to perform new bookings on the client side (see the Client applet for bookings in Figure 5). For the second module, a centralized server application was developed in Java to manage reservations, synchronization, and communications between the client applet for bookings and the Lab-Server (see the Bookings Main Server in Figure 5). Finally, an additional Java module located in the Lab-Server was developed (see the Java Interface in Figure 5). This module informs the Bookings Main Server of the current state of the Lab-Server and of other parameters that the central server requires.

Figure 5. A flexible scheme for bookings and the authentication process

The full process for booking a physical resource in the laboratory is divided into two stages, which are described below (see Figure 5).

Reservation Phase: The flow of states during a reservation is as follows:

• The Applet for bookings requests a new reservation (step 1).
• The Bookings Main Server takes the request and saves it in a local database (DB) (step 2).
• The Bookings Main Server asks the Java Interface for its time zone (step 3).
• The Java Interface provides its time zone to the Bookings Main Server (step 3).
• The Bookings Main Server calculates the time lag and amends the timeslot.
• The Bookings Main Server reports the new record to the Java Interface (step 4).
• The Java Interface receives the record and inserts it in the Lab-Server DB (step 5).
• The Bookings Main Server tells the client that the new reservation has been made (step 1).

Authentication Phase: The flow of states during authentication is as follows:

• The Applet for experimentation starts the process by sending the user credentials (step 6).
• The Identity Checking Module receives the keys and checks whether the user exists in the local DB. If the user exists, it then checks whether the connection attempt falls between the start time and the end time of the reserved timeslot (step 7).
• The Identity Checking Module sends the result of the check to the Applet for experimentation (step 6).
• If the result of the check is positive, the Applet for experimentation is granted free access to the Target Plant (steps 6 and 8).

The Applet for bookings, in which students schedule their reservations for any experiment, is shown in Figure 4. When a student requests a reservation, the response of the booking system must indicate the date and time assigned for the student to use the remote plant.

Other booking and authentication systems with similar features can be found in the literature. One of the most relevant is the booking and authentication mechanism of the iLabs Shared System (http://icampus.mit.edu/ilabs/), which was developed at the Massachusetts Institute of Technology (MIT). This system employs a middle-tier Service Broker that manages the interaction between users and Lab-Servers. In this architecture, all of the reservations are hosted in the Service Broker, and users must pass through it each time they want to work with the real plants. Unlike the iLabs system, our system offers some additional features. When a user performs a new booking, the reservation is hosted and managed in a middle tier, similar to the iLabs Shared System, but the reservation is also stored in a simple database located in each Lab-Server. Thus, the subsequent authentication process is carried out directly with the Lab-Server, bypassing the middle tier. Another advantage of this architecture is that if a Lab-Server holding valid bookings is damaged, these bookings can later be retrieved by the Lab-Server from the Central Server. Finally, the administrators of a Lab-Server can also manage the bookings locally, so that if there were problems with the central servers, bookings could still be made manually.
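As an illustration of the authentication phase just described, the following Java sketch checks a user's credentials against a local bookings database and verifies that the connection attempt falls inside the reserved timeslot. The JDBC URL, table name, and column names are hypothetical; they do not reflect the actual schema of the AutomatL@bs Lab-Servers.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Timestamp;

public class IdentityCheck {
    // Returns true only if the user exists and "now" is inside a booked slot
    static boolean authorize(String user, String passwordHash) throws Exception {
        // Hypothetical local database of the Lab-Server
        try (Connection db = DriverManager.getConnection("jdbc:sqlite:bookings.db");
             PreparedStatement st = db.prepareStatement(
                 "SELECT start_time, end_time FROM bookings " +
                 "WHERE user_name = ? AND password_hash = ?")) {
            st.setString(1, user);
            st.setString(2, passwordHash);
            try (ResultSet rs = st.executeQuery()) {
                Timestamp now = new Timestamp(System.currentTimeMillis());
                while (rs.next()) {
                    Timestamp start = rs.getTimestamp("start_time");
                    Timestamp end = rs.getTimestamp("end_time");
                    if (now.after(start) && now.before(end)) {
                        return true;   // valid user inside a reserved timeslot
                    }
                }
            }
        }
        return false;   // unknown user or no active reservation
    }
}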

AUTOMATL@BS NETWORK

The AutomatL@bs network (http://lab.dia.uned.es/automatlab) is a consortium of seven Spanish universities that decided to expand their efforts in the use of virtual and remote laboratories for engineering education to a national level. The universities taking part in this project are UNED, the University of Almería (UAL), the University of Alicante (UA), the Polytechnic University of Valencia (UPV), the Polytechnic University of Catalonia (UPC), Miguel Hernández University (UMH), and the University of León (UNILEON). The main challenge of this work has been to manage and coordinate the integration of the hardware, software, and human resources in a Web-based experimentation environment hosted by the Department of Computer Science and Automatic Control of UNED in Madrid. The main aims of this project were:

• Enabling students to access practical experiments that are not available at their universities.
• Increasing the quality and robustness of the network of virtual and remote laboratories for a higher number of students and teachers with different teaching concerns.

The Web-based laboratories were offered to a total of 112 master's degree candidates at engineering schools in the consortium. Figure 6 shows the GUIs of the virtual and remote laboratories of each participating university. Each GUI has the same arrangement of graphical elements, which provides a uniform structure. The Web-based laboratories of the AutomatL@bs network were also documented following the same guidelines and criteria. In general, the documentation defines a set of tasks or activities that students should carry out so that they can be evaluated effectively by the teaching staff. This sequence of activities was divided into two phases: PRE-Lab activities and Lab activities. PRE-Lab activities are based on the use of the experimentation console in simulation mode; in this way, the teaching team can be assured that each student has prior knowledge of the system before using it for an actual experiment. Lab tasks are based on the use of the experimentation console in remote mode. Remote access to the system is granted by the teaching staff once students finish their PRE-Lab work in simulation mode and the work is considered satisfactory.



Figure 6. The available remote systems in AutomatL@bs

Description of the Pedagogical Scenario

Students from each university worked on three of the nine available remote systems offered by the AutomatL@bs project (three hosted at UNED and six at the other universities): one lab from their own university and two labs from other locations. Students were first required to use the system to learn how to operate the interfaces. Then, after several sessions, the students could complete their work with the Web-based AutomatL@bs experimentation system at their convenience through an outside Internet connection.

UNED students were first offered the chance to access the systems available at UNED (a servo-motor system, a heat-flow system, and a three-tank system). Later on, they could complete their work remotely through the Internet. During these experimental sessions, students were able to save their data measurements and parameters for writing their final reports. The students placed their reports in the eJournal space for evaluation, and teaching assistants from each university were in charge of the evaluation.

Outcomes

To obtain feedback regarding the use of the system, the students were required to complete evaluation questionnaires. We designed the questions based on the guidelines of Ma and Nickerson (2006) to evaluate the infrastructure and the technical quality of the system as well as its educational value and the experiences of students. Some of the more relevant questions are listed below.

Technical questions:

1. How would you describe the quality of the virtual laboratories?
2. How would you describe the quality of the remote laboratories?
3. Have you experienced hardware or software problems?
4. Did you appreciate the uniform structure of the client interfaces?
5. How was the navigation experience for the global system options?

Educational value questions:

1. How would you evaluate the quality of your learning with Web-based laboratories compared to traditional methods?
2. How would you describe your learning speed using remote and virtual labs compared to traditional methods?
3. In general terms, are you satisfied with the usability of the system?
4. What were the most important learning resources when you were learning to use the system?
5. How would you evaluate the level of difficulty of using the system?

Although this assessment was not an exhaustive evaluation, it provided initial information on what was necessary, or unnecessary, to include in this methodology for future engineering courses. The outcomes obtained from this survey are summarized in Table 2.

Sub-scale Number 1 provides a first general view of whether students felt satisfied with this new method for performing their practical experiments. The results showed that 19% and 69% of students strongly agreed and agreed, respectively, that they were satisfied with the system. Questions about the advantages of using remote experiments in the educational process were also reported; the results showed that the use of new technologies, especially the Internet, encouraged students to conduct most of their practical exercises using this resource. Sub-scale Numbers 2 and 3 show comparative information about learning with the new technological methods versus traditional methods. In the cases where students reported dissatisfaction (9%), the primary reason was that they were not able to work directly with the laboratory equipment. One way to address this problem could be an educational methodology based on blended learning: first, a face-to-face class would be held in which students could interact and experiment in situ with the real plant; the students would then be allowed to access the experimental environment remotely to complete their practical exercises. Regarding the quality of the hybrid laboratories (Sub-scale Numbers 4 and 5), most students positively evaluated their development in terms of user functionality.

Table 2. Summary of the survey outcomes

Sub-scale                                      A%   B%   C%   D%   E%
1. Satisfaction degree                         19   69    7    5    0
2. Learning compared to traditional methods    15   51   25    8    1
3. Facility of using the system                19   62   11    8    0
4. Quality of virtual labs                     33   48   15    4    0
5. Quality of remote labs                      25   38   25   10    2
6. Most important learning resource            18   44   27   11    0

Scales: (1) A: Strongly Agree, B: Agree, C: Neutral, D: Disagree, E: Strongly Disagree. (2) A: Much better, B: Better, C: Equal, D: Less, E: Much less. (3) A: Strongly Agree, B: Agree, C: Neutral, D: Disagree, E: Strongly Disagree. (4) A: Very good, B: Good, C: Acceptable, D: Bad, E: Very bad. (5) A: Very good, B: Good, C: Acceptable, D: Bad, E: Very bad. (6) A: Documentation, B: Questions to teacher, C: Simulation, D: Connection to plant, E: Others.



Any negative results might have been a consequence of the quality of the Internet connection, because slow speeds lead to delays. Some of the students performed their experiments using old dial-up connections (56 kbps), so the exchange of data with some processes was not fast enough and caused the user interfaces to update slowly. The experiments were also tested with low-speed ADSL lines (512/128 kbps), and the results were satisfactory. Finally, Sub-scale Number 6 shows that queries to the teaching staff and the documentation of the practical exercises were essential resources for positive student performance.

CONCLUSION

Virtual and remote experimentation for engineering education can be considered a mature technology. However, the process of transforming a classic control experiment into an interactive Web-based laboratory is not an easy task. This chapter provides a systematic approach for developing prototypes of remote laboratories from a pedagogical perspective using three tools: EJS, LabVIEW, and eMersion. This approach incorporates the development of online experimentation environments and provides an effective scheme for switching between the simulation and the teleoperation of real systems.

The AutomatL@bs project has yielded benefits to the participating universities over the last three academic years. The results of the evaluation described above allowed us to debug the system and identify necessary improvements in the framework. First, the number and variety of available experiments will be increased by enrolling new universities in the AutomatL@bs project (with special interest in universities from South America); to cope with this challenge, other LMSs, such as Moodle and Sakai, are currently being evaluated. Second, the applets and all the materials are being adapted to the SCORM standards to simplify porting to another LMS. Additionally, the applets of the simulated physical processes are being integrated into the ComPADRE digital library (http://www.compadre.org) to gain visibility. These changes could help integrate and deploy our project in other institutions. We will also attempt to let students carry out their practical experiments using other devices (such as mobile phones and PDAs) and user interfaces (including e-mail, Web forms, and HTML/JavaScript thin interfaces).

REFERENCES

Astrom, K. J. (2006, June). Challenges in control education. Paper presented at the 7th IFAC Symposium on Advances in Control Education (ACE), Madrid, Spain.

Bourne, J., Harris, D., & Mayadas, F. (2005). Online engineering education: Learning anywhere, anytime. International Journal of Engineering Education, 91(1), 131–146.

Brito, N., Ribeiro, P., Soares, F., Monteiro, C., Carvalho, V., & Vasconcelos, R. (2009, November). A remote system for water tank level monitoring and control - a collaborative case-study. Paper presented at the 3rd IEEE International Conference on e-Learning in Industrial Electronics (ICELIE), Porto, Portugal.

Callaghan, M. J., Harkin, J., McGinnity, T. M., & Maguire, L. P. (2006). Client-server architecture for remote experimentation for embedded systems. International Journal of Online Engineering, 2(4), 8–17.

Casini, M., Prattichizzo, D., & Vicino, A. (2004). The automatic control telelab: A web-based technology for distance learning. IEEE Control Systems Magazine, 24(3), 36–44. doi:10.1109/MCS.2004.1299531


Christian, W., & Esquembre, F. (2007). Modeling physics with Easy Java simulations. The Physics Teacher, 45(10), 475–480. doi:10.1119/1.2798358

LABVIEW. (2010). NI LabVIEW homepage. Retrieved November 10, 2010, from http://www.ni.com/labview/

Dormido, S. (2004). Control learning: Present and future. Annual Reviews in Control, 28(1), 115–136. doi:10.1016/j.arcontrol.2003.12.002

Lim, D. (2006). A laboratory course in real-time software for the control of dynamic systems. IEEE Transactions on Education, 49(3), 346–354. doi:10.1109/TE.2006.879243

Dormido, S., Canto, S. D., Canto, R. D., & Sánchez, J. (2005). The role of interactivity in control learning. International Journal of Engineering Education: Special Issue on Control Engineering Education, 21(6), 1122–1133.

Easy Java. (2010). EJS wiki homepage. Retrieved November 10, 2010, from http://www.um.es/fem/EjsWiki/

Eikaas, T. I., Foss, B. A., Solbjorg, O. K., & Bjolseth, T. (2006). Game-based dynamic simulations supporting technical education and training. International Journal of Online Engineering, 2(2), 1–7.

EMERSION. (2010). eMersion project homepage. Retrieved November 10, 2010, from http://lawww.epfl.ch/page28147.html

Esquembre, F. (2005). Creación de simulaciones interactivas en Java. Madrid, Spain: Pearson Prentice Hall.

Gillet, D., El Helou, S., Marie, J., & Rosamund, S. (2009, September). Science 2.0: Supporting a doctoral community of practice in technology enhanced learning using social software. Paper presented at the 4th European Conference on Technology Enhanced Learning (EC-TEL), Nice, France.

Gillet, D., Nguyen, A. V., & Rekik, Y. (2005). Collaborative web-based experimentation in flexible engineering education. IEEE Transactions on Education, 48(4), 696–704. doi:10.1109/TE.2005.852592

Ma, J., & Nickerson, J. V. (2006). Hands-on, simulated, and remote laboratories: A comparative literature review. ACM Computing Surveys, 38(3), 1–24.

Nguyen, A. V. (2007, July). Activity theoretical analysis and design model for web-based experimentation. Paper presented at the 12th International Conference on Human-Computer Interaction, Beijing, China.

Oppenheim, A., Willsky, A., & Hamid, S. (1996). Signals and systems (2nd ed.). Upper Saddle River, NJ: Prentice Hall.

Rosen, M. A. (2007). Future trends in engineering education. In Aung, W. (Eds.), Innovations 2007: World innovations in engineering education and research (pp. 1–11). Arlington, VA: International Network for Engineering Education and Research/Begell House Publishing.

Salzmann, C., & Gillet, D. (2002, July). Real-time interaction over the Internet. Paper presented at the 15th IFAC World Congress, Barcelona, Spain.

Salzmann, C., & Gillet, D. (2008). From online experiments to smart devices. International Journal of Online Engineering, 4(S1), 50–54.

Salzmann, C., Gillet, D., & Huguenin, P. (1999). Introduction to real-time control using LabVIEW with an application to distance learning. International Journal of Engineering Education, 16(3), 255–272.



Valera, A., Diez, J. L., Vallés, M., & Albertos, P. (2005). Virtual and remote control laboratory development. IEEE Control Systems Magazine, 25(1), 35–39. doi:10.1109/MCS.2005.1388798

Vargas, H., Sánchez, J., Duro, N., Dormido, R., Dormido-Canto, S., & Farias, G. (2008). A systematic two-layer approach to develop web-based experimentation environments for control engineering education. Intelligent Automation and Soft Computing, 14(4), 505–524.

Williams, R. (2007). Flexible learning for engineering. In Aung, W. (Eds.), Innovations 2007: World innovations in engineering education and research (pp. 279–290). Arlington, VA: International Network for Engineering Education and Research/Begell House Publishing.

Zutin, D. G., Auer, M. E., Bocanegra, J. F., López, E. R., Martins, A. C. B., Ortega, J. A., & Pester, A. (2008). TCP/IP communication between server and client in multi user remote labs applications. International Journal of Online Engineering, 4(3), 42–45.

ADDITIONAL READING

Abdulwahed, M., & Nagy, Z. K. (2009). Applying Kolb's experiential learning on laboratory education, case study. Journal of Engineering Education, 98(3), 283–294.

Aliane, N., Pastor, R., & Mariscal, G. (2010). Limitations of remote laboratories in control engineering education. International Journal of Online Engineering, 6(1), 31–33.

Christian, W., Esquembre, F., & Mason, B. (2009, September). Easy Java Simulations and the ComPADRE library. Paper presented at the 14th International Workshop on Multimedia in Physics Teaching and Learning (MPTL14), Udine, Italy.


de la Torre, L., Sánchez, J., & Dormido, S. (2009, September). The Fisl@bs portal: A network of virtual and remote laboratories for physics education. Paper presented at the 14th International Workshop on Multimedia in Physics Teaching and Learning (MPTL14), Udine, Italy.

Dormido, R., Vargas, H., Duro, N., Sánchez, J., Dormido-Canto, S., & Farias, G. (2008). Development of a web-based control laboratory for automation technicians: The three-tank system. IEEE Transactions on Education, 51(1), 35–44. doi:10.1109/TE.2007.893356

Duro, N., Dormido, R., Vargas, H., Dormido-Canto, S., Sánchez, J., Farias, G., & Dormido, S. (2008). An integrated virtual and remote control lab: The three-tank system as a case study. Computing in Science & Engineering, 10(4), 50–59. doi:10.1109/MCSE.2008.89

Fakas, G. J., Nguyen, A. V., & Gillet, D. (2005). The electronic laboratory journal: A collaborative and cooperative learning environment for web-based experimentation. Computer Supported Cooperative Work, 14(3), 189–216. doi:10.1007/s10606-005-3272-3

Gillet, D., Nguyen, A. V., & Rekik, Y. (2005). Collaborative web-based experimentation in flexible engineering education. IEEE Transactions on Education, 48(4), 696–704. doi:10.1109/TE.2005.852592

Gomes, L., & Bogosyan, S. (2007). Special section on e-learning and remote laboratories within engineering education - first part. IEEE Transactions on Industrial Electronics, 54(6), 3054–3056. doi:10.1109/TIE.2007.907007

Guzmán, J. L., Vargas, H., Sánchez, J., Berenguel, M., Dormido, S., & Rodríguez, F. (2007). Education research in engineering studies: Interactivity, virtual and remote labs. In Morales, A. V. (Ed.), Distance Education Issues and Challenges (pp. 131–167). Hauppauge, NY: Nova Science Publishers.


ILOUGH-LAB. (2010). The Ilough-lab: A process control lab in the Chemical Engineering Lab at Loughborough University, UK. Retrieved November 10, 2010, from http://www.ilough-lab.com

ISILAB. (2010). ISILab: Internet Shared Instrumentation Laboratory, University of Genoa. Retrieved November 10, 2010, from http://isilab.dibe.unige.it

Tan, K. K., Wang, K. N., & Tan, K. C. (2005). Internet-based resources sharing and leasing system for control engineering research and education. International Journal of Engineering Education, 21(6), 1031–1038.

TELELAB. (2010). Automatic Control Telelab, University of Siena. Retrieved November 10, 2010, from http://act.dii.unisi.it

Jara, C., Esquembre, F., Candelas, F., Torres, F., & Dormido, S. (2009, October). New features of Easy Java Simulations for 3D modelling. Paper presented at the 8th IFAC Symposium on Advances in Control Education (ACE09), Kumamoto, Japan.

Vargas, H., Salzmann, Ch., Gillet, D., & Dormido, S. (2009, October). Remote experimentation mashup. Paper presented at the 8th IFAC Symposium on Advances in Control Education (ACE09), Kumamoto, Japan.

LABSHARE. (2010). LabShare. University of Technology, Sydney. Retrieved November 10, 2010, from http://www.labshare.edu.au.

Vargas, H., Sánchez, J., Salzmann, Ch., Esquembre, F., Gillet, D., & Dormido, S. (2009). Web-enabled remote scientific environments. Computing in Science & Engineering, 11(3), 34–46. doi:10.1109/MCSE.2009.61

Lareki, A., Martínez, J., & Amenabar, N. (2010). Towards an efficient training of university faculty on ICTs. Computers & Education, 54(2), 491–497. doi:10.1016/j.compedu.2009.08.032

Martín, C., Urquía, A., & Dormido, S. (2007). Object-oriented modelling of virtual-laboratories for control education. In Tzafestas, S. G. (Ed.), Web-based Control and Robotics Education (pp. 103–125). Springer Verlag. doi:10.1007/978-90-481-2505-0_5

Nguyen, A. V., Rekik, Y., & Gillet, D. (2006). Iterative design and evaluation of a web-based experimentation environment. In Lambropoulus, N., & Zaphiris, P. (Eds.), User-Centered Design of Online Learning Communities (pp. 286–313). Idea Group. doi:10.4018/978-1-59904-358-6.ch013

NUS. (2010). Internet remote experimentation. National University of Singapore. Retrieved November 10, 2010, from http://vlab.ee.nus.edu.sg/~vlab/

Restivo, M. T., & Silva, M. G. (2009). Portuguese universities sharing remote laboratories. International Journal of Online Engineering, 5(special issue IRF'09), 16–19.

KEY TERMS AND DEFINITIONS

Control Engineering: An engineering discipline that applies control theory to design systems with predictable behaviors. The practice of control engineering uses sensors to measure the output performance of the device being controlled (e.g., a process or a vehicle), and those measurements can be used to provide feedback to the input actuators that make corrections toward the desired performance.

Controller: A device that monitors and affects the operational conditions of a given dynamical system. The operational conditions are typically referred to as output variables of the system, which can be affected by adjusting certain input variables.

Hybrid Laboratories: Web-based laboratories in which students can work with a simulation of a dynamic system (virtual lab) or with its real counterpart (remote lab).

LabVIEW (Laboratory Virtual Instrumentation Engineering Workbench): A graphical programming environment from National Instruments used to develop sophisticated measurement, test, and control systems with intuitive graphical icons and wires that resemble a flowchart.

Model–View–Controller (MVC): An architectural pattern used in software engineering.

PID Controller: A proportional–integral–derivative controller, a generic control loop feedback mechanism widely used in industrial control systems.

Sharable Content Object Reference Model (SCORM): A collection of standards and specifications for Web-based e-learning. It defines the communication between client-side content and a host system called the run-time environment, which is commonly supported by a learning management system.

This work was previously published in Internet Accessible Remote Laboratories: Scalable E-Learning Tools for Engineering and Science Disciplines, edited by Abul K.M. Azad, Michael E. Auer and V. Judson Harward, pp. 206-225, copyright 2012 by Engineering Science Reference (an imprint of IGI Global).



Chapter 40

An Estimation of Distribution Algorithm for Part Cell Formation Problem

Saber Ibrahim, University of Sfax, Tunisia
Bassem Jarboui, University of Sfax, Tunisia
Abdelwaheb Rebaï, University of Sfax, Tunisia

ABSTRACT

The aim of this chapter is to propose a new heuristic for the Machine Part Cell Formation problem, an important step in the design of a Cellular Manufacturing system. The objective is to identify part families and machine groups and, consequently, to form manufacturing cells that minimize the number of exceptional elements and maximize the grouping efficacy. The proposed algorithm is a hybrid that combines a Variable Neighborhood Search heuristic with the Estimation of Distribution Algorithm. Computational results are presented and show that this approach is competitive and even outperforms existing solution procedures proposed in the literature.

INTRODUCTION

The principal objective of Group Technology is to reduce the intercellular flow of parts and to provide an efficient grouping of machines into cells. The main contribution of this chapter is the development of an efficient clustering heuristic based on evolutionary algorithms and its application to the Machine Part Cell Formation Problem, which includes the configuration and capacity management of manufacturing cells. We propose to apply a novel population-based evolutionary algorithm called the Estimation of Distribution Algorithm in order to form part families and machine cells simultaneously.

DOI: 10.4018/978-1-4666-1945-6.ch040

Copyright © 2013, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.


The objective of the proposed heuristic is to minimize the number of exceptional elements and to maximize the goodness of clustering, thus minimizing intercellular movements. In order to guarantee the diversification of solutions, we added an efficient local search technique called Variable Neighborhood Search to the improvement phase of the algorithm. Many researchers have combined local search with evolutionary algorithms to solve this problem; however, the Estimation of Distribution Algorithm has not yet been applied to the general Group Technology problem. Furthermore, we used a modified structure of the probabilistic model within the proposed algorithm. In order to quantify the goodness of the obtained solutions, we present two evaluation criteria, namely the percentage of exceptional elements and the grouping efficacy. A comparative study was carried out against the best-known evolutionary algorithms as well as the well-known clustering methods.

LITERATURE REVIEW

A wide body of publications has appeared on the subject of Group Technology (GT) and Cellular Manufacturing Systems (CMS). The history of approaches to this problem began with classification and coding schemes. Several authors have proposed various ways of classifying the methods for the Cell Formation Problem, including descriptive methods, cluster analysis procedures, graph partitioning approaches, mathematical programming approaches, artificial intelligence approaches, and other analytical methods. Burbidge (1963) was the first to develop a descriptive method for identifying part families and machine groups simultaneously. In his work on Production Flow Analysis (PFA), Burbidge proposed an evaluative technique, inspired by an analysis of the information given on route cards, to find a total division into groups without any need to buy additional machine tools.


Then, researchers applied array-based clustering techniques, which use a binary matrix A called the Part Machine Incidence Matrix (PMIM) as input data. Given i and j, the indexes of parts and machines respectively, an entry aij = 1 means that part i is processed on machine j, whereas an entry of 0 indicates that it is not. The objective of array-based techniques is to find a block-diagonal structure of the initial PMIM by rearranging the order of both rows and columns; the allocation of machines to cells and of parts to the corresponding families is then trivial. McCormick et al. (1972) were the first to apply this type of procedure to the CFP. They developed Bond Energy Analysis (BEA), which seeks to identify and display the natural variable groups and clusters that occur in complex data arrays; their algorithm also seeks to uncover and display the associations and interrelations of these groups with one another. King (1980) developed Rank Order Clustering (ROC). In the ROC algorithm, binary weights are assigned to each row and column of the PMIM, and the process then tries to gather machines and parts by reordering columns and rows according to decreasing weights. Chan and Milner (1981) developed the Direct Clustering Algorithm (DCA), which forms component families and machine groups by progressively restructuring the machine-component matrix; a systematic procedure is used, instead of intuition, to determine what row and column rearrangements are required to achieve the desired result. King & Nakornchai (1982) improved the ROC algorithm by applying a quicker sorting procedure that moves rows or columns having an entry of 1 to the head of the matrix. Chandrasekharan & Rajagopalan (1986a) proposed a modified ROC called MODROC, which takes the cells formed by the ROC algorithm and applies a hierarchical clustering procedure to them. Later, other array-based clustering techniques were proposed, namely the Occupancy Value method of Khator & Irani (1987), the Cluster Identification Algorithm (CIA) of Kusiak & Chow (1987), and the Hamiltonian Path Heuristic of Askin et al. (1991).


McAuley (1972) was the first to apply similarity coefficients to clustering problems. He applied the Single Linkage procedure to the CF problem using the Jaccard coefficient, which is defined, for any pair of machines, as the ratio of the number of parts that visit both machines to the number of parts that visit at least one of them. Other clustering techniques were then developed, namely Single Linkage Clustering (SLC), Complete Linkage Clustering (CLC), Average Linkage Clustering (ALC), and Linear Cell Clustering (LCC). Kusiak (1987) proposed a linear integer programming model maximizing the sum of the similarity coefficients defined between pairs of parts.

The category most used in the literature in recent years is heuristics and metaheuristics. Such heuristics are based essentially on Artificial Intelligence approaches, including Genetic Algorithms (GA), Simulated Annealing (SA), Tabu Search (TS), Evolutionary Algorithms (EA), neural networks, and fuzzy mathematics. In what follows, we present some research papers that used this type of heuristic for designing CM systems. Boctor (1991) developed an SA approach to deal with large-scale problems. Sofianopoulos (1997) proposed a linear integer formulation for the CF problem and employed an SA procedure to improve the solution quality, taking as objective the minimization of the inter-cellular flow between cells. Caux et al. (2000) proposed an approach combining the SA method for the CF problem with a branch-and-bound method for routing selection. Lozano et al. (1999) presented a Tabu Search algorithm that systematically explores feasible machine-cell configurations, determining the corresponding part families using a linear network flow model; they used a weighted sum of intra-cell voids and inter-cellular moves to evaluate the quality of the solutions.

et al. (2003) developed an Ant colony optimization algorithm to solve the inter cell layout problem by modelling it as a quadratic assignment problem. Kaparthi et al. (1993) proposed an algorithm based on neural network for the part machine grouping problem. Xu & Wang (1989) developed two approaches of fuzzy cluster analysis namely fuzzy classification and fuzzy equivalence in order to incorporate the uncertainty in the measurement of similarities between parts. They presented also a dynamic part-family assignment procedure using the methodology of fuzzy pattern recognition to assign new parts to existing part families. Recently many researchers have focused on the approaches based on AI for solving the part-machine grouping problem. Venugopal & Narendran (1992a) proposed a bi-criteria mathematical model with a solution procedure based on a genetic algorithm. Joines et al. (1996) presented an integer programming solved using a Genetic Algorithm to solve the CF problem. Zhao & Wu (2000) presented a genetic algorithm to solve the machine-component grouping problem with multiple objectives: minimizing costs due to intercell and intra-cell part movements; minimizing the total within cell load variation; and minimizing exceptional elements. Gonçalves & Resende (2002) developed a GA based method which incorporates a local search to obtain machine cells and part families. The GA is responsible for generating sets of machines cells and the mission of the local search heuristic is to construct sets of machine part families and to enhance their quality. Then, Gonçalves & Resende (2004) employed a similar algorithm to find first the initial machine cells and then to obtain final clusters by applying the local search. Mahdavi et al. (2009) presented a GA based procedure to deal with the CF problem with nonlinear terms and integer variables. Stawowy (2006) developed a non-specialized Evolutionary Strategy (ES) for CF problem. His algorithm uses a modified permutation with separators encoding scheme and unique concept of separators movements during mutation. Andrés

701

An Estimation of Distribution Algorithm for Part Cell Formation Problem

& Lozano (2006) applied for the first time the Particle Swarm Optimization (PSO) algorithm to solve the CF problem respecting the objective the minimization of inter-cell movements and imposing a maximum cell size.

ESTIMATION OF DISTRIBUTION ALGORITHM

The Estimation of Distribution Algorithm (EDA), first introduced by Mühlenbein & Paaß (1996), belongs to the family of Evolutionary Algorithms. It adopts probabilistic models to reproduce individuals in the next generation, instead of crossover and mutation operations. Algorithms of this type use different techniques to estimate and sample the probability distribution. The probabilistic model is represented by conditional probability distributions for each variable; it is estimated from the information carried by the individuals selected, according to their fitness, from the current generation. This process is repeated until the stop criterion is met. Such a reproduction procedure allows the algorithm to search for optimal solutions efficiently. However, it considerably decreases the diversity of the genetic information in the generated population when the population size is not large enough. For this reason, the incorporation of a local search technique is encouraged in order to enhance the performance of the algorithm. As a result, the Estimation of Distribution Algorithm can reach good solutions by predicting population movements in the search space without needing many parameters. The main steps of this procedure are shown in the following pseudo code:

Estimation of Distribution Algorithm
1. Initialize the population according to some initial distribution model.
2. Form P' individuals from the current population using a selection method.
3. Build a probability model p(x) from the P' individuals, using both the information extracted from the selected individuals in the current population and the previously built model.
4. Sample p(x) by generating new individuals from the probability model and replace some or all individuals in the current population.
5. End the search if stop criteria are met; otherwise return to Step 2.
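To make the loop above concrete, the following minimal Python sketch implements the generic EDA workflow. The fitness, model-estimation and sampling routines are passed in as callables, since the chapter surveys several alternatives for each; all names and parameter values here are illustrative assumptions, not part of the original chapter.

import numpy as np

def eda(fitness, estimate, sample, n_vars,
        pop_size=100, n_select=30, max_iters=200, seed=0):
    """Generic EDA skeleton: select, estimate a model, sample, repeat.

    fitness  : maps a (pop_size, n_vars) array to a score vector (higher is better)
    estimate : builds a probability model from the selected individuals
    sample   : draws pop_size new individuals from the model
    """
    rng = np.random.default_rng(seed)
    pop = rng.integers(0, 2, size=(pop_size, n_vars))   # Step 1: initial population
    for _ in range(max_iters):
        scores = fitness(pop)
        best = pop[np.argsort(scores)[-n_select:]]      # Step 2: truncation selection
        model = estimate(best)                          # Step 3: model estimation
        pop = sample(model, pop_size, rng)              # Step 4: sampling/replacement
    scores = fitness(pop)
    return pop[np.argmax(scores)]                       # best individual found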

Estimation of Distribution Algorithms can be divided into two different classes. The first class assumes that there are no dependencies between the variables of the current solution during the search; these are known as non-dependency Estimation of Distribution Algorithms, such as Population Based Incremental Learning (Baluja, 1994) and the Univariate Marginal Distribution Algorithm (Mühlenbein & Paaß, 1996). The second class takes these variable dependencies into account: Mutual Information Maximization for Input Clustering (De Bonet et al., 1997), the Bivariate Marginal Distribution Algorithm (Pelikan & Mühlenbein, 1999), the Factorized Distribution Algorithm (Mühlenbein et al., 1999) and the Bayesian Optimization Algorithm (Pelikan et al., 1999a). Generally, non-dependency algorithms are expected to have a weaker modelling ability than the ones with variable dependencies (Zhang et al., 2004), but combining heuristic information or local search with non-dependency algorithms can compensate for this disadvantage.

Univariate EDAs

This category assumes that each variable is independent, i.e., the algorithm does not consider any interactions among the variables of the solution. As a result, the probability distribution model p(x) becomes simply the product of the univariate marginal probabilities of all variables in the solution, and is expressed as follows:

p(x) = ∏_{i=1}^{n} p(x_i)

Due to the simplicity of the model of distribution used, the algorithms in this category are computationally inexpensive, and perform well on problems with no significant interaction among variables. In what follows, we present the well-known works related to this category.

Population Based Incremental Learning

Population Based Incremental Learning was proposed by Baluja (1994). The algorithm starts with the initialisation of a probability vector. In each iteration, it updates and samples the probability vector to generate new solutions. The main steps of this procedure are shown in the following pseudo code:

Population Based Incremental Learning
1. Initialise a probability vector p={p1,p2,...,pn} with 0.5 at each position. Here, each pi represents the probability of a 1 at the ith position of the solution.
2. Generate a population P of M solutions by sampling the probabilities in p.
3. Select a set D from P consisting of N promising solutions.
4. Estimate the univariate marginal probabilities p(xi) for each xi.
5. For each i, update pi using pi = pi + λ(p(xi) − pi).
6. For each i, if the mutation condition is passed, mutate pi using pi = pi(1 − μ) + random(0 or 1) × μ.
7. End the search if stop criteria are met; otherwise return to Step 2.

Univariate Marginal Distribution Algorithm

The Univariate Marginal Distribution Algorithm was proposed by Mühlenbein & Paaß (1996). We note that it can be seen as a variant of Population Based Incremental Learning with λ=1 and μ=0. Different variants of the Univariate Marginal Distribution Algorithm have been proposed, and the mathematical analysis of their workflows has been carried out (Mühlenbein, 1998; Mühlenbein et al., 1999; Gonzalez et al., 2002). The main steps of this procedure are shown in the following pseudo code:

Univariate Marginal Distribution Algorithm
1. Generate a population P composed of M solutions.
2. Select a set P' from P consisting of N promising solutions.
3. Estimate the univariate marginal probabilities p(xi) from P' for each xi.
4. Sample p(x) to generate M new individuals and replace P.
5. End the search if stop criteria are met; otherwise return to Step 2.
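A minimal Python sketch of the two univariate schemes just described, for binary solution vectors, follows. The selection size, λ, the two mutation constants and the assumption that fitness is maximised are illustrative choices rather than values prescribed by the chapter.

import numpy as np

def umda_step(pop, fitness, n_select, rng):
    """One UMDA generation: select, estimate marginals, resample."""
    best = pop[np.argsort(fitness(pop))[-n_select:]]
    p = best.mean(axis=0)                         # univariate marginals p(x_i)
    return (rng.random(pop.shape) < p).astype(int)

def pbil_step(p, fitness, pop_size, n_select, rng,
              lam=0.1, mut_prob=0.02, mut_shift=0.05):
    """One PBIL iteration on the probability vector p."""
    pop = (rng.random((pop_size, p.size)) < p).astype(int)
    best = pop[np.argsort(fitness(pop))[-n_select:]]
    p = p + lam * (best.mean(axis=0) - p)         # p_i <- p_i + lambda (p(x_i) - p_i)
    mutate = rng.random(p.size) < mut_prob        # occasional mutation of p itself
    p[mutate] = (p[mutate] * (1 - mut_shift)
                 + rng.integers(0, 2, mutate.sum()) * mut_shift)
    return p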

Bivariate EDAs

In contrast with the univariate case, the probability model contains factors involving the conditional probabilities of pairs of interacting variables. This class of algorithms performs better on problems where pairwise interactions among variables exist. In what follows, we present the well-known works related to this category.

Mutual Information Maximization for Input Clustering

The Mutual Information Maximization for Input Clustering algorithm uses a chain model of the probability distribution (De Bonet et al., 1997), which can be written as:

p(x) = p(x_{π1} | x_{π2}) p(x_{π2} | x_{π3}) ... p(x_{π(n−1)} | x_{πn}) p(x_{πn})

where Π={π1,π2,...,πn} is a permutation of the numbers {1,2,...,n} used as an ordering for the pairwise conditional probabilities. At each iteration, the algorithm first tries to learn the linkage, using a greedy procedure to find a permutation Π; this greedy search does not always yield an accurate model. Once the permutation Π is learnt, the algorithm estimates the pairwise conditional probabilities and samples them to obtain the next set of solutions.
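As an illustration of the chain model, the sketch below builds a permutation with the usual entropy-based greedy heuristic and then samples new binary solutions along the chain of pairwise conditionals. The function names and the empirical-frequency estimators are our own illustrative choices, not the authors' implementation.

import numpy as np

def entropy(col):
    """Empirical entropy of a binary column."""
    p = np.clip(col.mean(), 1e-12, 1 - 1e-12)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

def cond_entropy(a, b):
    """Empirical conditional entropy H(a | b) for binary columns."""
    return sum(np.mean(b == v) * entropy(a[b == v])
               for v in (0, 1) if np.any(b == v))

def mimic_order(D):
    """Greedy permutation: the root has lowest entropy; build the chain backwards."""
    rest = list(range(D.shape[1]))
    perm = [min(rest, key=lambda j: entropy(D[:, j]))]
    rest.remove(perm[0])
    while rest:
        nxt = min(rest, key=lambda j: cond_entropy(D[:, j], D[:, perm[0]]))
        perm.insert(0, nxt)
        rest.remove(nxt)
    return perm          # perm[-1] plays the role of pi_n (the root variable)

def sample_mimic(D, perm, rng):
    """Sample one solution from the chain of pairwise conditionals."""
    x = np.zeros(D.shape[1], dtype=int)
    root = perm[-1]
    x[root] = rng.random() < D[:, root].mean()
    for k in range(len(perm) - 2, -1, -1):
        child, parent = perm[k], perm[k + 1]
        rows = D[:, parent] == x[parent]
        p1 = D[rows, child].mean() if rows.any() else D[:, child].mean()
        x[child] = rng.random() < p1
    return x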

Combining Optimizers with Mutual Information Trees

The Combining Optimizers with Mutual Information Trees algorithm, proposed by Baluja & Davies (1997, 1998), also uses pairwise interactions among variables. The model of distribution used by this algorithm can be written as follows:

p(x) = ∏_{i=1}^{n} p(x_i | x_j)

where x_j is known as the parent of x_i and x_i as a child of x_j. This model is more general than the chain model used by Mutual Information Maximization for Input Clustering, since two or more variables can have a common parent.

Bivariate Marginal Distribution Algorithm

The Bivariate Marginal Distribution Algorithm was proposed by Pelikan & Mühlenbein (1999) as an extension to the Univariate Marginal Distribution Algorithm. Its model of distribution can be seen as an extension to the Combining Optimizers with Mutual Information Trees model, and can be written as follows:

p(x) = ∏_{x_k ∈ Y} p(x_k) × ∏_{x_i ∈ X\Y} p(x_i | x_j)

where Y ⊆ X represents the set of root variables and x_j denotes the parent of x_i. As a result, the Bivariate Marginal Distribution Algorithm is a more generalised algorithm in this class and can cover both univariate and bivariate interactions among variables.

Multivariate EDAs

In this class, the model of the probability distribution becomes more complex than those used by univariate and bivariate Estimation of Distribution Algorithms. Any algorithm considering interactions between variables of order higher than two can be placed in this class. As a result, the complexity of constructing such a model increases exponentially with the order of interaction, making it infeasible to search through all possible models. In what follows, we present the well-known works related to this category.

Extended Compact Genetic Algorithm

The Extended Compact Genetic Algorithm was proposed by Harik (1999) as an extension to the Compact Genetic Algorithm. The model of distribution used in the Extended Compact Genetic Algorithm is distinct from the previously described models in that it considers only marginal probabilities and does not include conditional probabilities. It also assumes that a variable appearing in one set of interacting variables cannot appear in another set. The model of distribution used by the Extended Compact Genetic Algorithm can be written as follows:

p(x) = ∏_{k ∈ m} p(x_k)

where m is the set of disjoint subsets of the n variables and p(x_k) is the marginal probability of the set of variables x_k in subset k.
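Such a marginal product model is straightforward to sample once the partition into groups and their joint frequency tables are known. The sketch below assumes both are given (in practice the ECGA learns the partition with a model-selection search, which is omitted here); all names and the example values are illustrative.

import numpy as np

def sample_mpm(groups, tables, n_vars, rng):
    """Draw one solution from an ECGA-style marginal product model.

    groups : list of tuples of variable indices forming a disjoint partition
    tables : for each group, a dict mapping a value tuple to its probability
    """
    x = np.zeros(n_vars, dtype=int)
    for group, table in zip(groups, tables):
        keys = list(table)
        probs = np.array([table[k] for k in keys], dtype=float)
        chosen = keys[rng.choice(len(keys), p=probs)]
        x[list(group)] = chosen                 # set the whole group jointly
    return x

# Example with two groups over four binary variables (probabilities made up)
rng = np.random.default_rng(0)
groups = [(0, 2), (1, 3)]
tables = [{(0, 0): 0.4, (1, 1): 0.6}, {(0, 1): 0.5, (1, 0): 0.5}]
print(sample_mpm(groups, tables, 4, rng))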

Factorised Distribution Algorithm

The Factorised Distribution Algorithm was proposed by Muhlenbein et al. (1999) as an extension to the Univariate Marginal Distribution Algorithm. The probability p(x), for such linkage, can be expressed in terms of conditional probabilities between sets of interacting variables. In general, the Factorised Distribution Algorithm requires the linkage information in advance, which may not be available in real-world problems.

Bayesian Optimization Algorithm

The Bayesian Optimization Algorithm was proposed by Pelikan et al. (1999a). The probabilistic model p(x) is expressed in terms of a set of conditional probabilities as follows:

p(x) = ∏_{i=1}^{n} p(x_i | π_i)

where π_i is a set of variables having a conditional interaction with x_i. Moreover, no variable in π_i can have x_i or any children of x_i as its parent. An extension to the Bayesian Optimization Algorithm, called the hierarchical Bayesian Optimization Algorithm, has also been proposed by Pelikan & Goldberg (2000). The idea is to improve the efficiency of the algorithm by using a Bayesian network with a local structure (Chickering et al., 1997) to model the distribution, and a restricted tournament replacement strategy based on the work of Harik (1994) to form the new population.

Estimation of Bayesian Network Algorithm

The Estimation of Bayesian Network Algorithm was proposed by Etxeberria & Larranaga (1999) and Larranaga et al. (2000), and also uses Bayesian networks as its model of the probability distribution. The algorithm has been applied to various optimisation problems, such as graph matching (Bengoetxea et al., 2000, 2001b), partial abductive inference in Bayesian networks (de Campos et al., 2001), the job scheduling problem (Lozano et al., 2001b), the rule induction task (Sierra et al., 2001), the travelling salesman problem (Robles et al., 2001), partitional clustering (Roure et al., 2001), and knapsack problems (Sagarna & Larranaga, 2001).

Learning Factorised Distribution Algorithm

The Learning Factorised Distribution Algorithm was proposed by Muhlenbein & Mahnig (1999b) as an extension to the Factorised Distribution Algorithm. The algorithm does not require the linkage in advance: in each iteration, it computes a Bayesian network and samples it to generate new solutions. The main steps of the Bayesian Optimization Algorithm (BOA), the Estimation of Bayesian Network Algorithm (EBNA) and the Learning Factorised Distribution Algorithm (LFDA) are shown in the following pseudo code:

BOA, EBNA and LFDA
1. Generate a population P of M solutions.
2. Select N promising solutions from P.
3. Estimate a Bayesian network from the selected solutions.
4. Sample the Bayesian network to generate M new individuals and replace P.
5. End the search if stop criteria are met; otherwise return to Step 2.


Markov Network Factorised Distribution Algorithm and Markov Network Estimation of Distribution Algorithm

The Markov Network Factorised Distribution Algorithm and the Markov Network Estimation of Distribution Algorithm were proposed by Santana (2003a, 2005). They use a Markov network (Pearl, 1988; Li, 1995) as the model of the distribution p(x). The first algorithm estimates the Markov network with a technique called the junction graph approach, while the second one uses a technique called the Kikuchi approximation.

PROBLEM STATEMENT

Manufacturing Cell Formation consists of grouping, or clustering, machines into cells and parts into families according to their similar processing requirements. The best-known and most efficient idea for achieving the objective of cell formation is to convert the initial Part Machine Incidence Matrix into a matrix that has a block diagonal structure. During this process, entries with a '1' value are grouped to form mutually independent clusters, and those with a '0' value are arranged outside these clusters. Once a block diagonal matrix is obtained, machine cells and part families are clearly visible. However, the process engenders intercellular movements that require extra cost or time, due to the presence of parts that are processed by machines not belonging to their corresponding cluster. These parts are called Exceptional Elements. As a result, the objective of block diagonalization is to transform the original matrix into a form that minimizes Exceptional Elements and maximizes the goodness of clustering.

For the cell formation problem, this matrix can be regarded as a binary matrix A which shows the relationship between any given m machines and p parts. Rows and columns represent machines and parts, respectively. Each element of the matrix is a binary entry aij, where an entry of 1 indicates that the part is processed by the corresponding machine, while an entry of 0 means the contrary. In Figure 1, we illustrate the (5×7) incidence matrix of King & Nakornchai (1982).

Figure 1. King & Nakornchai (1982) initial matrix

Figure 2 provides a block diagonal form of the initial matrix illustrated above. The obtained matrix does not contain any intercellular movement, which means that it represents the optimal solution for the given matrix, with 2 cells and 3 machines per cell.

Figure 2. A block diagonal matrix with no exceptional elements

In this chapter, we deal with two efficient evaluation criteria, namely the Grouping Efficacy (GE) and the Percentage of Exceptional Elements (PE). The Grouping Efficacy, proposed by Kumar & Chandrasekharan (1990), is considered one of the best criteria for distinguishing ill-structured matrices from well-structured ones as the matrix size increases, and it is expressed as follows:

GE = (e − e0(X)) / (e + ev(X))

where:
e: number of 1's in the Part Machine Incidence Matrix,
e0(X): number of Exceptional Elements in the solution X,
ev(X): number of voids in the solution X.

The second evaluation criterion, called the "Percentage of Exceptional Elements (PE)", was developed by Chan & Milner (1982) and is expressed as follows:

PE = (e0(X) / e) × 100.
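Both criteria can be computed directly from the incidence matrix once a grouping is fixed. The short sketch below assumes A is an m×p NumPy array with machines as rows, and that machine cells and part families are given as integer label vectors; the function name and this encoding are illustrative assumptions.

import numpy as np

def grouping_measures(A, machine_cell, part_cell):
    """Grouping Efficacy (GE) and Percentage of Exceptional Elements (PE)."""
    inside = machine_cell[:, None] == part_cell[None, :]  # the diagonal blocks
    e = A.sum()                       # number of 1's in the incidence matrix
    e0 = A[~inside].sum()             # exceptional elements: 1's outside the blocks
    ev = (inside & (A == 0)).sum()    # voids: 0's inside the blocks
    return (e - e0) / (e + ev), 100.0 * e0 / e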

Some other performance measurements can be used to evaluate manufacturing cell design results; in what follows, we present some of them.

The Grouping Efficiency, developed by Chandrasekharan & Rajagopalan (1989), expresses the goodness of the obtained solutions and depends on the utilization of machines within cells and on inter-cell movements. An ideal value indicates that there are no voids and no exceptional elements in the diagonal blocks, which implies a perfect clustering of parts and machines. Although grouping efficiency was widely used in the literature, it has an important limitation, namely its inability to discriminate good-quality groupings from bad ones. Indeed, when the matrix size increases, the effect of 1's in the off-diagonal blocks becomes smaller and, in some cases, the effect of inter-cell moves is not reflected in the grouping efficiency.

The Machine Utilization Index (MUI) is defined as the percentage of the time that the machines within cells are being utilized most effectively, and it is expressed as follows:

MUI = e / Σ_{i=1}^{C} (mi × pi)

where mi indicates the number of machines in cell i and pi indicates the number of parts in cell i.

The Group Technology efficiency is defined as the ratio of the difference between the maximum number of inter-cell travels possible and the number of inter-cell travels actually required by the system, to the maximum number of inter-cell travels possible.


The Group efficiency is defined as the ratio of the difference between the total number of maximum external cells that could be visited and the total number of external cells actually visited by all parts, to the total number of maximum external cells that could be visited. The Global efficiency is defined as the ratio of the total number of operations performed within the suggested cells to the total number of operations in the system.

PROPOSED EDA FOR MPCF PROBLEM (EDA-CF)

Solution Representation and Initial Population

Generally, for a Cell Formation Problem, a solution is represented by an m-dimensional vector X=[x1,x2,...,xm], where xi represents the assignment of machine i to a specific cell. The problem consists of partitioning the set of the m machine assignments into a given number of cells. The created solutions must respect all the constraints defined in Section 3.3. We choose to generate the initial population randomly, following a uniform distribution.
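For instance, with the vector encoding just described, an initial population drawn from a uniform distribution takes one line of NumPy; the sizes used here are illustrative examples only.

import numpy as np

rng = np.random.default_rng(0)
m, C, pop_size = 16, 6, 200          # machines, cells, population size (examples)
# Each row is one solution: position i holds the cell assigned to machine i
population = rng.integers(0, C, size=(pop_size, m))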

Selection

The goal is to allow the best individuals to be selected more often for reproduction. We adopt a truncation selection procedure to create new individuals: in each iteration, we randomly select P1 individuals from the 50% best individuals of the current population. These P1 individuals are then reproduced in the next generation using the probabilistic model to form new individuals.

Probabilistic Model and Creation of New Individuals

After the selection phase, a probabilistic model is applied to the P1 selected individuals in order to generate new individuals. The probabilistic model provides the probability of assigning machine i to cell j, and is expressed in Box 1, where ε>0 is a factor which guarantees that the model never provides a probability Pij=0.

Replacement

The replacement represents the final step in our search procedure. It is based on the following idea: when a new individual is created, we compare it to the worst individual in the current population and we retain the better of the two.

Fitness Function

A fitness function is used for evaluating the aptitude of an individual to be kept, or to be used for reproducing new individuals in the next generation. In the proposed algorithm, we use two fitness functions, F1 and F2, corresponding to the objectives of minimizing the Percentage of Exceptional Elements and maximizing the Grouping Efficacy, respectively. Let mi be the number of machines assigned to cell i. We define F1 and F2 as follows:

F1(X) = e0(X) + Pen(X)

Box 1.

Pij = (number of times machine i appears in cell j + ε) / (number of selected individuals + C × ε)
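Under the assumptions of Box 1, the model is an m×C matrix of assignment probabilities whose rows sum to one. A sketch of its estimation from the selected individuals, and of sampling a new individual from it, follows; the function names are illustrative.

import numpy as np

def estimate_model(selected, C, eps=0.1):
    """P[i, j]: probability of assigning machine i to cell j (Box 1)."""
    n_sel, m = selected.shape
    counts = np.zeros((m, C))
    for sol in selected:                       # count machine-to-cell assignments
        counts[np.arange(m), sol] += 1
    return (counts + eps) / (n_sel + C * eps)  # eps keeps every P[i, j] > 0

def sample_individual(P, rng):
    """Draw each machine's cell from its estimated categorical distribution."""
    return np.array([rng.choice(P.shape[1], p=row) for row in P])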


and

F2(X) = GE(X) − Pen(X)

where

Pen(X) = α1 Σ_{i=1}^{C} max{0, mi − kmax} + α2 Σ_{i=1}^{C} max{0, 1 − mi}

expresses the distance between the solution X and the feasible space. This penalty lowers the fitness of a solution X when X violates the constraints of the problem, i.e., a penalty is incurred either when the number of assigned machines exceeds the capacity of a cell or when machines are assigned to a number of cells that exceeds the fixed number of cells C.
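The evaluation can be sketched as follows. Since the chapter derives part families from the machine cells, we assume here the common convention that each part joins the cell performing most of its operations; the default penalty weights match the parameter values reported later in the comparative study, and all names are illustrative.

import numpy as np

def evaluate(sol, A, C, k_max, a1=50.0, a2=500.0):
    """Return F1 (to minimise) and F2 (to maximise) for a machine assignment."""
    m, p = A.shape
    ops = np.zeros((p, C))                 # ops[j, c]: operations of part j in cell c
    for c in range(C):
        ops[:, c] = A[sol == c, :].sum(axis=0)
    part_cell = ops.argmax(axis=1)         # assumed part-family rule
    e = A.sum()
    e0 = e - ops[np.arange(p), part_cell].sum()        # exceptional elements
    inside = sol[:, None] == part_cell[None, :]
    ev = (inside & (A == 0)).sum()                     # voids
    GE = (e - e0) / (e + ev)
    m_i = np.bincount(sol, minlength=C)                # machines per cell
    pen = (a1 * np.maximum(0, m_i - k_max).sum()       # capacity violations
           + a2 * np.maximum(0, 1 - m_i).sum())        # empty cells
    return e0 + pen, GE - pen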

Variable Neighborhood Search Algorithm

Variable Neighborhood Search is a recent metaheuristic for combinatorial optimization developed by Mladenović & Hansen (1997). The basic idea is to explore different neighborhood structures and to change them within a local search algorithm, identifying better local optima with shaking strategies. The main steps of this procedure are shown in the following pseudo code:

Variable Neighborhood Search
Select the set of neighborhood structures Nk, k={1,2,...,nmax}, that will be used in the search; find an initial solution X; choose a stopping condition. Repeat the following steps until the stopping condition is met:
Set k=1.
Repeat the following steps until all neighborhood structures are used:
1. Shaking: generate a point X' at random from the kth neighborhood of X (X' ∈ Nk(X)).
2. Local Search: apply some local search method with X' as the initial solution; denote by X'' the obtained local optimum.
3. Move or not: if this local optimum X'' is better than the incumbent, or if some acceptance criterion is met, move there (X ← X'') and set k=1; otherwise, set k ← k+1.

Local Search Procedure

Generally, obtaining a local minimum with respect to one neighborhood structure does not imply obtaining a local optimum with respect to another one. For this reason, we choose to use two local search procedures based on two different neighborhood structures. The first neighborhood structure consists of selecting one machine and inserting it into a new cell. The second consists of selecting two machines from two different cells and swapping them. We apply these two local search procedures iteratively until there is no possible improvement of the current solution.

Shaking Phase

The main idea consists of defining a set of neighbourhood structures that yield a distance equal to k between the solution X and the new neighbour solution X'. This distance can be defined as the number of differences between the two vectors X and X'. We therefore define Nk as the neighbourhood structure given by applying k insertion moves at random.
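Putting the pieces together, a compact sketch of the Variable Neighborhood Search used in the improvement phase is given below; better is any callable comparing two solutions by fitness, and the neighbourhood implementations follow the two structures just described. All names and loop bounds are illustrative.

import numpy as np

def shake(sol, k, C, rng):
    """Apply k random insertion moves (distance k from sol)."""
    x = sol.copy()
    for i in rng.choice(x.size, size=k, replace=False):
        x[i] = rng.integers(0, C)
    return x

def local_search(sol, better, C):
    """Alternate single-machine insertions and pairwise swaps until no gain."""
    x, improved = sol.copy(), True
    while improved:
        improved = False
        for i in range(x.size):                       # insertion neighbourhood
            for c in range(C):
                y = x.copy(); y[i] = c
                if better(y, x):
                    x, improved = y, True
        for i in range(x.size):                       # swap neighbourhood
            for j in range(i + 1, x.size):
                if x[i] != x[j]:
                    y = x.copy(); y[i], y[j] = y[j], y[i]
                    if better(y, x):
                        x, improved = y, True
    return x

def vns(sol, better, C, n_max, rng):
    """Basic VNS: shake with growing k, descend, move on improvement."""
    x, k = local_search(sol, better, C), 1
    while k <= n_max:
        x2 = local_search(shake(x, k, C, rng), better, C)
        if better(x2, x):
            x, k = x2, 1
        else:
            k += 1
    return x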


COMPARATIVE STUDY

In order to show the competitiveness of the proposed EDA-CF algorithm, we provide in this section a comparative study with well-known approaches that have treated the Cell Formation problem. For all experiments, the proposed algorithm was coded in C++ and run on a Pentium IV computer with a 3.2 GHz processor and 1 GB of memory.

Test Data Set

In order to evaluate the goodness of the clusters obtained by the clustering heuristic for the MPCF problem, 30 problems taken from the literature were tested. These data sets cover a variety of sizes, ranging from 5 machines and 7 parts to 40 machines and 100 parts, and of difficulties, including both well-structured and ill-structured matrices. For all instances, the initial matrix is solved by the Estimation of Distribution Algorithm method and then improved by the Variable Neighborhood Search procedure. The cells are then formed and the machine layout in each cell is obtained optimally. Table 1 shows the different problems and their characteristics. The columns give, respectively, the sources of the data sets, the problem size, the number of cells C, the maximum number of machines per cell kmax, and the matrix density. All problems can be easily accessed from the references, and they are transcribed directly from the original articles in which they appeared. The appendix gives the block diagonal matrices for the solutions improved by the proposed algorithm. The maximum number of permissible cells C has been set equal to the best known number of cells found in the literature. The following equation expresses the density of the initial binary matrix, which indicates how the 1 entries are distributed inside the matrix:

Density = (Σ_{i=1}^{m} Σ_{j=1}^{n} aij) / (m × n)

Comparative Study

In this section, we evaluate the proposed algorithm by comparing it with the best results obtained by several well-known algorithms with respect to the Grouping Efficacy and the Percentage of Exceptional Elements measures. In all tests, the proposed EDA-CF algorithm proved its competitiveness against the best available solutions for the same required number of cells. As a stopping condition for our algorithm, we fixed the maximal computational time to 5 seconds and the maximal number of iterations of the Variable Neighborhood Search algorithm to 3. The values of the parameters are fixed as: ε=0.1; α1=50; α2=500; P=200 and P1=3.

Comparison Respecting the Grouping Efficacy Measure

In this subsection, we perform a comparative study with the best algorithms presented in the literature. These algorithms can be classified into two categories. The first category corresponds to population-based algorithms, including the Genetic Algorithm (GA) of Onwubolu & Mutingi (2001), the Grouping Genetic Algorithm (GGA) of Brown & Sumichrast (2001), the Evolutionary Algorithm (EA) of Gonçalves & Resende (2004) and the Hybrid Grouping Genetic Algorithm (HGGA) of James et al. (2007). The second category comprises the clustering-based methods, including ZODIAC of Chandrasekharan & Rajagopalan (1987), GRAFICS of Srinivasan & Narendran (1991), and the MST clustering algorithm of Srinivasan (1994). Table 2 reports the results obtained by the proposed algorithm and by these algorithms; their results were taken from the original citations.


Table 1. Test problems from cellular manufacturing literature

No.  References                                Size    C   kmax  Density
1    King & Nakornchai, 1982                   5×7     2   4     0.400
2    Waghodekar & Sahu, 1984                   5×7     2   5     0.5714
3    Seifoddini, 1989                          5×18    2   12    0.5111
4    Kusiak & Cho, 1992                        6×8     2   6     0.2987
5    Kusiak & Chow, 1987                       7×11    3   4     0.2250
6    Boctor, 1991                              7×11    3   4     0.2044
7    Seifoddini & Wolfe, 1986                  8×12    3   5     0.6100
8    Chandrasekharan & Rajagopalan, 1986       8×20    3   9     0.2400
9    Chandrasekharan & Rajagopalan, 1986       8×20    2   11    0.3067
10   Mosier & Taube, 1985                      10×10   3   4     0.3223
11   Chan & Milner, 1982                       10×15   3   5     0.3646
12   Stanfel, 1985                             14×14   5   6     0.1726
13   McCormick et al., 1972                    16×24   6   7     0.2240
14   King, 1980                                16×43   5   13    0.1831
15   Mosier & Taube, 1985                      20×20   5   5     0.2775
16   Carrie, 1973                              20×35   4   10    0.1957
17   Boe & Cheng, 1991                         20×35   5   8     0.2186
18   Chandrasekharan & Rajagopalan, 1989 - 1   24×40   7   8     0.1365
19   Chandrasekharan & Rajagopalan, 1989 - 2   24×40   7   8     0.1354
20   Chandrasekharan & Rajagopalan, 1989 - 3   24×40   7   8     0.1437
21   Chandrasekharan & Rajagopalan, 1989 - 4   24×40   9   8     0.1365
22   Chandrasekharan & Rajagopalan, 1989 - 5   24×40   9   7     0.1375
23   Chandrasekharan & Rajagopalan, 1989 - 6   24×40   9   7     0.1365
24   McCormick et al., 1972                    27×27   4   12    0.2977
25   Kumar & Vanelli, 1987                     30×41   11  6     0.1041
26   Stanfel, 1985                             30×50   12  7     0.1033
27   Stanfel, 1985                             30×50   11  7     0.1113
28   King & Nakornchai, 1982                   36×90   9   27    0.0935
29   McCormick et al., 1972                    37×53   2   35    0.4895
30   Chandrasekharan & Rajagopalan, 1987       40×100  10  6     0.1041

Table 2. Summary of GE performance evaluation results (each line lists one column of the table, with values given in problem order, No. 1 to 30; methods whose sources report results for only a subset of the problems list only those values)

Size: 5×7 5×7 5×18 6×8 7×11 7×11 8×12 8×20 8×20 10×10 10×15 14×24 16×24 16×43 20×20 20×35 20×35 24×40 24×40 24×40 24×40 24×40 24×40 27×27 30×41 30×50 30×50 36×90 37×53 40×100
C: 2 2 2 2 2 3 5 3 2 3 3 4 6 4 5 4 5 7 7 7 7 7 7 6 14 13 14 17 2 10
GA (no results for problems 1, 7, 13, 24, 28, 29): 62.50 77.36 76.92 50.00 70.37 85.25 55.91 72.79 92.00 63.48 86.25 34.16 66.30 44.44 100.00 85.11 73.51 37.62 34.76 34.06 40.96 48.28 37.55 83.90
GGA: 82.35 69.57 79.59 76.92 60.87 70.83 69.44 85.25 55.32 75.00 92.00 72.06 51.58 55.48 40.74 77.02 57.14 100.00 85.11 73.51 52.41 46.67 45.27 52.53 61.39 57.95 50.00 43.78 52.47 82.25
EA: 73.68 52.50 79.59 76.92 53.13 70.37 68.30 85.25 58.72 69.86 92.00 69.33 52.58 54.86 42.96 76.22 58.07 100.00 85.11 73.51 51.97 47.06 44.87 54.27 58.48 59.66 50.51 42.64 56.42 84.03
HGGA: 82.35 69.57 79.59 76.92 60.87 70.83 69.44 85.25 58.72 75.00 92.00 72.06 52.75 57.53 43.18 77.91 57.98 100.00 85.11 73.51 53.29 48.95 47.26 54.02 63.31 59.77 50.83 46.35 60.64 84.03
ZODIAC (26 reported values): 73.68 56.52 39.13 68.30 85.24 58.33 70.59 92.00 64.36 32.09 53.76 21.63 75.14 100.00 85.10 37.85 20.42 18.23 17.61 52.14 33.46 46.06 21.11 32.73 52.21 83.92
GRAFICS (26 reported values): 73.68 60.87 53.12 68.30 85.24 58.33 70.59 92.00 64.36 45.52 54.39 38.26 75.14 100.00 85.10 73.51 43.27 44.51 41.67 47.37 55.43 56.32 47.96 39.41 52.21 83.92
MST (19 reported values): 85.24 58.72 70.59 64.36 48.70 54.44 75.14 100.00 85.10 73.51 51.81 44.72 44.17 51.00 55.29 58.70 46.30 40.05 83.66
EDA-CF: 73.68 69.57 79.59 76.92 58.62 70.37 68.30 85.25 58.72 70.59 92.00 70.51 51.96 54.86 43.18 76.27 57.98 100.00 85.11 76.97 72.92 53.74 48.95 54.98 45.22 59.43 50.78 45.94 55.43 83.81
CPU (s): 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.015 0.015 0.046 0.031 1.232 0.078 0.093 0.031 0.092 5.171 0.732 0.670 0.233 7.260 0.562 0.447 1.406 5.094 4.318 7.421

As seen in Table 2, in all the benchmark problems, the grouping efficacy of the solution obtained by the proposed method is either better than that of the other methods or equal to the best one. We note that the solutions obtained by the GA method for problems 1, 7, 13, 24, 28 and 29 were not available. In five problems, namely 20, 21, 22, 23 and 24, the grouping efficacy of the solution obtained by the proposed method is better than that of all the other methods; in other words, the proposed method outperforms all the other methods, and the best solutions for these problems are reported here for the first time. In five problems, namely 2, 3, 9, 15 and 17, the solution obtained by the proposed method is as good as the best solution available in the literature. In five problems, namely 4, 8, 10, 18 and 19, all the methods obtained the same grouping efficacy.

Compared with the clustering methods, it is clear that the results obtained by the proposed algorithm are equal to or better than those of the ZODIAC, GRAFICS and MST methods in all cases except problems 25 and 30. More specifically, EDA-CF obtains, for 6 problems (23%), grouping efficacy values equal to the best ones found in the literature by the three compared clustering methods, and it improves the grouping efficacy values for 19 problems (73%).

Comparison Respecting the Percentage of Exceptional Elements Measure

Table 3 provides a comparison of the proposed algorithm against the best results available in the literature, with respect to the Percentage of Exceptional Elements criterion.



PEa represents the best-known Percentage of Exceptional Elements found in the literature. We note that the compared solutions for problems 15, 16, 17, 24, 26, 27, 28 and 29 were not available. The results show that, in all the benchmark problems, the number of exceptional elements of the solution obtained by the proposed method is either better than or equal to the best reached values. In 11 problems, namely 3, 6, 7, 12, 13, 14, 19, 20, 21, 22 and 23, the PE of the solution obtained by EDA-CF is better than that of all the other methods; in other words, the proposed method outperforms all the other methods. In nine problems, namely 1, 4, 5, 8, 9, 10, 11, 18 and 25, all the methods obtained the same Percentage of Exceptional Elements.

Table 3. Comparison between the obtained results and the best-known results with respect to the PE criterion (a dash indicates that no best-known value was available)

No.  Problem Source                            Size    C   PE      CPU (s)  PEa
1    King & Nakornchai, 1982                   5×7     2   0.000   0.000    0.000
2    Waghodekar & Sahu, 1984                   5×7     2   0.150   0.000    0.125
3    Seifoddini, 1989                          5×18    2   0.000   0.000    0.1957
4    Kusiak & Cho, 1992                        6×8     2   0.0909  0.000    0.0909
5    Kusiak & Chow, 1987                       7×11    2   0.1304  0.000    0.1304
6    Boctor, 1991                              7×11    3   0.0952  0.000    0.1905
7    Seifoddini & Wolfe, 1986                  8×12    5   0.1714  0.000    0.2857
8    Chandrasekharan & Rajagopalan, 1986       8×20    3   0.1475  0.000    0.1475
9    Chandrasekharan & Rajagopalan, 1986       8×20    3   0.2967  0.000    0.2967
10   Mosier & Taube, 1985                      10×10   3   0.000   0.000    0.000
11   Chan & Milner, 1982                       10×15   3   0.000   0.015    0.000
12   Stanfel, 1985                             14×24   4   0.0328  0.015    0.1639
13   McCormick et al., 1972                    16×24   8   0.3721  0.031    0.4302
14   King, 1980                                16×43   4   0.2063  0.031    0.2222
15   Mosier & Taube, 1985                      20×20   6   0.3693  0.078    –
16   Carrie, 1973                              20×35   5   0.1985  0.031    –
17   Boe & Cheng, 1991                         20×35   5   0.1764  0.062    –
18   Chandrasekharan & Rajagopalan, 1989 - 1   24×40   7   0.000   0.031    0.000
19   Chandrasekharan & Rajagopalan, 1989 - 2   24×40   7   0.0308  0.451    0.0769
20   Chandrasekharan & Rajagopalan, 1989 - 3   24×40   7   0.1087  5.171    0.1527
21   Chandrasekharan & Rajagopalan, 1989 - 4   24×40   7   0.0992  0.732    0.1527
22   Chandrasekharan & Rajagopalan, 1989 - 5   24×40   7   0.2652  0.670    0.3740
23   Chandrasekharan & Rajagopalan, 1989 - 6   24×40   7   0.2824  0.233    0.4214
24   McCormick et al., 1972                    27×27   6   0.2350  0.203    –
25   Kumar & Vanelli, 1987                     30×41   14  0.1094  0.219    0.1094
26   Stanfel, 1985                             30×50   13  0.2754  3.109    –
27   Stanfel, 1985                             30×50   14  0.1225  0.406    –
28   King & Nakornchai, 1982                   36×90   17  0.1254  0.969    –
29   McCormick et al., 1972                    37×53   3   0.000   0.109    –
30   Chandrasekharan & Rajagopalan, 1987       40×100  10  0.0907  7.421    0.0857


CONCLUSION

Cellular manufacturing is a production technique that increases productivity and efficiency on the production floor. In this chapter, we have presented the first Estimation of Distribution Algorithm (EDA) method for solving the Machine Part Cell Formation Problem. Detailed numerical experiments have been carried out to investigate the EDA's performance. Although the EDA approach does not require any problem-specific information, the use of sensible heuristics can improve the optimisation and speed up convergence. For this reason, we used the Variable Neighborhood Search (VNS) procedure in the improvement phase of the algorithm. The results from the test cases presented here have shown that the proposed EDA-CF algorithm is


a very competitive algorithm compared with the previously published metaheuristics applied to the same problem. It has been shown that EDAs provide efficient and accurate solutions for the test cases. The results are promising and encourage further studies on other versions of Group Technology problems, where sequence data, machine utilization and routings can be introduced.

REFERENCES

Andrés, C., & Lozano, S. (2006). A particle swarm optimization algorithm for part–machine grouping. Robotics and Computer-Integrated Manufacturing, 22, 468–474. doi:10.1016/j.rcim.2005.11.013


Askin, R. G., Creswell, J. B., Goldberg, J. B., & Vakharia, A. J. (1991). A Hamiltonian path approach to reordering the part-machine matrix for cellular manufacturing. International Journal of Production Research, 29, 1081–1100. doi:10.1080/00207549108930121 Baluja, S. (1994). Population-based incremental learning: A method for integrating genetic search based function optimization and competitive learning. (Technical Report CMU-CS-94-163). Computer Science Department, Carnegie Mellon University. Baluja, S., & Davies, S. (1997). Using optimal dependency-trees for combinatorial optimization: Learning the structure of the search space. In Proceedings of the 1997 International Conference on Machine Learning.


Baluja, S., & Davies, S. (1998). Fast probabilistic modeling for combinatorial optimization. In AAAI-98. Bengoetxea, E., Larranaga, P., Bloch, I., & Perchant, A. (2001b). Solving graph matching with EDAs using a permutation–based representation. In Larranaga, P., & Lozano, J. A. (Eds.), Estimation of distribution algorithms. A new tool for evolutionary computation. Kluwer Academic Publishers. doi:10.1007/978-1-4615-1539-5_12 Bengoetxea, E., Larranaga, P., Bloch, I., Perchant, A., & Boeres, C. (2000). Inexact graph matching using learning and simulation of Bayesian networks. An empirical comparison between different approaches with synthetic data. In Workshop Notes of CaNew2000: Workshop on Bayesian and Causal Networks: From Inference to Data Mining, Fourteenth European Conference on Artificial Intelligence, ECAI2000. Berlin. Boctor, F. (1991). A linear formulation of the machine-part cell formation problem. International Journal of Production Research, 29(2), 343–356. doi:10.1080/00207549108930075 Brown, E., & Sumichrast, R. (2001). CFGGA: A grouping genetic algorithm for the cell formation problem. International Journal of Production Research, 36, 3651–3669. doi:10.1080/00207540110068781 Burbidge, J. L. (1963). Production flow analysis. Production Engineering, 42, 742–752. doi:10.1049/tpe.1963.0114 Caux, C., Bruniaux, R., & Pierreval, H. (2000). Cell formation with alternative process plans and machine capacity constraints: A new combined approach. International Journal of Production Economics, 64(1-3), 279–284. doi:10.1016/S0925-5273(99)00065-1


Chan, H. M., & Milner, D. A. (1982). Direct clustering algorithm for group formation in cellular manufacture. Journal of Manufacturing Systems, 1, 65–75. doi:10.1016/S0278-6125(82)80068-X Chandrasekharan, M. P., & Rajagopalan, R. (1986a). MODROC: An extension of rank order clustering for group technology. International Journal of Production Research, 24(5), 1221–1264. doi:10.1080/00207548608919798 Chandrasekharan, M. P., & Rajagopalan, R. (1987). ZODIAC: An algorithm for concurrent formation of part-families and machine-cells. International Journal of Production Research, 25(6), 835–850. doi:10.1080/00207548708919880 Chandrasekharan, M. P., & Rajagopalan, R. (1989). Groupability: Analysis of the properties of binary data matrices for group technology. International Journal of Production Research, 27(6), 1035–1052. doi:10.1080/00207548908942606 Chickering, D., Heckerman, D., & Meek, C. (1997). A Bayesian approach to learning Bayesian networks with local structure. In Proceedings of the Thirteenth Conference on Uncertainty in Artificial Intelligence (pp. 80–89). (Technical Report MSR-TR-97-07), Microsoft Research, August 1997. De Bonet, J., Isbell, C. L., & Viola, P. (1997). MIMIC: Finding optima by estimating probability densities. Advances in Neural Information Processing Systems, 9, 424–430. De Campos, L. M., Gamez, J. A., Larranaga, P., Moral, S., & Romero, T. (2001). Partial abductive inference in Bayesian networks: An empirical comparison between GAs and EDAs. In Larranaga, P., & Lozano, J. A. (Eds.), Estimation of distribution algorithms. A new tool for evolutionary computation. Kluwer Academic Publishers. doi:10.1007/978-1-4615-1539-5_16


Etxeberria, R., & Larranaga, P. (1999). Optimization with Bayesian networks. In Proceedings of the Second Symposium on Artificial Intelligence. Adaptive Systems. CIMAF 99, (pp. 332-339). Cuba. Gonçalves, J., & Resende, M. (2004). An evolutionary algorithm for manufacturing cell formation. Computers & Industrial Engineering, 47, 247–273. doi:10.1016/j.cie.2004.07.003 Gonçalves, J. F., & Resende, M. (2002). A hybrid genetic algorithm for manufacturing cell formation. Technical report. Gonzalez, C., Lozano, J. A., & Larranaga, P. (2002). Mathematical modelling of UMDAc algorithm with tournament selection: Behaviour on linear and quadratic functions. International Journal of Approximate Reasoning, 31(3), 313–340. doi:10.1016/S0888-613X(02)00092-0 Harik, G. (1994). Finding multiple solutions in problems of bounded difficulty. Tech. Rep. IlliGAL Report No. 94002, University of Illinois at Urbana-Champaign, Urbana, IL. Harik, G. (1999). Linkage learning via probabilistic modeling in the ECGA. Tech. Rep. IlliGAL Report No. 99010, University of Illinois at Urbana-Champaign. Harik, G., Lobo, F., & Goldberg, D. E. (1998). The compact genetic algorithm, (pp. 523-528). (IlliGAL Report No. 97006). James, T. L., Brown, E. C., & Keeling, K. B. (2007). A hybrid grouping genetic algorithm for the cell formation problem. Computers & Operations Research, 34, 2059–2079. doi:10.1016/j.cor.2005.08.010 Joines, J. A., Culbreth, C. T., & King, R. E. (1996). Manufacturing cell design: An integer programming model employing genetic algorithms. IIE Transactions, 28(1), 69–85. doi:10.1080/07408179608966253

Kaparthi, S., Suresh, N. C., & Cerveny, R. P. (1993). An improved neural network leader algorithm for part-machine grouping in group technology. European Journal of Operational Research, 69, 342–355. doi:10.1016/0377-2217(93)90020-N Khator, S. K., & Irani, S. A. (1987). Cell formation in group technology: A new approach. Computers & Industrial Engineering, 12, 131–142. doi:10.1016/0360-8352(87)90006-4 King, J. R. (1980). Machine-component grouping formation in group technology. International Journal of Management Science, 8(2), 193–199. King, J. R., & Nakornchai, V. (1982). Machine-component group formation in group technology: Review and extension. International Journal of Production Research, 20(2), 117–133. doi:10.1080/00207548208947754 Kumar, K. R., & Chandrasekharan, M. P. (1990). Grouping efficacy: A quantitative criterion for block diagonal forms of binary matrices in group technology. International Journal of Production Research, 28(2), 233–243. doi:10.1080/00207549008942706 Kusiak, A. (1987). The generalized group technology concept. International Journal of Production Research, 25, 561–569. doi:10.1080/00207548708919861 Kusiak, A., & Chow, W. S. (1987). Efficient solving of the group technology problem. Journal of Manufacturing Systems, 6(2), 117–124. doi:10.1016/0278-6125(87)90035-5 Larranaga, P., Etxeberria, R., Lozano, J. A., & Pena, J. M. (2000). Combinatorial optimization by learning and simulation of Bayesian networks. In Proceedings of the Sixteenth Conference on Uncertainty in Artificial Intelligence, (pp. 343–352). Stanford.


Li, S. Z. (1995). Markov random field modeling in computer vision. Springer-Verlag. Lozano, J. A., Sagarna, R., & Larranaga, P. (2001b). Solving job scheduling with estimation of distribution algorithms. In Larranaga, P., & Lozano, J. A. (Eds.), Estimation of distribution algorithms. A new tool for evolutionary computation (pp. 231–242). Kluwer Academic Publishers. doi:10.1007/978-1-4615-1539-5_11 Lozano, S., Adenso-Diaz, B., Eguia, I., & Onieva, L. (1999). A one step tabu search algorithm for manufacturing cell design. The Journal of the Operational Research Society, 50, 509–516. Mahdavi, I., Paydar, M. M., Solimanpur, M., & Heidarzade, A. (2009). Genetic algorithm approach for solving a cell formation problem in cellular manufacturing. Expert Systems with Applications, 36, 6598–6604. doi:10.1016/j.eswa.2008.07.054

Muhlenbein, H., & Mahnig, T. (1999b). FDA - A scalable evolutionary algorithm for the optimization of additively decomposed functions. Evolutionary Computation, 7, 353–376. doi:10.1162/evco.1999.7.4.353 Muhlenbein, H., Mahnig, T., & Ochoa, A. (1999). Schemata, distributions and graphical models in evolutionary optimization. Journal of Heuristics, 5, 215–247. doi:10.1023/A:1009689913453 Muhlenbein, H., & Paaß, G. (1996). From recombination of genes to the estimation of distributions I. Binary parameters. In Parallel Problem Solving from Nature - PPSN IV, Lecture Notes in Computer Science, 1141 (pp. 178–187). Onwubolu, G. C., & Mutingi, M. (2001). A genetic algorithm approach to cellular manufacturing systems. Computers & Industrial Engineering, 39(1–2), 125–144. doi:10.1016/S0360-8352(00)00074-7

McAuley, J. (1972). Machine grouping for efficient production. Production Engineering, 51(2), 53–57. doi:10.1049/tpe.1972.0006

Pearl, J. (1988). Probabilistic reasoning in intelligent systems. Palo Alto, CA: Morgan Kaufmann Publishers.

McCormick, W. T. Jr, Schweitzer, P. J., & White, T. W. (1972). Problem decomposition and data reorganization by a cluster technique. Operations Research, 20(5), 993–1009. doi:10.1287/opre.20.5.993

Pelikan, M., Goldberg, D. E., & Cantu-Paz, E. (1999a). BOA: The Bayesian optimization algorithm. In Banzhaf, W., Daida, J., Eiben, A. E., Garzon, M. H., et al. (Eds.), Proceedings of the Genetic and Evolutionary Computation Conference GECCO-99. Morgan Kaufmann. Pelikan, M., & Goldberg, D. E. (2000). Hierarchical problem solving by the Bayesian optimization algorithm. IlliGAL Report No. 2000002. Urbana, IL: Illinois Genetic Algorithms Laboratory, University of Illinois at Urbana-Champaign.

Mladenović, N., & Hansen, P. (1997). Variable neighborhood search. Computers & Operations Research, 24, 1097–1100. doi:10.1016/S0305-0548(97)00031-2 Muhlenbein, H. (1998). The equation for response to selection and its use for prediction. Evolutionary Computation, 5(3), 303–346. doi:10.1162/evco.1997.5.3.303


Pelikan, M., & Muhlenbein, H. (1999). The bivariate marginal distribution algorithm. In Roy, R., Furuhashi, T., & Chawdhry, P. K. (Eds.), Advances in soft computing - engineering design and manufacturing (pp. 521–535). London, UK: Springer.


Robles, V., de Miguel, P., & Larranaga, P. (2001). Solving the travelling salesman problem with estimation of distribution algorithms. In Larranaga, P., & Lozano, J. A. (Eds.), Estimation of distribution algorithms. A new tool for evolutionary computation. Kluwer Academic Publishers. Roure, J., Sanguesa, R., & Larranaga, P. (2001). Partitional clustering by means of estimation of distribution algorithms. In Larranaga, P., & Lozano, J. A. (Eds.), Estimation of distribution algorithms. A new tool for evolutionary computation. Kluwer Academic Publishers. Sagarna, R., & Larranaga, P. (2001). Solving the knapsack problem with estimation of distribution algorithms. In Larranaga, P., & Lozano, J. A. (Eds.), Estimation of distribution algorithms. A new tool for evolutionary computation. Kluwer Academic Publishers. Santana, R. (2003a). A Markov network based factorized distribution algorithm for optimization. Proceedings of the 14th European Conference on Machine Learning (ECMLPKDD 2003); Lecture Notes in Artificial Intelligence, 2837, (pp. 337–348). Berlin, Germany: Springer-Verlag. Santana, R. (2005). Estimation of distribution algorithms with Kikuchi approximation. Evolutionary Computation, 13, 67–98. doi:10.1162/1063656053583496 Sierra, B., Jimenez, E., Inza, I., Larranaga, P., & Muruzabal, J. (2001). Rule induction using estimation of distribution algorithms. In Larranaga, P., & Lozano, J. A. (Eds.), Estimation of distribution algorithms. A new tool for evolutionary computation. Kluwer Academic Publishers. doi:10.1007/978-1-4615-1539-5_15 Sofianopoulou, S. (1997). Application of simulated annealing to a linear model for the formation of machine cells in group technology. International Journal of Production Research, 35, 501–511. doi:10.1080/002075497195876

Solimanpur, M., Vrat, P., & Shankar, R. (2003). Ant colony optimization algorithm to the inter-cell layout problem in cellular manufacturing. European Journal of Operational Research, 157(3), 592–606. doi:10.1016/S0377-2217(03)00248-0 Srinivasan, G. (1994). A clustering algorithm for machine cell formation in group technology using minimum spanning trees. International Journal of Production Research, 32, 2149–2158. doi:10.1080/00207549408957064 Srinivasan, G., & Narendran, T. T. (1991). GRAFICS - A non-hierarchical clustering algorithm for group technology. International Journal of Production Research, 29(3), 463–478. doi:10.1080/00207549108930083 Stawowy, A. (2006). Evolutionary strategy for manufacturing cell design. OMEGA: The International Journal of Management Science, 34(1), 1–18. doi:10.1016/j.omega.2004.07.016 Venugopal, V., & Narendran, T. T. (1992a). A genetic algorithm approach to the machine component grouping problem with multiple objectives. Computers & Industrial Engineering, 22(4), 469–480. doi:10.1016/0360-8352(92)90022-C Xu, H., & Wang, H. P. (1989). Part family formation for GT applications based on fuzzy mathematics. International Journal of Production Research, 27(9), 1637–1651. doi:10.1080/00207548908942644 Zhang, Q., Sun, J., Tsang, E., & Ford, J. (2004). Hybrid estimation of distribution algorithm for global optimisation. Engineering Computations, 2(1), 91–107. doi:10.1108/02644400410511864 Zhao, C., & Wu, Z. (2000). A genetic algorithm for manufacturing cell formation with multiple routes and multiple objectives. International Journal of Production Research, 38(2), 385–395. doi:10.1080/002075400189473


APPENDIX

The appendix presents the block diagonal matrices of the improved solutions obtained by the proposed algorithm for five of the test problems:

Table 4. Problem 20
Table 5. Problem 21
Table 6. Problem 22
Table 7. Problem 23
Table 8. Problem 24

This work was previously published in Operations Management Research and Cellular Manufacturing Systems: Innovative Methods and Approaches, edited by Vladimir Modrák and R. Sudhakara Pandian, pp. 164-188, copyright 2012 by Business Science Reference (an imprint of IGI Global).


Chapter 41

A LabVIEW-Based Remote Laboratory:

Architecture and Implementation

Yuqiu You
Morehead State University, USA

ABSTRACT

Current technology enables the remote access of equipment and instruments via the Internet. While more and more remote control solutions have been applied to industry via Local Area Networks (LANs), Wide Area Networks (WANs), and the Internet, there exist requirements for the applications of such technologies in the academic environment (Salzmann, Latchman, Gillet, and Crisalle, 2003). One typical application of remote control solutions is the development of a remote virtual laboratory. The development of a remote-laboratory facility will enable participation in laboratory experiences by distance students. The ability to offer remote students lab experiences is vital to effective learning in the areas of engineering and technology. This chapter introduces a LabVIEW-based remote wet process control laboratory developed for manufacturing automation courses. The system architecture, hardware integration, hardware and software interfacing, programming tools, lab development based on the system, and future enhancement are demonstrated and discussed in the chapter.

DOI: 10.4018/978-1-4666-1945-6.ch041

INTRODUCTION

As distance learning has progressed from basic television broadcasting into web-based Internet telecasting, it has become a very effective teaching tool (Kozono, Akiyama and Shimomura, 2002). Laboratory experiences are important for engineering and technology students to reinforce theories and concepts presented in class lectures. The development of a remote-laboratory facility will enable participation in laboratory experiences by distance students. The ability to offer remote students these lab experiences is vital to effective learning. The development of a remote


virtual laboratory is also motivated by the fact that, as never before, the demand for access to laboratory facilities is growing rapidly in engineering and technology colleges. Making the laboratory infrastructure accessible as virtual laboratories, available 24 hours a day and 7 days a week, goes far in addressing these challenges and would also lower the costs of operating laboratories. Additionally, remote virtual laboratories give students the opportunity to explore the advanced technologies used in manufacturing remote control and monitoring systems, and thereby prepare for their future careers. This chapter introduces a LabVIEW-based remote process control system, established to provide a web-based online virtual laboratory for an online computer-integrated manufacturing course. The physical setup of the system includes a wet process system, a FieldPoint control system with an NI cFP-2000 intelligent controller and eight I/O modules, a desktop computer, an Ethernet hub, and an Internet D-Link camera. The wet process system is composed of three water tanks, pumps, discrete valves, continuous valves, temperature sensors, level sensors, and pressure transmitters. The software used for the system interfacing is LabVIEW 8.0 from National Instruments, together with the HyperText Markup Language (HTML). The desktop computer works as both a control server and a web server for the system, providing a local interface for system control and maintenance and a remote interface for students to control and monitor the wet process through the Internet. The desktop computer, the intelligent controller, and the Internet camera communicate with each other through the Ethernet hub and also connect to the Internet. All the sensors, valves, and pumps in the wet process system are wired to the I/O modules of the intelligent controller. Details of the system wiring are examined later in this chapter. This system has been used in the lab of a computer-integrated manufacturing course for graduate students in the Manufacturing Technology option

of the Technology Management program. Students are introduced to system integration, process control, LabVIEW FieldPoint programming, and the development of web-based manufacturing applications through their involvement in the lab activities over the Internet. This chapter explores the integration of mechatronic equipment, computer software, and networking techniques to achieve a remotely controllable system. It demonstrates the development of a LabVIEW-based FieldPoint control system for a virtual laboratory. The implementation of the laboratory, future enhancements, and related research are discussed.

BACKGROUND

As in other engineering and technology fields, the laboratory experiences associated with a technology curriculum are vital to understanding concepts (Saygin & Kahraman, 2004). They are also typically limited to a short group session each week due to time and space constraints, and increasingly popular distance courses are hard-pressed to provide realistic lab experience at all. Simulation, which has seen increased use in education, is an especially valuable tool when it precedes instruction, but it does not provide the problem-solving realism of actual hands-on experience (Deniz, Bulancak & Ozcan, 2003). Completing a project by remote operation of real equipment more nearly replicates problem solving as it occurs in the workplace, and lends itself to teaching the processes and practices involved in true experimentation (Cooper, Donnelly & Ferreira, 2002). With the rapid development of computer networks and Internet technologies, along with dramatic improvements in the processing power of personal computers, remote virtual laboratories are now a reality. In the early 1990s, the first remotely shared control system laboratory was proposed at the 1991 American Society for Engineering Education (ASEE) Frontiers in Education Conference.


The system enabled sharing of laboratory data between universities using networked workstations. By its nature, automated manufacturing lends itself to remote access for education, and education that incorporates remote experimentation may better prepare students for the workplace of the future. With the development of standards for online lab sessions, the Accreditation Board for Engineering and Technology validated remote labs as educational tools (Carnevale, 2002). Saygin and Kahraman (2004) successfully implemented lab exercises for manufacturing education and systems-related courses using remote technology. Asumadu and Tanner (2006), who developed a remote wiring and measurement laboratory that utilizes a "virtual breadboard," acknowledge the flexibility and spontaneity of the tool, which has potential for global access. Gurocak (2001) describes Programmable Logic Controller (PLC) and robot labs delivered via the remote lab concept through an Internet connection to the PLCs and closed-circuit TV of the labs. Thamma, Huang, Lou, and Diez (2004) integrated computer-integrated manufacturing equipment into a remote lab system via a Java-based web site. The virtual laboratory demonstrated in this chapter was developed using LabVIEW FieldPoint control technology and a web-based application. The remote control panel provides real-time control and monitoring functions to remote clients, along with live video of the real system operation. Remote clients can access the control panel from a web browser on their computers with the LabVIEW Runtime Engine installed as a plug-in. LabVIEW, developed by National Instruments, is a graphical programming language for building virtual instruments (VIs) for control systems. A VI developed in the LabVIEW environment provides an interface between a user and a control process. The main concept of such an interface is to provide a general view of the process and facilitate full control of the operations. LabVIEW is widely used in developing automatic control solutions in


real-world industries, research studies, and academic laboratories. The locally controlled setup can be turned into a remotely controlled one by moving the user interface away from the physical setup with web-based functions. LabVIEW also provides advanced communication methods for integrating LabVIEW VIs with other applications, such as ActiveX containers, File Input and Output, and .NET constructor nodes. FieldPoint is a proprietary method for interfacing devices to computers developed by National Instruments, but it is very similar in principle to the fieldbus interfacing concept used by many process control equipment suppliers. The idea of fieldbus grew out of the problem of interfacing hundreds or thousands of sensors and actuators to Programmable Logic Controllers (PLCs) and process control computers in large industrial plants. Instead of connecting each sensor or actuator to a central plant computer with hundreds or thousands of kilometers of wiring, the idea of fieldbus was to connect related groups of sensors and actuators to a local microcomputer that communicates with the central plant computer via an Ethernet local area network (LAN). The result was an enormous reduction in wiring and a corresponding increase in reliability. FieldPoint control in the LabVIEW environment is composed of four components: the FieldPoint interface hardware, the FieldPoint Object Linking and Embedding for Process Control (OPC) server, the LabVIEW FieldPoint handler, and the Measurement and Automation Explorer (MAX). The FieldPoint interface hardware includes an intelligent controller, the I/O modules, and the devices connected to the modules. The FieldPoint OPC server is an invisible element of the software that is invoked whenever the FieldPoint connection is set up with a LabVIEW application programmed for the control system. The communication with the FieldPoint unit does not necessarily occur at the precise instant when the application instructs it to happen. Instead, communication occurs both as a direct result of requests from the application


and also as a result of the configuration of the intelligent controller. Any LabVIEW application that uses FieldPoint can be conveniently structured by incorporating all FieldPoint operations into a single module, a FieldPoint handler. This module performs four different operations: initialization, close, read, and write (sketched in code at the end of this section). The handler directs all communications between the FieldPoint unit and the LabVIEW application. The MAX provides an interface for the setup and configuration of the FieldPoint hardware. From the MAX interface, the LabVIEW application can locate and recognize the FieldPoint hardware, and the devices in the FieldPoint unit can be tested through data communication directly from the MAX. In the system introduced in this chapter, a virtual interface programmed in the LabVIEW graphical language provides a control panel for users to interact with the control process through FieldPoint Ethernet communication and the communication between the FP controller and the I/O modules. The remote laboratory introduced in this chapter provides an approach to implementing LabVIEW interfacing technology in manufacturing process control systems to provide remote

virtual lab activities for students in a manufacturing engineering program. The development of this laboratory also lets students explore the integration of computer and networking technologies into manufacturing control systems for higher flexibility and productivity. Research and experiments related to web-based manufacturing control systems and remote virtual laboratories are being conducted on this system.
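Although LabVIEW programs are graphical rather than textual, the handler structure described above, a single module exposing the initialization, read, write, and close operations, can be outlined in code. The following Python sketch is purely illustrative (the class and method names are hypothetical, not part of the NI toolchain), but it shows how funneling all FieldPoint traffic through one module isolates the rest of the application from the I/O layer:

```python
# Illustrative sketch of a FieldPoint "handler": one object through which
# all I/O traffic passes, mirroring the four operations described above.
# Hypothetical names; a real LabVIEW handler is a graphical VI module.

class FieldPointHandler:
    def __init__(self, controller_ip: str):
        self.controller_ip = controller_ip
        self.connected = False
        self._cache = {}  # last known value per channel address

    def initialize(self) -> None:
        # In LabVIEW this opens the FieldPoint OPC server connection;
        # here we only mark the handler as ready.
        self.connected = True

    def read(self, channel: str):
        # e.g. channel = r"FP@139_102_29_56\cFP-AI-110@1\Channel 5"
        if not self.connected:
            raise RuntimeError("call initialize() first")
        return self._cache.get(channel)

    def write(self, channel: str, value) -> None:
        if not self.connected:
            raise RuntimeError("call initialize() first")
        self._cache[channel] = value  # stand-in for a real output command

    def close(self) -> None:
        self.connected = False
```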

SYSTEM OVERVIEW

Physical Setup of the System

The technology used to develop the virtual laboratory system in the LabVIEW environment is FieldPoint. As mentioned earlier in this chapter, the physical setup of the system includes a wet process system, a FieldPoint control system with an NI cFP-2000 intelligent controller and eight I/O modules, a desktop computer, an Ethernet hub, and an Internet D-Link camera. As shown in Figure 1, the intelligent controller, the Internet camera, and the desktop computer are connected to an

Figure 1. The system setup


Ethernet hub and then connected to the Internet. Clients (students) can access the remote control panel of the system over the Internet. The wet process control system is composed of three tanks, two pumps, five discrete valves, and two continuous valves, as shown in Figures 2 and 3. The sensors used in the system include three temperature sensors, three level sensors, one flow rate sensor, and two pressure sensors. The temperature sensors and level sensors monitor each tank's water level and temperature. Two pressure sensors were installed to monitor the incoming flow pressure of tank 1 and tank 2, and a flow sensor was installed to measure the incoming flow rate of the main tank. Figure 2 shows a picture of the real wet process system setup. In order to better demonstrate the level changes in each tank, the water was dyed green. Also, a physical control panel with pushbuttons and a panel with light indicators were added to the system for local maintenance and control. The intelligent controller and its I/O modules can be seen in the picture as a blue panel mounted on the wall. Figure 3 shows the components of the physical setup, giving a clearer view of the system layout, with the pipeline

Figure 2. Picture of the wet process control system


connections running between tanks and the locations of all the system components. All the valves, pumps, and sensors were wired to the input/output modules of the intelligent controller. Four input/output modules are used in this system: the analog input module cFP-AI-110, the analog output module cFP-AO-200, the digital output module cFP-DO-400, and the temperature module cFP-RTD-124. The National Instruments cFP-AI-110 is an 8-channel single-ended input module for direct measurement of millivolt, low-voltage, or milliampere current signals from a variety of sensors and transmitters. It delivers filtered, low-noise analog inputs with 16-bit resolution, and features overranging, HotPnP (plug-and-play) operation, and onboard diagnostics. The National Instruments cFP-AO-200 is an 8-channel analog output module for 4 to 20 mA and 0 to 20 mA current loops. The module includes open-circuit detection for wiring and sensor troubleshooting and short-circuit protection against wiring errors. It features HotPnP operation, so it is automatically detected and identified by the configuration software. The National Instruments cFP-DO-400 module features eight sourcing digital


Figure 3. Physical setup diagram of the wet process system

output channels. Each channel is compatible with voltages from 5 to 30 VDC and can source up to 2 A, with a maximum of 9 A² per module (the sum of the squares of the output currents from all eight channels must not exceed 9; a quick check of this rule is sketched below). Each channel has an LED to indicate the channel's on/off state. The module features 2300 V transient isolation between the output channels and the backplane. It also features HotPnP operation and is automatically detected and identified by the configuration software. The National Instruments cFP-RTD-124 is an 8-channel input module for direct measurement of 2- and 4-wire RTD temperature sensor signals. With current excitation, signal conditioning, double-insulated isolation, input noise filtering, and a high-accuracy delta-sigma 16-bit analog-to-digital converter, it delivers reliable, accurate temperature or resistance measurements. Table 1 provides a detailed list of the input and output devices in the physical setup of the system and the FieldPoint I/O module each was wired to. As shown in Table 1, the discrete valves and the pumps were wired to the digital output module; the continuous control valves were wired to the analog output module;

the temperature sensors were wired to the RTD module for temperature readings; and the level sensors, the flow rate sensor, and the pressure sensors were wired to the analog input module.
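As an aside, the cFP-DO-400 loading rule quoted above reduces to simple arithmetic that can be checked directly. A minimal sketch (the channel currents are made-up example values):

```python
# Check the cFP-DO-400 loading rule described above: each of the eight
# channels may source up to 2 A, and the sum of the squares of all eight
# channel currents must not exceed 9 (A^2).

def do400_load_ok(currents_a):
    assert len(currents_a) == 8, "cFP-DO-400 has eight output channels"
    per_channel_ok = all(0.0 <= i <= 2.0 for i in currents_a)
    return per_channel_ok and sum(i * i for i in currents_a) <= 9.0

# Eight channels at 1.06 A give a sum of squares of about 8.99, just
# inside the limit; two at 2 A plus six at 0.5 A give 9.5, outside it.
print(do400_load_ok([1.06] * 8))              # True
print(do400_load_ok([2.0, 2.0] + [0.5] * 6))  # False
```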

LabVIEW Interfacing

As mentioned in the previous section, the devices of the wet process system, including all the valves, pumps, and sensors, were wired to the four input/output modules of the FieldPoint controller unit. All these devices were set up and configured through the Measurement and Automation Explorer (MAX), and each of them was assigned a unique ID starting with an IP address, as shown in Table 1. The FieldPoint controller unit is recognized by the computer as a network node with its IP address; for the system demonstrated here, 139.102.29.56 was assigned to the controller. The four input/output modules are then recognized as different communication ports under this IP address; in this system, they are recognized as ports 1, 2, 5, and 7, respectively. Each device wired to the same input/output module is identified by a unique channel number. The unique ID for each device is therefore a combination of the IP address (identifying the controller unit), the port number (identifying the specific module), and the channel number (identifying the device).


Table 1. I/O addressing of the control system

LabVIEW Addressing in Programming | Device # | Device Description

FP@139_102_29_56\cFP-DO-400@7 (Digital Output Module)
\Channel 0 | FZ-101 | Pump 1
\Channel 1 | FZ-102 | Pump 2
\Channel 2 | FV-201 | Discrete valve 1
\Channel 3 | FV-202 | Discrete valve 2
\Channel 4 | FV-203 | Discrete valve 3
\Channel 5 | FV-204 | Discrete valve 4
\Channel 6 | FV-205 | Discrete valve 5

FP@139_102_29_56\cFP-AO-200@5 (Analog Output Module)
\Channel 0 | ZZ-301 | Continuous control valve 1
\Channel 1 | ZZ-302 | Continuous control valve 2

FP@139_102_29_56\cFP-RTD-124@2 (Temperature Module)
\Channel 0 | TIT-301 | Temperature sensor 1
\Channel 1 | TIT-302 | Temperature sensor 2
\Channel 2 | TIT-303 | Temperature sensor 3

FP@139_102_29_56\cFP-AI-110@1 (Analog Input Module)
\Channel 2 | LIT-101 | Level sensor 1
\Channel 3 | LIT-102 | Level sensor 2
\Channel 4 | LIT-103 | Level sensor 3
\Channel 5 | PIT-201 | Flow rate sensor
\Channel 6 | PIT-202 | Pressure sensor 1
\Channel 1 | PIT-203 | Pressure sensor 2
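To make the addressing scheme in Table 1 concrete, the sketch below composes a device's full channel address from the module address and channel number. The dictionaries and helper function are our own illustration; the address strings follow the table:

```python
# Compose the full LabVIEW FieldPoint address of a device from the
# controller/module address and the channel number, following Table 1.

MODULES = {
    "DO":  r"FP@139_102_29_56\cFP-DO-400@7",   # digital output
    "AO":  r"FP@139_102_29_56\cFP-AO-200@5",   # analog output
    "RTD": r"FP@139_102_29_56\cFP-RTD-124@2",  # temperature
    "AI":  r"FP@139_102_29_56\cFP-AI-110@1",   # analog input
}

DEVICES = {  # tag: (module, channel) -- a few rows of Table 1
    "FZ-101":  ("DO", 0),   # Pump 1
    "FV-201":  ("DO", 2),   # Discrete valve 1
    "ZZ-301":  ("AO", 0),   # Continuous control valve 1
    "TIT-301": ("RTD", 0),  # Temperature sensor 1
    "PIT-201": ("AI", 5),   # Flow rate sensor
}

def channel_address(tag: str) -> str:
    module, channel = DEVICES[tag]
    return MODULES[module] + rf"\Channel {channel}"

print(channel_address("PIT-201"))
# FP@139_102_29_56\cFP-AI-110@1\Channel 5
```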

Once the physical system was connected and configured for LabVIEW communication, a virtual interface programmed in the LabVIEW graphical language provides a control panel for users to interact with the control process, both on the local server and over the Internet from a client computer, as shown in Figure 4. The virtual interface is called a virtual instrument (VI) in the LabVIEW environment. As shown in Figure 4, this virtual interface has five major areas for interacting with the real control process: a process simulator, a mode control panel with digital indicators, a stop button panel, a live video window, and waveform graphs for data tracking. The process simulator simulates the real wet control process using control icons and indicator icons. In manual mode, users can control each individual device of the real process by clicking on the control icons, such as the valves and motors.


Indicator icons display the status of those devices by changing their color to green or red. The mode control panel is used to change the control mode and also provides real data readings from the sensors. The stop button panel provides different buttons to stop major devices and the system itself. The current values of the tank levels, temperatures, incoming flow rate for the main tank, and incoming pressure for tank 1 and tank 2 are displayed by the graphic and digital indicators on the interface. A video window was integrated into the interface for users to monitor the real process through an Internet camera. Two waveform graph windows provide historical data tracking of the temperature and incoming pressure of each tank. Behind the front panel of the virtual interface is the block diagram, programmed to provide the data flow, mathematical operations, and logic operations of the virtual instrument. The application


Figure 4. The virtual interface programmed in LabVIEW

programming utilized Case Structure functions to provide three different modes: auto mode, manual mode, and supervision mode. A While Loop function was used to establish continuous data-retrieving and command-sending cycles from and to the physical wet process system. Part of the block diagram programmed for this interface is shown in Figure 5, which shows the live-video retrieval logic for the Internet camera. Real-time control of the real system from the virtual interface takes place through FieldPoint Ethernet communication and the communication between

the FP controller and the I/O modules. The FP controller, the FP OPC server, and the FP manager are installed and configured through the Measurement and Automation Explorer (MAX). The communication between the FP controller and the I/O modules is similar among the different types of network modules. The FP controller communicates with each module through the Ethernet module using the TCP/IP protocol, and uses the .iak file to determine which resource to communicate with. Each I/O module cycles through its internal routine of sampling all channels, digitizing the values, and updating the values on the module channel registers (buffer).

Figure 5. Block diagram programmed in LabVIEW


This cycle time is set for each module and is specified as the all-channel update rate. FieldPoint Ethernet communication uses an asynchronous architecture called event-driven communication: the network module automatically sends updates to a client when data changes. The server then caches the data from the I/O modules and uses it to respond to read requests from the virtual interface. The network module scans all I/O channels with subscriptions to determine whether a value has changed by comparing the current value to the cached value for each channel. If a change has occurred, the network module puts the difference between the two values in the transmit queue (a simplified code sketch of this cycle appears below). The FP server receives this information and sends an acknowledgement to the network module. The network module periodically sends and receives a time-synchronization signal so that it can adjust its clock and provide proper timestamping. When signals do not change over long periods, the client sends periodic re-subscribe messages to verify that the system is still online. LabVIEW's architecture allows the laboratory environment to be adapted easily for remote manipulation. The main concept in turning the locally controlled setup into a remotely controlled one is moving the user interface away from the physical setup. The local computer works as both the web server and the control server. A number of clients can log onto the server, but only one user at a time is granted the control right; the other users can monitor the control process from their remote front panels (VIs), while the user holding the control right can actually control the process from the panel. There is a waiting queue for users: when the control right becomes available, it is granted to the next user in the queue. The remote client can be any computer with Internet access. The only tool the client needs is a web browser with the LabVIEW Runtime Engine installed. The LabVIEW Runtime Engine is plug-in software


provided by National Instruments to support the web application. Normally it is installed automatically on the client's machine the first time the user tries to view a front panel. The client can browse to the webpage integrated with the remote control panel by entering the Uniform Resource Locator (URL) of the web server in the browser. The client only updates the screen and gets information from front panel interactions; the client cannot make changes to VIs. Execution happens only on the server machine. The local server hosts a LabVIEW web server, which publishes the VI to the Internet. Through the LabVIEW Runtime Engine (RTE), the local server communicates with the remote client: it controls the process according to the data from the remote control panel and sends the updated data back to the remote control panel. Remote clients are not required to have the whole LabVIEW package installed to view VIs for control and monitoring; they require only the LabVIEW RTE plug-in. The security of the control system is ensured by management on the server side. On the server side, a user's permission to access the LabVIEW control panel is managed by editing the list of allowed client IP addresses. Access to the LabVIEW control panel can also be limited to a specific domain or group of domains, and the virtual interface running on the server can be configured to be available to or hidden from certain users. During remote control and monitoring, the IP address of the active client is shown on the server. The lab instructor can always monitor the usage of the remote control panel and make sure only authorized clients have access permissions. During remote control, the lab instructor can take over the control right on the server side at any time in case of system malfunctions, user errors, or any unusual situations.
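The event-driven cycle described earlier (scan the subscribed channels, compare each fresh value with the cached one, and queue only the changes) can be outlined in a few lines. This Python sketch is a simplified illustration of the idea, not NI's implementation; for clarity it queues the new value itself, whereas the actual protocol transmits the difference between the two values:

```python
# Simplified sketch of event-driven FieldPoint communication: updates
# are queued for the server only when a channel's value has changed.
from collections import deque

class NetworkModuleSketch:
    def __init__(self):
        self.cache = {}                # last value sent, per channel
        self.transmit_queue = deque()  # pending updates for the FP server

    def scan(self, current_values: dict) -> None:
        """current_values maps channel address -> freshly sampled value."""
        for channel, value in current_values.items():
            if self.cache.get(channel) != value:
                self.transmit_queue.append((channel, value))
                self.cache[channel] = value

module = NetworkModuleSketch()
module.scan({"AI/Channel 2": 41.8, "AI/Channel 3": 63.0})
module.scan({"AI/Channel 2": 41.8, "AI/Channel 3": 63.5})
print(list(module.transmit_queue))
# [('AI/Channel 2', 41.8), ('AI/Channel 3', 63.0), ('AI/Channel 3', 63.5)]
```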


System Operation

Figure 6 shows the process simulator on the LabVIEW virtual interface. This process diagram simulates the real process of the system, with controls and indicators corresponding to the components of the real system. Three tank indicators represent the main tank, tank 1, and tank 2 in the wet process trainer, respectively. The green bar of each tank indicates the current water level, which can vary from 0 percent to 100 percent. In this diagram, the valves and pumps are controls for their corresponding parts in the real process. FV-201, FV-202, FV-203, FV-204, and FV-205 are controls for the discrete valves. These five discrete valves and the two pumps are controlled by ON/OFF Boolean signals, and their status can be changed by a mouse click. The color of each control represents its status: red represents OFF, while green represents ON. ZZ-301 and ZZ-302 represent the two continuous control valves in the real process. Their status is controlled by the digital control below the valve

icons. The value of each digital control can be changed from 0 to 100, in increments of 10, by clicking the arrows beside it. A value of 0 represents the closed status of the valve, and a value of 100 represents the fully open status. The color of ZZ-301 and ZZ-302 changes to green when the value of their digital control is equal to or greater than 10, indicating an ON status; otherwise, it changes to red, indicating an OFF status. This process simulator provides a direct visual view of the whole wet process system, helping students understand the process and identify the function of each component. It can be used in manual mode for control testing and in supervision mode for system maintenance and troubleshooting. Figure 7 displays the part of the virtual interface with the power control, mode controls, and digital indicators. The power control is used to turn the system on and off, and is green when the system is running. The six indicators to the right of the mode buttons are digital

Figure 6. Process simulator


Figure 7. Mode controls and digital indicators

indicators displaying the current values of the tank levels, liquid temperatures, incoming flow rate, and incoming pressures. The buttons with red labels are used to select a control mode for system operation. The virtual interface provides three different modes for process control: supervision mode, manual mode, and auto mode. Clients (students) are assigned different control capabilities depending on the selected mode. In supervision mode, all the valves and pumps in the wet process system can be turned on and off by users, regardless of the readings from the sensors. This mode can be enabled only for maintenance and troubleshooting purposes, and it is not available to remote clients for security reasons. In manual mode, status changes of valves and pumps depend on both the current situation of the system and the commands from the user. For example, if the water level of the main tank is lower than 20 percent or valve FV-201 is closed, pump 1 cannot be activated even if the user intends to do so by clicking on the pump icon in the process simulator (this rule is sketched in code at the end of this section). When certain conditions are met by the sensor readings (that is, when the situation is safe), the user can manually control any device of the wet process system. This mode is available to both local and remote users. It helps users test device status, get familiar with the control interface, and adjust control parameters when necessary. In auto mode, the system demonstrates a fully automated control process, driven by the water levels of each tank. The user can only control the continuous valves by changing the percentage values, without changing their on/off status.


The user can adjust the percentage of the continuous valves by clicking the toggle switch beside the digital control or by typing values into the digital controls directly. These three modes give students the flexibility to explore the wet control process while also ensuring the security needed to protect the physical setup. The LabVIEW interface panel provides three emergency stop buttons and one reset button in the stop button panel. The E-Stop button stops the whole system when pressed. The Stop 1 and Stop 2 buttons disable pump 1 and pump 2, respectively. The Reset button is used only in auto mode to reset the system when the main tank level reaches its limits. The stop button panel is available to both local and remote users. The lab instructor can also disable the system by clicking the stop button in the LabVIEW window on the server side, or by pressing the emergency stop button located on the physical system in any emergency situation. The live video window integrated into the LabVIEW virtual interface displays live video from the Internet camera to remote users. Remote users can clearly view the liquid levels and the light indicators from the remote interface. Seven green light indicators on the panel mounted on the physical system are wired to the five discrete valves and two pumps to indicate their on/off status. This helps remote users compare the status of devices on the virtual interface with those on the physical system when necessary for their operation.


This is a great tool for helping remote users observe the operation of the real system.
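The manual-mode safety logic described above reduces to simple interlock predicates. The sketch below encodes only the rules documented in this section (the function names are our own):

```python
# Manual-mode interlock sketch: a user command is executed only when the
# current sensor readings make it safe, per the rules described above.

def pump1_start_allowed(main_tank_level_pct: float, fv201_open: bool) -> bool:
    # Documented rule: pump 1 may run only when the main tank holds at
    # least 20 percent and valve FV-201 is open.
    return main_tank_level_pct >= 20.0 and fv201_open

def continuous_valve_color(setting_pct: int) -> str:
    # ZZ-301/ZZ-302 icons turn green at a digital-control value of 10 or
    # more (valve considered ON) and red otherwise.
    return "green" if setting_pct >= 10 else "red"

print(pump1_start_allowed(15.0, True))   # False: level too low
print(pump1_start_allowed(55.0, True))   # True
print(continuous_valve_color(10))        # green
```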

LABORATORY IMPLEMENTATION

Six labs have been developed and implemented in the computer-integrated manufacturing course as part of the required lab activities.

Lab 1: Introduction to the LabVIEW Environment

The purpose of this lab is to introduce the LabVIEW graphical programming environment. A simple single-axis motor control system is used to help students get familiar with the LabVIEW front panel and block diagram. First, students remotely access the single-axis motor control system developed in LabVIEW to control and monitor a stepper motor over the Internet. The

LabVIEW interface for this motor control system is shown in Figure 8. A LabVIEW program consists of two parts: the front panel and the block diagram. The front panel is used to design graphical interfaces; a control palette provides the various controls and indicators used on a control interface. The program that runs behind the graphical interface is called the block diagram; a function palette associated with it provides all kinds of functions and operations. In the second part of this lab, students are required to install the LabVIEW student version on their computers. Students open the LabVIEW front panel window and block diagram window to explore each control and indicator used in this simple program, getting familiar with the LabVIEW programming environment.

Figure 8. Single-axis stepper motor control interface


Lab 2: Programming the Motion Control

The purpose of this lab is to help students understand the major components of a motion control system and the functions of basic motion control VIs, and to gain skills in programming motion control systems. Students are required to develop a simple single-axis motor control program by following instructions provided by the instructor, send the program to the lab instructor, and run their own program for remote motor control. A board ID and axis number are assigned to each student for the programming process. The lab instructor configures the motor and monitors students' remote operations on the control server.

Lab 3: Introduction to Remote Process Control Using FieldPoint

The purpose of this lab is to help students understand the integration of input and output devices in FieldPoint, examine the devices and technologies available for remote process control applications, and explore the implementation of remote process control applications. Students are required to operate the virtual process control system through the Internet (as shown in Figure 2), examine each mode available on the virtual interface, and understand the mechanism of the system integration. The lab instructor monitors the control process while students use the virtual control interface to access the wet process setup in the lab.

Lab 4: Programming a Simple Process Control Program in FieldPoint

The purpose of this lab is to gain skills in designing and programming a process control system with digital input/output signals and Boolean operations. Students are required to design a process control system using part of the components available in


the physical system setup, including the valves and pumps; send their program to the lab instructor; and test their program through the Internet after it is loaded onto the control server.

Lab 5: Programming for Remote Measurements

The purpose of this lab is to help students understand the mechanism for retrieving analog data remotely, examine the devices and technologies available for remote data acquisition, and gain skills in developing a remote data retrieval system. Students are required to design a LabVIEW VI for remotely retrieving analog data from the temperature sensors, level sensors, and pressure sensors. These sensors are already installed as part of the wet process control system, as shown in Figure 3 and Table 1. The lab instructor sends the data sheets of the sensors to the students. Students then design the interface and program the block diagram, using the technical data provided by the lab instructor, to retrieve signals from the sensors and display correct data on their program interfaces. Students send their programs to the lab instructor and test them after they are loaded into the system.
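Because the analog input channels deliver raw current signals, part of the Lab 5 work is scaling those readings into engineering units with the help of the sensor data sheets. A hedged sketch, assuming a hypothetical sensor with a linear 4-20 mA output (the real spans come from the data sheets the instructor provides):

```python
# Linear scaling of a 4-20 mA sensor signal into engineering units.
# The 4-20 mA span and 0-100 percent range are illustrative assumptions;
# actual spans come from each sensor's data sheet.

def scale_4_20ma(current_ma: float, lo: float, hi: float) -> float:
    """Map 4 mA -> lo and 20 mA -> hi, clamping out-of-range input."""
    current_ma = min(max(current_ma, 4.0), 20.0)
    return lo + (current_ma - 4.0) * (hi - lo) / 16.0

# A level sensor spanning 0-100 percent: a 12 mA reading is half full.
print(scale_4_20ma(12.0, 0.0, 100.0))  # 50.0
```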

Lab 6: Design a Virtual Manufacturing Work Cell

This lab encourages students to apply the knowledge, skills, and experience gained from the lectures and lab activities to design a virtual manufacturing cell for remote control and monitoring. Students are required to use sensors for measurements and motors to simulate machine status. The machines in the manufacturing cell include three Computer Numerical Control (CNC) machines, three industrial robots, and one conveyor system. Based on what they have learned, students need to integrate the machines and devices into a connected network using a FieldPoint system, assign an input/output address to each


device and machine, design the control interface, and program the block diagram. Students then send their programs to the lab instructor and test them by operating and monitoring the sensors, valves, and motors of the process control system through the Internet. The educational value of these online lab activities has been assessed through student feedback, which shows that most students find the labs interesting, convenient to access, and easy to follow. They consider the labs necessary for understanding the concepts of mechatronic system integration, remote control, and the Human-Machine Interface (HMI). Students gain experience by exploring, operating, and programming the system. In addition to helping students understand the concepts and principles of remote control applications in manufacturing, these lab activities provide the following major benefits, collected from student feedback:

• Hands-on experience with LabVIEW programming.
• Great hands-on experience with online control and monitoring.
• A broader view of the future of industrial networking in implementing computer-integrated manufacturing.
• A convenient way to access lab facilities.
• A flexible schedule for working on lab activities.

However, some students did mention that time delays in the control process caused problems for their remote operation, and that the programming difficulties with remote measurements were frustrating, because testing is not available until a program has been completed. These problems could be solved in future system enhancements by implementing more web-based applications. Also, the online laboratory is not currently available 24/7 to online students, due to safety and security concerns. It does, however, provide convenient access to labs for online students: they can schedule their

lab activities in the evenings and on weekends with the lab instructor, when either the lab instructor or a lab assistant can sit by the server or monitor the process through the Internet.

ISSUES OF RELEVANCE TO THE LABORATORY

According to feedback from students, lab instructors, and faculty, several issues will affect the future development of this virtual laboratory: the influence of network bandwidth on information transmission for remote control, the user management system, and the limited ability of the LabVIEW web server to support online programming. In this virtual process control system, time delays exist in data transmission, especially when the client connects through a dial-in network connection. This is caused by the differing bandwidths of the networks involved. In a remote laboratory, not only parameter and administrative data but also audio and video data need to be transmitted over network connections, since web cameras bring live images of the physical setups to remote clients. It is therefore critical to use the available bandwidth efficiently; otherwise, time delays will mar the whole remote experimentation experience. Several networking techniques can be used to address this problem. For example, setting transmission priorities for different types of data can ensure that critical data is transmitted without delay. Another technique is data compression, but it involves a trade-off because of the additional delay resulting from the compression and decompression processes; this delay should be kept much smaller than the transmission delay. Data compression is especially useful for audio/video transmission, which involves a huge amount of data. At the same time, the server needs to adapt to the different


bandwidth requirements of remote clients. For example, some might be on the same campus LAN as the server, while others may connect from home over a dial-up line. When the laboratory was implemented in the graduate course, there were only 12 students in the class, and there were no complaints from students about the access method. If it is implemented in classes with more students, user management becomes an issue: not all students are willing to wait in the queue for their online lab activities. A user management system will be developed using Visual Studio 2005 to integrate an interface with an Access 2007 database and communicate with the LabVIEW web server to realize a user reservation system. This will increase flexibility for students, allowing them to log in and schedule their lab activities online. It will also provide a more secure way to manage users through permission assignments. The functionality of the LabVIEW web server is the second issue in the remote virtual laboratory. In this remote control process, the LabVIEW web server publishes the VIs to the Internet, but clients can only update and get information from front panel interactions; clients cannot make changes to VIs directly from the remote interface. For students to learn and practice programming in the LabVIEW environment, clients must have the capability to re-program the process and re-download their programs to the controller for testing purposes. As it stands, it takes time for students to complete one program-test cycle, and it requires extra work from the lab instructor, who must load students' programs onto the server manually. To achieve a remote programming function, the LabVIEW web server must be separated from the control server. Programming languages with powerful web-based functionality, such as JavaScript and VB.NET, are recommended for extending the LabVIEW web server. To better address students' requirements for virtual laboratories and improve the performance of the system, a form will be developed for students and lab instructors to evaluate


the performance of the system. As laboratories are implemented in more classes with more students, the evaluation and feedback will provide more ideas about system improvement and better implementation.

CONCLUSION

Remote virtual laboratories accessed through the Internet are feasible for long-distance applications. Experience from developing this virtual laboratory and implementing it in a computer-integrated manufacturing course shows that multiple aspects must be taken into consideration to obtain adequate performance from an online laboratory. These include the connection and communication between the web server and the physical setup (machines and processes), and the connection and communication between the web server and the Internet. As a next step in adding more systems to this laboratory, data acquisition, motion control, FieldPoint controllers, Programmable Logic Controllers (PLCs), and industrial robots will be integrated to achieve a virtual flexible manufacturing cell that can be operated and monitored through the Internet. Technologies for system integration and web-based human-machine interfaces (HMI) will need to be applied in future development. The future system will also provide an ideal research platform for studying the performance of advanced web-based technologies in manufacturing environments and the efficiency of system integration for improving flexible manufacturing systems.

REFERENCES

Asumadu, J. A., & Tanner, R. (2006). A service-oriented educational laboratory for electronics. Industrial Electronics, 56(12), 4768-4775.


Carnevale, D. (2002). Engineering accreditors struggle to set standards for online lab sessions. The Chronicle of Higher Education, 1-3. Retrieved July 25, 2007, from http://chronicle.com/free/2002/02/2002020101u.htm

Chen, S., Chen, R., Ramakrishnan, V., Hu, S. Y., & Zhang, Y. (2002). Development of remote laboratory experimentation through Internet. CMSU online library. Retrieved from http://cyrano.cmsu.edu:2048

Cooper, M., Donnelly, A., & Ferreira, J. M. (2002). Remote controlled experiments for teaching over the Internet: A comparison of approaches developed in the PEARL project. The ASCILITE Conference 2002, Auckland, New Zealand. UNITEC Institution of Technology, M2D.1-M2D.9.

Deniz, D. Z., Bulancak, A., & Ozcan, G. (2003). A novel approach to remote laboratories. 33rd Annual Frontiers in Education (FIE'03), (pp. T3E8-12).

Gillet, D., Latchman, H. A., Salzmann, C., & Crisalle, O. D. (2003). Hands-on laboratory experiments in flexible and distance learning. The Journal of Engineering Education.

Gurocak, H. (2001). E-lab: Technology assisted delivery of a laboratory course at a distance. Proceedings of the 2001 ASEE Annual Conference.

Irwin, G. W. (2005). Nonlinear identification and control of a turbogenerator - An on-line scheduled multiple model/controller approach. IEEE Transactions on Energy Conversion, 20(1), 237-245. doi:10.1109/TEC.2004.827708

Kozono, K., Akiyama, H., & Shimomura, N. (2003). Development of distance real laboratory system. CMSU online library. Retrieved from http://cyrano.cmsu.edu:2048

Liou, P. S., Soelaeman, H., Leung, P., & Kang, J. (2002). A distance learning power electronics laboratory. CMSU online library. Retrieved from http://cyrano.cmsu.edu:2048

Salzmann, C., Latchman, H. A., Gillet, D., & Crisalle, O. D. (2003). Requirements for real-time laboratory experimentation over the Internet. The Journal of Engineering Education.

Saygin, C., & Kahraman, F. (2004). A web-based programmable logic controller laboratory for manufacturing engineering education. International Journal of Advanced Manufacturing Technology, 24, 590-598. doi:10.1007/s00170-003-1787-7

Thamma, R., Huang, L. H., Lou, S., & Diez, C. R. (2004). Controlling robot through Internet using Java. Journal of Information Technology, 20(3).

ADDITIONAL READING

Hua, J., & Ganz, A. (2003). A new model for remote laboratory education based on next generation interactive technologies. Frontiers in Education Conference.

Hutzel, W. (2001). Creating a virtual HVAC laboratory for continuing/distance education. International Conference on Engineering Education.

KEY TERMS AND DEFINITIONS

FieldPoint: A proprietary method for interfacing devices to computers, developed by National Instruments; it is very similar in principle to the fieldbus interfacing concept used by many process control equipment suppliers.

LabVIEW: NI LabVIEW is a graphical development environment for rapidly creating flexible and scalable test, measurement, and control applications. It is the major programming tool used here for developing virtual control interfaces.


Process Control: Process control is a statistics and engineering discipline that deals with architectures, mechanisms, and algorithms for controlling the output of a specific process.

VIs: Virtual instruments. In this chapter, VIs are the graphical user interfaces programmed in the LabVIEW environment for the purposes of motion control and process control.

This work was previously published in Internet Accessible Remote Laboratories: Scalable E-Learning Tools for Engineering and Science Disciplines, edited by Abul K.M. Azad, Michael E. Auer and V. Judson Harward, pp. 1-17, copyright 2012 by Engineering Science Reference (an imprint of IGI Global).


Section 4

Utilization and Application

This section discusses a variety of applications and opportunities available that can be considered by practitioners in developing viable and effective Industrial Engineering programs and processes. This section includes 14 chapters that review topics from case studies in Cyprus to best practices in Africa and ongoing research in the United States. Further chapters discuss Industrial Engineering in a variety of settings (air travel, education, gaming, etc.). Contributions included in this section provide excellent coverage of today’s IT community and how research into Industrial Engineering is impacting the social fabric of our present-day global village.


Chapter 42

Using Serious Games for Collecting and Modeling Human Procurement Decisions in a Supply Chain Context

Souleiman Naciri, Laboratory for Production Management and Processes, Ecole Polytechnique Fédérale de Lausanne, Switzerland

Min-Jung Yoo, Laboratory for Production Management and Processes, Ecole Polytechnique Fédérale de Lausanne, Switzerland

Rémy Glardon, Laboratory for Production Management and Processes, Ecole Polytechnique Fédérale de Lausanne, Switzerland

ABSTRACT

Computer simulation is often used for studying specific issues in supply chains or for evaluating the impact of eligible design and calibration solutions on the performance of a company and its supply chain. In computer simulations, production facilities and planning processes are modeled in order to correctly characterize supply chain behavior. However, very little attention has so far been given in these models to human decisions. Because human decisions are very complex and may vary across individuals or over time, they are largely neglected in traditional simulation models, which restricts the models' reliability and utility. The first step in including human decisions in simulation models is to capture how people actually make decisions. This chapter presents a serious game called DecisionTrack, which was specifically developed to capture the human decision-making process in operations management (the procurement process). It captures both the information the human agent consults and the decisions he or she makes.

DOI: 10.4018/978-1-4666-1945-6.ch042



INTRODUCTION

In fast-paced markets, companies try to improve product service level and quality while decreasing costs in order to gain market share. Whereas some companies implement solutions to improve performance without prior verification, a wiser approach is to use computer simulation to evaluate the impact of potential solutions on the performance of the company and its supply chain. In these simulations, production facilities and planning processes are modeled in order to capture company behavior. The main drawback of this approach, however, is that little attention is given to the human decisions that take place in this context. Human decisions are very complex, varying across individuals and even over time for a single individual. Traditional simulation models thus neglect this component, but this in turn limits the models' reliability; in fact, modeled companies often exhibit different behavior than their real-world counterparts. The challenge is to capture how human decisions are made, use this knowledge to develop reliable human decision-making models, and then implement these models in computer simulations. For this purpose, the first task is to capture human decisions as they are actually made rather than how they should be made. Capturing actual human decisions is not straightforward, however, because people are not very good at verbalizing what they know (Vermersch, 2006). The utility of the conventional simulation approach for studying system behavior has been proven (Robinson, 2005), even though it does not involve active user participation during simulation runs. However, for the purpose of knowledge elicitation (Edwards et al., 2004) as well as user training or education, a more advanced simulation technique that integrates visual simulation and user interaction (Van der Zee & Slomp, 2009) is a promising approach. This chapter presents a serious game called DecisionTrack that was specifically designed to

capture the human decision-making process in a procurement context. The main motivation for developing the game is to take full advantage of simulations that include active user interaction for the purpose of quantitatively analyzing decision-making behavior. This serious game captures both the information consulted by the player and the decision he or she makes. This is done repetitively during the game, because an operational (procurement) decision is required from the player on a daily basis. The result is a series of decision versus consulted-information pairs that can later be used to develop human decision-making models. Subsequently, the outputs of the game are analyzed using four metrics that characterize each player's behavior in terms of data consultation and decisions. In the rest of this chapter, we lay out the basic concepts of supply chains, decision-making in a supply chain context, and previously developed serious games (Background); describe current weaknesses and define the goal of the research (Motivation & Goals); describe the details of our serious game (DecisionTrack Game); outline our analysis and interpretation approach (Analysis and Interpretation); illustrate a case study (Application Case); discuss the strengths, weaknesses, and further challenges of our serious game (Issues & Controversies); outline potential further developments (Research Directions); and draw final conclusions (Conclusions).

BACKGROUND

1. Supply Chain

In the current global economy, enterprises do not act as isolated companies, but are integrated in complex networks involving many entities (manufacturing, transportation, warehousing, etc.) that are linked by complex material flows (such as products and components) and information flows (such as customer orders or production orders).


Figure 1. Schematic representation of an enterprise network (supply chain)

This is schematically illustrated in Figure 1. Terms such as 'Supply Chain' or 'Value-Adding Network' are used to describe these complex networks; for simplicity, we will use the term 'supply chain'. Within a manufacturing company (one constitutive entity of a supply chain; see "Manufacturer" in Figure 1), the main material and information flows can be schematically described as illustrated in Figure 2.

Customer orders are received and entered into the company's order book. The order book serves as the basis, together with market demand forecasts, for creating a plan called the Master Production Schedule (MPS). The MPS contains confirmed and expected customer orders, listed according to their desired delivery dates. Based on the MPS, a procedure is run to anticipate the need for product assembly, part production, and component procurement. This procedure, called Material Requirements Planning (MRP), is widely used in repetitive manufacturing.

Figure 2. Schematic representation of the material and information flows within a manufacturing company
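To make the MPS-to-MRP step concrete, the sketch below performs a bare-bones netting-and-offsetting calculation for a single component. The quantities, the on-hand stock, and the one-period lead time are invented for illustration; real MRP logic also handles lot sizing, safety stock, and scheduled receipts:

```python
# Minimal MRP netting sketch for one component: net each period's gross
# requirement against on-hand stock, then offset the resulting planned
# order by the lead time. All numbers are illustrative.

def mrp_net(gross_req, on_hand, lead_time):
    planned_orders = [0] * len(gross_req)
    for t, demand in enumerate(gross_req):
        from_stock = min(on_hand, demand)
        on_hand -= from_stock
        net = demand - from_stock
        if net > 0:
            release = max(t - lead_time, 0)  # offset by the lead time
            planned_orders[release] += net
    return planned_orders

# Five periods of gross requirements, 40 units on hand, 1-period lead:
print(mrp_net([20, 30, 25, 0, 15], on_hand=40, lead_time=1))
# [10, 25, 0, 15, 0]: release 10 in period 0 to cover period 1's
# shortfall, 25 in period 1 for period 2, and 15 in period 3 for period 4.
```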


2. Decision Making in a Supply Chain Context

In each entity of a supply chain, people are constantly making decisions; these decisions are the cornerstone of company management and have important, direct consequences on company performance. Human decisions can be classified, according to the time horizon affected by the decision, as strategic (long-term), tactical (medium-term), or operational (short-term). In supply chain management, strategic decisions are tied to the network's configuration (for example, the selection of a manufacturing location). Tactical decisions involve the calibration of the network's main management parameters (for example, the level of safety stocks). Finally, operational decisions are related to the execution of repetitive tasks (for example, launching production orders). Simulations are often used in tools that support strategic and tactical decisions. If these simulation models are to help humans make strategic and tactical decisions, they must be able to reliably reproduce the behavior of the actual supply chain. But the supply chain is strongly affected by the operational decisions continuously being made by humans. Paradoxically, operational human decision-making has hardly been taken into account in supply chain simulation models,

thus limiting the reliability of these tactical and strategic decision-making tools. Operational decisions in a supply chain context are characterized by their repetitive nature; i.e., the same type of decision (for example, launching a production order) must be made frequently (for example, daily). The decision situation may change for each decision occurrence, however. The decision made is thus dependent on both the decision context and the human decision-making behavior, as schematically represented in Figure 3. Many operational decisions are made in a supply chain context, from shipping to planning and procurement. In particular, one output of the MRP procedure described above is a set of lists of time-paced propositions for launching assembly, production, and procurement orders. A human decision is then required to execute the MRP-proposed orders. This decision involves confirming the MRP propositions, modifying them (date and/or quantity), or grouping some of them. In the specific case of the procurement process considered here, the operational decision can be illustrated as shown in Figure 4. The decision elicitation problem can therefore be formalized as follows: each planner $j$ makes decisions $D_{ij}$ at time $i$, according to the decision context he/she perceives at time $i$ ($DC_{ij}$).

Figure 3. Schematic representation of an operational human decision making process in a supply chain context


Figure 4. Illustration of the procurement decision-making process

encompasses the updated proposed procurement plan (MRPi), as well as the collected information (CIij) gathered by planner j at time i. Thus, the decisions Dij made by planner j at time i can be expressed as:

Dij = fj(DCij)  (1)

Dij = fj(MRPi, CIij)  (2)

Consequently, the elicitation process consists of identifying the decision-making behavior fj of planner j in order to predict planner j's decisions according to the information at hand. The first task in identifying fj is to capture the decision inputs and outputs:

• Decision inputs:
  ◦ The MRP-proposed orders (MRPi), which create the decision alternatives;
  ◦ The information collected by planner j (CIij).
• Decision outputs: the MRP propositions as modified and validated by the planner (Dij).
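To make this pairing concrete, one record per decision occurrence suffices. The sketch below is a hypothetical data structure (the class name and field types are assumptions, not the chapter's code); eliciting fj then amounts to fitting a model that maps (mrpProposals, consultedInfo) to decisions over many such records.

import java.util.List;

public class DecisionRecord {
    public final int time;                   // decision occurrence i
    public final String plannerId;           // planner j
    public final List<String> mrpProposals;  // MRPi: proposed orders, i.e., the alternatives
    public final List<String> consultedInfo; // CIij: the information the planner looked at
    public final List<String> decisions;     // Dij: orders as confirmed, modified or grouped

    public DecisionRecord(int time, String plannerId, List<String> mrpProposals,
                          List<String> consultedInfo, List<String> decisions) {
        this.time = time;
        this.plannerId = plannerId;
        this.mrpProposals = mrpProposals;
        this.consultedInfo = consultedInfo;
        this.decisions = decisions;
    }
}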

3. Serious Games and Dynamic Decision Making

Serious games have been used for several decades for studying dynamic decision-making (DDM). DDM encompasses the decision-making processes that take place within an environment characterized by dynamics, complexity, opaqueness and dynamic complexity (Gonzalez et al., 2005). In this paper, the authors review the ten most significant serious games that have been developed since the 1980s. One of these is of particular interest here, as it involves operations and supply chain management. The Beer Game was initially developed in the 1960s as a board game before being converted into a serious game in the late 1980s by Sterman (Sterman, 1989). The Beer Game has four entities (factory, distributor, wholesaler and retailer) that provide the market with beer. The goal is to achieve the highest service level at the lowest cost. Players are separated into four groups, each group being in charge of the replenishment decisions of a single supply chain entity. This game was popular with users (master's students and managers) because it illustrates supply chain dynamics and the virtues of collaboration across supply chain partners very well. However, one disadvantage of this game is that it is difficult


to use it to understand player decision-making behavior. Indeed, the Beer Game does not track the information consulted by players, and thus the causal relations between the system state and the decisions made by the players cannot be explicitly rendered. This drawback is also observable in a simulation game developed to study maintenance decisions in Ford assembly lines (Robinson et al., 2005). In this game, a production process is simulated until a breakdown occurs, at which point the simulation stops and players (maintenance operators) are presented with the breakdown characteristics. They are then asked to make a decision among a discrete set of alternatives (repair now, repair later, ask somebody else to do it, etc.). Each breakdown instance is recorded, including both the breakdown characteristics and the corresponding decision. The authors then used this set of instances to model and reproduce maintenance operator decision-making behavior. However, by presenting the entire set of breakdown characteristics to maintenance operators at each breakdown, the assumption is made that all of the breakdown characteristics are equally important in the decision-making process, whereas some of them would probably not have been consulted if the operator had had to look for relevant information by himself. Serious games are thus a very efficient way of representing complex and dynamic decision-making contexts (time dependency, feedback loops, endogenous and exogenous variations), but they often fail to handle a very important phase of dynamic decision making, the situation assessment, which can vary greatly from person to person.

MOTIVATION AND GOALS

Conventional data collection techniques such as observation, questionnaires and interviews are not well suited for the elicitation process described above (Naciri, 2010). Several hurdles (the time required, the difficulty of designing suitable questionnaires, the difficulty interviewees have in explicitly describing their actions and decisions) make it difficult to rely on such techniques to collect relevant data for analyzing and modeling human decision-making behavior. On the contrary, serious games seem better suited for the elicitation of human decisions in an operational context. They have several demonstrated benefits:

• First, serious games require players to "act" and not to "explain"; therefore, "action satellites" can be avoided. According to Vermersch (2006), "action satellites" refer to the four dimensions of the action (context, judgments, theoretical knowledge, goals) that interviewees often cite instead of discussing the action (or decision) itself;
• Second, in serious games several player actions can be recorded, giving results at a faster pace than conventional techniques;
• Third, it is possible to analyze how decisions made at time t influence the environment at time t+1, making it possible to capture the dynamic aspects of decisions and, in particular, the notion of feedback;
• Finally, serious games create an environment in which player actions can be recorded, along with the context in which decisions occur. Thus, once the simulation is finished, it is possible to pair the simulation context with the decisions that were made.

Participatory simulation via serious games thus appears to be a well-adapted tool for capturing the various dimensions of human decision-making, and hence for obtaining a quantitative understanding of human decision-making behaviors. The goal of this work is to develop a serious game that will elicit operational human decisions



in a supply chain context, more specifically, in procurement. Because the objective is to reliably generate human decision-making situations that are representative of what happens in industry, the game must fulfill the following criteria:

• Interfaces similar to typical industrial tools: this ensures that players (procurement agents) act (make decisions) as they would in their working environments. It thus ensures that the information collected in the virtual environment is representative of procurement agent behavior in the "real world."
• Available information similar to that in actual ERP systems: this ensures that players are familiar with the information at hand in the simulator and that no specific training is required to explain how the information is displayed.
• A decision pace that avoids player stress: one of the main drawbacks of real-time serious games is that the pace is often accelerated, giving the player little time to make decisions. Consequently, players must control the simulation pace in order to have enough time to make relevant decisions.

The main function of the developed serious game, DecisionTrack, is to collect data to identify:

1. What information procurement agents are interested in, and
2. What kinds of decisions procurement agents make.

According to Van der Zee & Slomp (2009), a framework for game design has four phases:

• Initialization: definition of the scope and objectives of the game.
• Design: detailed development of the basic ideas formulated in the initialization phase. The outcome of this phase is a simulation game concept.
• Construction: construction of the game using software or other physical elements.
• Operation (game running): actual use of the game, which may include a test of the game for its intended purpose.

We have covered the "Initialization" phase of the game design framework in this section. The next section explains the development methodology of the DecisionTrack serious game.

DECISIONTRACK GAME: FROM DESIGN TO IMPLEMENTATION

1. Definition of Game Concept

The DecisionTrack game was designed according to the following objectives:

1. To provide the player with a realistic decision making context (similar to the one he/she is usually involved with),
2. To allow the player to consult the information he/she feels relevant,
3. To allow the player to make the decisions he/she feels relevant,
4. To capture player actions (consulted information and decisions) and to save this information as readable data (a log file).

The underlying motivation behind these four objectives is to build a decision-making context that is similar to the one players are used to working with, in order to capture decision-making situations that are as close as possible to those they encounter in reality. This can be done not only by creating realistic decision-making situations, but also by not constraining players to limited sets of data or decisions. Finally, keeping track of players' actions makes it possible to subsequently identify the relationship between the decision-making context and the decision made.

2. Virtual Decision-Making Environment

The virtual decision-making environment chosen for this game is a two-tier supply chain in which the player is in charge of the procurement process. His/her task is to modify (if needed) and validate the procurement orders based on the MRP propositions, as described in the previous section. In order to provide players with a familiar decision-making environment, a virtual supply chain with a commonly used production management policy (make to stock) is used. It is illustrated in Figure 2. Several entities representing departments of the company, such as production, warehousing and planning (i.e., MPS, MRP and the procurement process under study), are included in this virtual environment. It also includes external entities such as customers and suppliers. The circles (containing the letter "i") on top of the entity icons in Figure 2 indicate that information concerning the corresponding entity is available for player consultation.

Table 1 summarizes the game elements that are included in DecisionTrack. The following subsections describe in detail how the game elements are constructed.

3. DecisionTrack Interfaces

The game is developed in Java 1.6 (Java SE 6), making full use of the built-in Java Swing libraries to implement complex user interactions. The main goal of the DecisionTrack interfaces is to provide information about the company and its supply chain. This information, which is modified daily, enables players to update their knowledge of the system.

Displayed Information

The relevant information to be displayed was identified by conducting an analysis of the procurement process with experts in the procurement field. This analysis provided valuable insight into the kinds of information procurement agents consult, and the kinds of decisions they make. Two ERPs (Enterprise Resource Planning) widely used in Switzerland, SAP (SAP, 2011)

Table 1. Summary of DecisionTrack main game elements

Game element | Definition
Model and Scenarios | A 2-tier supply chain; context: a manufacturing company, with MRP information to consult; decision-making options: accept the MRP propositions, or postpone, anticipate or group orders by modifying the order launching date
Game process | 1. Presentation of the game to each player; 2. Each player plays the game; 3. Data collection and analysis; 4. Player decision-making modeling
Events | New customer orders, new forecasts, component deliveries from suppliers (on-time and late deliveries)
Periods | One period per simulation day; a complete run with a single player lasts at least 30 periods
Roles | Procurement agent
Results | Performance indicators such as inventory levels and service level
Indicators | MRP data, supplier-related data, customer-related data, and the above-mentioned performance indicators (see "Results")
Symbols, Materials | Various user interfaces (windows) that mimic real ERP systems in companies



and Proconcept ERP (Proconcept, 2011) were investigated, in order to design DecisionTrack’s interfaces to mimic the interfaces procurement agents are accustomed to working with.

Tracking Methods

As stated before, players can navigate through the DecisionTrack interfaces in order to update their knowledge of the decision context. Because different people search for information in different ways, it is essential to track which specific information is consulted. This is accomplished by isolating the information related to each supply chain entity in a separate tab of the game window. Each tab is associated with a mouse listener that is activated once the tab is selected. The record of the activated tab is stored in a "log file". In this way it is possible to track which supply chain entity a player is interested in. Because several pieces of information may be needed to describe a supply chain entity, a single interface may not have enough space to correctly display the whole set of information. In these cases, tabs may contain two or more sub-tabs, between which the entity-related information is split. In cases where a single sub-tab contains heterogeneous information, checkboxes are added to help track the consulted information. These checkboxes are unchecked by default, which makes the corresponding information unavailable. When a player is interested in a piece of information, he/she checks the corresponding box, revealing that information. For sub-tabs and their checkboxes, the technique for tracking consultations is similar to the one described for the tabs: using mouse listeners attached to each graphic element, the name and the value (when relevant) of the consulted information is recorded in the log file. Tabs, sub-tabs, information panels and checkboxes are organized in a hierarchical way, as illustrated in Figure 5.
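A minimal Swing sketch of this tracking mechanism could look like the following. It is an illustrative reconstruction, not the chapter's actual code: the class name, the log-record format, and the use of a ChangeListener for tab selection (standing in for the mouse listener the text mentions, with equivalent effect for tab selection) are all assumptions.

import java.awt.event.ItemEvent;
import java.awt.event.ItemListener;
import java.io.FileWriter;
import java.io.IOException;
import java.io.PrintWriter;
import javax.swing.JCheckBox;
import javax.swing.JComponent;
import javax.swing.JTabbedPane;
import javax.swing.event.ChangeEvent;
import javax.swing.event.ChangeListener;

public class ConsultationLogger {
    private final PrintWriter log;

    public ConsultationLogger(String logFilePath) throws IOException {
        // Auto-flushing writer: one line is appended to the log file per consultation.
        log = new PrintWriter(new FileWriter(logFilePath, true), true);
    }

    /** Appends one record: timestamp, consulted element, and its value (if any). */
    public void record(String element, String value) {
        log.println(System.currentTimeMillis() + ";" + element + ";" + (value == null ? "" : value));
    }

    /** Logs every tab selection, i.e., which supply chain entity the player consults. */
    public void track(final JTabbedPane tabs) {
        tabs.addChangeListener(new ChangeListener() {
            public void stateChanged(ChangeEvent e) {
                int i = tabs.getSelectedIndex();
                if (i >= 0) record("tab:" + tabs.getTitleAt(i), null);
            }
        });
    }

    /** Checkboxes are unchecked by default, hiding their panel; checking one
     *  reveals the information and logs its name and current value. */
    public void track(final JCheckBox box, final JComponent infoPanel, final String value) {
        infoPanel.setVisible(false);
        box.addItemListener(new ItemListener() {
            public void itemStateChanged(ItemEvent e) {
                boolean selected = (e.getStateChange() == ItemEvent.SELECTED);
                infoPanel.setVisible(selected);
                if (selected) record("checkbox:" + box.getText(), value);
            }
        });
    }
}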

Figure 5. Illustration of the hierarchical structure of the information within a DecisionTrack window



Interfaces for Collecting Data

The central part of Figure 6 illustrates the window corresponding to the Customers tab. This interface provides various pieces of information. The first information panel (1) shows the evolution of the service level (the ratio of orders delivered on time) since the beginning of the simulation. The second information panel (2) contains only the current service level, and the third panel (3) contains the list of pending orders (orders that have not yet been delivered). To get to this specific interface, the player must go through the following steps:

1. Select the Customers tab
2. Select the Service level sub-tab
3. Activate the Service level checkbox in the upper information panel
4. Activate the Service level checkbox in the second information panel

The right-hand side of Figure 6 shows the corresponding data recorded in the log file.

Interfaces for Making Decisions

All the interfaces except Purchaser contain information for consultation that cannot be directly modified by the player. The information contained in the Purchaser tab (the proposed procurement orders generated by the MRP algorithm) can be updated by the player with or without modifications. The Purchaser interface is illustrated in Figure 7. As shown in the bottom panel, several procurement orders with a proposed launching date and quantity are displayed. According to his/her knowledge of the system state, the player can modify the propositions by clicking on an order, and then on the line corresponding to the new launching date. Four kinds of modifications can be made (a sketch of how they can be told apart follows the list):

• Anticipation (the new order date is closer to the current simulation date than the former one),
• Postponement,
• Grouping (when the new launching date of the order corresponds with the launching date of existing orders),
• Do Nothing (no modification is made to the proposed order).

Once the player has chosen one of these alternatives, he/she can click on the padlock to validate the order and send it to the supplier. All the decisions made by the player are recorded in the log file, so that the correspondence

Figure 6. Illustration of a specific interface (left) and the corresponding log file records (right)



Figure 7. Snapshot of the “purchaser” tab

between player knowledge of the system state and the decisions he/she has made can be identified.

4. DecisionTrack Scenario

Game scenarios refer to the evolution of game variables across time. When designing a new game, two categories of variables must be differentiated. The first category encompasses variables that cannot be modified by the player: exogenous variables. The second category relates to variables that can be modified by the player through his/her decisions: monitoring variables. The latter help the player monitor the impact of his/her decisions on company performance. The exogenous variables are identified through a literature review and discussions with domain experts. These variables must vary across time to create decision-making situations that require specific attention. In this research, the goal is to identify how procurement agents make decisions according to the evolution of the environment (suppliers and customers). Table 2 provides the list of all exogenous and monitoring variables used in this study. Among the monitoring variables, it is worth noting that the service level depends on the way the player updates the proposed procurement plan. The selected exogenous variables that vary across the simulation are those related to the supply chain environment, such as component delivery times, market behavior (customer orders) and forecasts (see Table 2). By making the above-mentioned exogenous variables change across time in the game scenario, a difference appears between the planning algorithm


Table 2. Exogenous and monitoring variables

Displayed information | Location | Exogenous variable | Monitoring variable
Customer orders and forecasts | Customers tab, Status sub-tab | YES | NO
Actual and predicted component delivery time | Supplier tab, Graphics sub-tab | YES | NO
MRP tables | MRP tab, Status sub-tab | NO | YES
Inventory levels | Stock tab, Status and Graphics sub-tabs | NO | YES
Work in process | Production tab, Status sub-tab | NO | YES
Service level | Customers tab, Service Level sub-tab | NO | YES

propositions (which are based on theoretical MRP parameters) and the current simulated situation. Such gaps create critical decision-making situations that require the player to make decisions that impact company performance. These gaps between the planning algorithm's recommendations and the evolution of the company's environment correspond to realistic and very common situations that routinely appear in companies where the MRP parameters are not updated according to variations in the environment.

Supplier Delivery Time Variations

Supplier delivery times are set in the scenario to differ from the theoretical delivery time introduced in the MRP algorithm. In this way, the actual delivery time may be either 1) longer than the theoretical one (which leads to delivery delays), or 2) shorter than the theoretical one (which leads to deliveries that enter the stock earlier than expected). The scenario is set so that situation 1) occurs most often, encouraging the player to make decisions instead of choosing the status-quo alternative. Supplier-predicted delivery time data are provided in the form of a table in the Supplier tab. In addition to the predicted delivery time, the actual delivery time is reported after each order delivery. All three delivery times (theoretical, predicted and actual) are reported as shown in Figure 8.
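One way such a biased scenario can be generated is sketched below. It is purely illustrative: the chapter gives no generator, and the 60/20/20 split is an assumed bias, not a reported parameter.

import java.util.Random;

public class DeliveryScenario {

    /** Draws an actual delivery time biased toward delays, so that case 1
     *  (longer than the theoretical MRP lead time) occurs most often. */
    public static int actualDeliveryTime(Random rng, int theoreticalTime) {
        double u = rng.nextDouble();
        if (u < 0.6) return theoreticalTime + 1 + rng.nextInt(3); // late delivery (case 1)
        if (u < 0.8) return Math.max(1, theoreticalTime - 1);     // early delivery (case 2)
        return theoreticalTime;                                   // on-time delivery
    }
}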

Forecast Patterns

In addition to customer orders, which represent the actual impact of the market on the "central" company, forecasts are designed to anticipate market requirements. The remainder of this section discusses forecast patterns (the evolution of the forecasted demand over the planning periods) and whether they adequately predict customer order patterns. Forecasted demand and customer order patterns are designed so as to lead to either overestimation (Delta > 0) or underestimation (Delta < 0) of actual demand, and the scenario confronts the player with both Case 1 (Delta > 0) and Case 2 (Delta < 0). [Text missing: the remainder of this chapter is not present in this extraction. The text resumes partway through the chapter "Production Information Systems Usability in Jordan"; only the tail of a table of respondent demographics survives: "> 1000": 9 factories; No information: 27; Total: 74 factories.]

Production Information Systems Usability in Jordan

Table 2. Detailed type of information systems used in Jordanian factories

# | Item (question asked) | Number of factories
1 | Do you use any type of information technology? | 66
2 | Do you use accounting information systems? | 60
3 | Do you use special sales systems? | 42
4 | Do you use production information systems? | 45
5 | Do you use inventory and warehousing systems? | 47
6 | Do you use computer-aided design systems? | 23
7 | Do you use human resource and salary systems? | 52
8 | Do you use quality assurance/control systems? | 36
9 | Do you use distribution systems? | 25
10 | Do you use procurement information systems? | 37
11 | Do you use manufacturing aiding systems? | 25

systems, 13 factories employ distributed systems that are nevertheless integrated together. Results also indicated that 20 factories employed enterprise systems that are utilized in many functions and tasks. Finally, only 8 factories have extended systems that reach suppliers, distributors and customers (SCMS or ERP). The second part of the survey included items related to the systems used in these factories. 47 managers indicated that they are interested in extending and using part or all of the systems listed in Table 2. Also, the distribution of the source and type of these systems was as follows: 32 factories used locally designed systems, and 33 factories used ready-made imported systems (off-the-shelf systems). One of the objectives of this study was to examine the relationships between variables like computer diffusion, sales and employee size. This was done through the correlations between those variables, to test whether any relationship exists between them. Also, to relate demographics to the main objective of this work, we estimated a new construct based on the count of the number of systems employed by the firm (for example: the manager of firm XYZ checked yes for using three systems (accounting information systems, HR systems and sales systems), so the total number of systems employed was three). This number was correlated against each of the variables mentioned. The correlations matrix is depicted in Table 3. The results indicated significant correlations between the three variables and the total number of systems deployed. Also, significant correlations existed between all three variables.
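For reference, the coefficients reported in Table 3 are bivariate correlations over the 74 factories. Assuming the usual Pearson statistic (the chapter does not name the estimator explicitly), each entry is

\[ r_{xy} = \frac{\sum_{k=1}^{n}(x_k-\bar{x})(y_k-\bar{y})}{\sqrt{\sum_{k=1}^{n}(x_k-\bar{x})^2}\,\sqrt{\sum_{k=1}^{n}(y_k-\bar{y})^2}} \]

computed with n = 74.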

The Intentions of Managers to Adopt PIS

The second major objective of this study was to explore managers' intentions to adopt or continue using PIS, utilizing Rogers' IDT model. In a separate study, the researcher employed a

Table 3. Correlations matrix of the demographics against the number of systems employed

 | Number of employees | Sales size | Number of computers
Number of employees | 1 | |
Sales size | 0.551** | 1 |
Number of computers | 0.636** | 0.810** | 1
Total number of systems | 0.437** | 0.394** | 0.398**

** Correlation is significant at the 0.001 level



survey that introduced a description of PIS and asked questions related to the different constructs of the model. The main source for the items used in the study was Moore and Benbasat's (1991) work. The items were translated into Arabic and tested for language with 10 experts. The nature of this exploratory study makes such a method convenient, and the size of the data allows for such a test. Data were collected on a seven-point Likert scale, where 1 indicates strong disagreement with the statement and 7 indicates strong agreement. The total number of surveys collected was 91, from factories in Al-Hasan Industrial Zone (total number distributed = 100). The survey included 3 items measuring rate of adoption, 5 items measuring relative advantage, 3 items measuring compatibility, 3 items measuring image, 2 items measuring voluntariness, 2 items measuring trialability, 2 items measuring visibility, 4 items measuring results demonstrability, and 4 items measuring ease of use. Table 4 shows descriptive statistics related to the constructs. Correlations between all variables depicted in the IDT model were also calculated; they are shown in Table 5. All correlations were significant except two, as shown in the matrix (with different levels of significance). Also, all variables were entered to calculate the regression coefficients between the rate of adoption and all variables. Results indicated that a significant relationship exists between the variables and the rate of adoption, with a coefficient of determination R2 = 27.6% (F8,82 = 5.285, p < 0.001). Results are shown in Table 6.
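Written out, the specification behind Table 6 is the standard OLS regression over the eight IDT predictors (the equation itself is not printed in the chapter; the abbreviations follow Table 5, with Vis for visibility and Vol for voluntariness):

\[ \mathrm{RoA} = \beta_0 + \beta_1\,\mathrm{RA} + \beta_2\,\mathrm{EoU} + \beta_3\,\mathrm{I} + \beta_4\,\mathrm{C} + \beta_5\,\mathrm{RD} + \beta_6\,\mathrm{Vis} + \beta_7\,\mathrm{T} + \beta_8\,\mathrm{Vol} + \varepsilon \]

with \(R^2 = 0.276\) and \(F_{8,82} = 5.285\), \(p < 0.001\).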

DISCUSSION OF RESULTS

This exploratory work tried to answer two major questions using multiple studies and methods. The first objective was to explore the extent to which PIS are used and adopted in manufacturing companies in Jordan. Through a descriptive survey, opinions were collected from managers of 74 factories in an industrial zone in Jordan. Results indicated that 66 factories (89%) used at least one system related to production and operations. The most popular systems used in Jordan were accounting information systems (60 factories, 81%), and the least used were computer-aided design systems (23 factories, 31%). Results also indicated that PIS were used in 45 factories (61%), and inventory and warehousing systems were used in 47 factories (63.5%). The results indicated a fair adoption rate for such systems. Part of the reason for this is the influence of partnerships with global firms and international organizations that outsource part of their production within Al-Hasan

Table 4. Descriptive statistics related to constructs in the IDT model

Variable | Number of surveys | Min | Max | Mean | Standard deviation
Rate of adoption | 91 | 1 | 7 | 4.762 | 1.775
Relative advantage | 91 | 2 | 7 | 5.777 | 1.123
Ease of use | 91 | 1 | 7 | 5.409 | 1.334
Image | 91 | 1 | 7 | 4.538 | 1.505
Compatibility | 91 | 2 | 7 | 4.597 | 1.362
Result demonstrability | 91 | 1 | 7 | 4.797 | 1.517
Visibility | 91 | 1 | 7 | 4.637 | 1.540
Trialability | 91 | 1 | 7 | 5.588 | 1.303
Voluntariness | 91 | 1 | 7 | 4.654 | 1.615



Table 5. Correlation matrix showing the IDT variables

 | RoA | RA | EoU | I | C | RD | Vis | T | Vol
Rate of adoption (RoA) | 1 | | | | | | | |
Relative advantage (RA) | .271** | 1 | | | | | | |
Ease of use (EoU) | .030 | .332** | 1 | | | | | |
Image (I) | .386** | .397** | .481** | 1 | | | | |
Compatibility (C) | .351** | .269* | .419** | .637** | 1 | | | |
Result demonstrability (RD) | .440** | .281** | .489** | .579** | .598** | 1 | | |
Visibility (Vis) | .310** | .256* | .367** | .465** | .578** | .495** | 1 | |
Trialability (T) | .289** | .244* | .301** | .292** | .460** | .337** | .635** | 1 |
Voluntariness (Vol) | .207** | .156 | .390** | .274** | .455** | .422** | .526** | .453** | 1

**. Correlation is significant at the 0.01 level (2-tailed). *. Correlation is significant at the 0.05 level (2-tailed).

Industrial Zone. It also seems that accounting and human resources systems were more popular because they support major functions for any firm, regardless of its size of operations. This conclusion might result from another test we did on the size of firms with respect to their sales and number of employees. Use of supply chain management systems (SCMS) was not that popular, as only 8 firms used an extended system (11%); the reason for this is the distinctiveness of the sample, as the sample in the first study came from an industrial zone, where international contracts are more common and a system closed off from the local market is enforced in this free zone. Such a situation implies less need for an extended integrated system (SCMS). Finally, enterprise systems were not that popular, as only 20 factories indicated using such an integrated type of system (27%). When trying to explain the results of the correlations between the total number of systems (a measure of usability in this study) and the number of computers, the number of employees and total sales, it seems obvious that the size of the firm is a

Table 6. Coefficients table for the multiple regression test

Variable | Beta | Std error | Std beta | t | Sig
Constant | 1.334 | 1.026 | | 1.301 | 0.197
Relative advantage | 0.246 | 0.158 | 0.156 | 1.556 | 0.124
Ease of use | -0.506 | 0.148 | -0.380 | -3.411 | 0.001
Image | 0.269 | 0.156 | 0.228 | 1.720 | 0.089
Compatibility | 0.016 | 0.178 | 0.013 | 0.092 | 0.927
Result demonstrability | 0.444 | 0.146 | 0.380 | 3.038 | 0.003
Visibility | -0.010 | 0.156 | -0.009 | -0.067 | 0.947
Trialability | 0.210 | 0.164 | 0.154 | 1.283 | 0.203
Voluntariness | 0.041 | 0.125 | 0.037 | 0.329 | 0.743

Dependent variable: Rate of adoption; method: enter



direct influencer (predictor) of the usability of IT. Larger firms have higher numbers of employees and larger sales, and thus tend to utilize technology to improve operations and gain competitive advantage in the market. It is also logical to conclude that firm size is directly correlated to the complexity of operations, and thus firms adopt IT to better control operations and improve the flow of material and information. Finally, we can conclude that firms with higher sales have a greater tendency to invest in IT, and thus buy more computers and adopt more types of systems. The second objective was to explore managers' intention to adopt such systems. Results indicated a high intention to adopt PIS for two main reasons. The first was the high mean values of the predictors, which indicate managers' favorable perceptions of the adoption process: all means were above 4.5 out of 7, indicating high acceptance rates with respect to all variables used. The second reason is the highly significant bivariate correlations with the rate of adoption. The only variable with a non-significant correlation is ease of use, which points to a limitation of the method used. The highest correlation was between rate of adoption and results demonstrability (0.440**). On the other hand, when entered together, the variables competed for the variance, and only two showed significant prediction of the rate of adoption: results demonstrability (the ability to see tangible results from the system) and ease of use (where the complexity of the system is a huge obstacle to using it). The IDT model explained 27.6% of the variance in the rate of adoption. The results may have some limitations, as the IDT has 8 predictors competing for the variance in the rate of adoption, and this might limit the ability to explain the dependent variable well. The regression method used was to enter all variables forcefully, and this might be the reason behind this surprising result. As this study is an exploratory one, we can conclude that a larger sample size and a thorough conceptual analysis of the predictors will lead to better utilization of variables and more accurate results.

CONCLUSION

This paper aimed at exploring the status of using IT, and specifically PIS, in the area of production and manufacturing in Jordan. The study utilized two samples for two separate studies. The first was a sample of managers, mainly related to IT, in a group of factories in Al-Hasan Industrial Zone and other areas mainly in the northern part of Jordan, to explore the usage of PIS and other types of systems in the industrial area. The second study utilized another sample (after four months and from a different set of factories) from the same area and from the northern part of the country to test the IDT using an instrument translated from Moore and Benbasat's work (1991). Results indicated that systems like accounting information systems and HR and payroll systems were the most used among firms, and distribution and manufacturing aiding systems were the least used among the sample. The role of IS in the production area was highly appreciated, and a major conclusion is that the size of the firm predicts the level of computer usage and the diversity of systems used. The second study yielded high and significant indicators in predicting the adoption rate, and most of the constructs used in the IDT were significantly correlated to the rate of adoption. But when regressing all indicators on the rate of adoption, only two competed for the variance and yielded significant explanation of the variability of the dependent variable: results demonstrability and ease of use. One of the limitations of this research, which makes its generalizability limited, is the usage of two separate samples. To relate the real usage of PIS to the adoption rate, the same sample would


have been used. Still, the inferred results of this work are valid, but researchers are encouraged to use one sample and extend its size to improve statistical generalizability. The second limitation of this study is the instrument used: this study used an instrument translated from the original English one used in Moore and Benbasat (1991), and thus researchers are encouraged to refine the Arabic instrument to improve its language and its content and face validity. Finally, research related to PIS and the factors influencing the adoption of such systems is not very common, which resulted in high competition between variables. The IDT needs a larger sample, or some of the variables should be dropped on conceptual grounds. When exploring systems like ERP or PIS, which are considered complicated and comprehensive systems, one certainly needs to keep relative advantage and ease of use, but further exploration is needed to reduce the scale size and improve the predictability of the model.

FUTURE RESEARCH DIRECTIONS

Research in this area is needed, and this work is considered a first step in validating the instrument and testing factors influencing the rate of adoption. It is highly important to continue such research using longitudinal settings to explore adoption and check the validity of the results. Future research is needed to validate the instrument and apply it to more settings and environments. Another needed direction is the multi-stage process applied by Moore and Benbasat (1991), where the adoption rate is investigated over time and a better conceptual perspective is reached through the reduction of variables. One idea is to compare the predictability of other models with that of the IDT, like the Technology Acceptance Model (TAM), the Theory of Reasoned Action (TRA), the Theory of Planned Behavior

(TPB) and its extension, the Decomposed Theory of Planned Behavior (DTPB). Now that we know the situation of PIS usability in industrial zones, would that knowledge facilitate better research in other environments, like local industrial areas and other major factories in Jordan? Also, would exploring other types of systems lead to different conclusions? Finally, results indicated a weakness in utilizing computer-aided design systems; future research can explore the reasons behind this phenomenon, and whether it is related to the industrial development of the sector in general or to the global partnerships with local factories specifically in the industrial zone.

REFERENCES

Agarwal, R. (2000). Individual acceptance of information technologies. In Zmud, R. (Ed.), Framing the domains of IT management (pp. 85–104). Cincinnati, OH: Pinnaflex Education Resources, Inc.

Agarwal, R., & Prasad, J. (1998). A conceptual and operational definition of personal innovativeness in the domain of information technology. Information Systems Research, 9(2), 204–215. doi:10.1287/isre.9.2.204

Brancheau, J. C., & Wetherbe, J. C. (1990). The adoption of spreadsheet software: Testing innovation diffusion theory in the context of end-user computing. Information Systems Research, 1(2), 115–143. doi:10.1287/isre.1.2.115

Carton, F., & Adam, F. (2008). ERP and functional fit: How integrated systems fail to provide improved control. The Electronic Journal of Information Systems Evaluation, 11(2), 51–60. Retrieved from http://www.ejise.com



Ciurana, J., Garcia-Romeu, M., Ferrer, I., & Casadesus, M. (2008). A model for integrating process planning and production planning and control in machining processes. Robotics and Computer-Integrated Manufacturing, 24, 532–544. doi:10.1016/j.rcim.2007.07.013

Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. Management Information Systems Quarterly, 13(3), 319–340. doi:10.2307/249008

Deloitte Consulting. (1999). ERP's second wave [Report]. Deloitte Consulting. Retrieved from http://www.deloitte.com

DeLone, W., & McLean, E. (1992). Information systems success: The quest for the dependent variable. Information Systems Research, 3(1), 60–95. doi:10.1287/isre.3.1.60

Department of Statistics, Jordan. (2008). Statistics related to the Jordanian industrial sector. Retrieved from http://www.dos.gov.jo/dos_home_a/gpd.htm

Ende, J., Jaspers, F., & Gerwin, D. (2008). Involvement of system firms in development of complementary products: The influence of novelty. Technovation.

Fan, J., & Fang, K. (2006). ERP implementation and information systems success: A test of DeLone and McLean's model. In PICMET 2006 Conference Proceedings, July 2006, Turkey (pp. 9–13).

Fichman, R. G., & Kemerer, C. F. (1999). The illusory diffusion of innovation: An examination of the assimilation gaps. Information Systems Research, 10(3), 255–275. doi:10.1287/isre.10.3.255

Fitzgerald, L., & Kiel, G. (2001). Applying a consumer acceptance of technology model to examine adoption of online purchasing. Retrieved February 2004, from http://130.195.95.71:8081/WWW/ANZMAC2001/anzmac/AUTHORS/pdfs/Fitzgerald1


Gnaim, K. (2005). Innovation is one of quality aspects. Quality in Higher Education, 1(2).

Gupta, S., & Keswani, B. (2008). Exploring the factors that influence user resistance to the implementation of ERP. Hyderabad, India: The ICFAI University Press.

Hardgrave, B. C., Davis, F. D., & Riemenschneider, C. K. (2003). Investigating determinants of software developers' intentions to follow methodologies. Journal of Management Information Systems, 20(1), 123–151.

Hssain, A., Djeraba, C., & Descotes-Genon, B. (1993). Production information systems design. In Proceedings of the International Conference on Industrial Engineering and Production Management (IEPM33), Mons, Belgium, June 1993.

Hsu, C., & Rattner, L. (1990). Information modeling for computerized manufacturing. IEEE Transactions on Systems, 20(4).

Hunton, J., Lippincott, B., & Reck, J. (2003). Enterprise resource planning systems: Comparing firm performance of adopters and nonadopters. Accounting Information Systems, 4, 165–184. doi:10.1016/S1467-0895(03)00008-3

Jordan Industrial Cities. (2008). Statistics from the website of the JIC. Retrieved from http://www.jci.org.jo

Lo, C., Tsai, C., & Li, R. (2005, January). A case study of ERP implementation for the opto-electronics industry. International Journal of the Computer, the Internet and Management, 13(1), 13–30.

Lu, K., & Sy, C. (2008). A real-time decision-making of maintenance using fuzzy agent. Expert Systems with Applications.

McCrea, B. (2008, November). ERP: Gaining momentum. Logistics Management, 44–46.


Microsoft. (2003). Microsoft business solutions. Retrieved from http://www.microsoft.com/business solutions

Mirchandani, D. A., & Motwani, J. (2001). Understanding small business electronic commerce adoption: An empirical analysis. Journal of Computer Information Systems, 41(3), 70–73.

Moore, G., & Benbasat, I. (1991). Development of an instrument to measure the perceptions of adopting an information technology innovation. Information Systems Research, 2(3), 192–222. doi:10.1287/isre.2.3.192

Mourtzis, D., Papakostas, N., Makris, S., Xanthakis, V., & Chryssolouris, G. (2008). Supply chain modeling and control for producing highly customized products. Manufacturing Technology Journal.

Plouffe, C., Hulland, J., & Vandenbosch, M. (2001). Research report: Richness versus parsimony in modeling technology adoption decisions: Understanding merchant adoption of a smart card-based payment system. Information Systems Research, 12(2), 208–222. doi:10.1287/isre.12.2.208.9697

SSA Global. (2006). SSA ERP on SOA platform [Report]. SSA Global and IBM. Retrieved from http://www.ssaglobal.com

Rogers, E. M. (1983). The diffusion of innovations. New York: Free Press.

Rogers, E. M. (1995). The diffusion of innovations (4th ed.). New York: Free Press.

SAP. (2006). SAP customer success story. Retrieved from http://www.sap.com

Singla, A. (2005). Impact of ERP systems on small and mid-sized public sector enterprises. Journal of Theoretical and Applied Information Technology, 119–131.

Smadi, S. (2001). Employees' attitudes towards the implementation of the Japanese model Kaizen for performance improvement and meeting competitive challenges in the third millennium: The Jordanian private industrial sector. Abhath Al-Yarmouk, 313–335.

Smith, F. O. (2008, May). Oracle says it will leapfrog competitors in manufacturing intelligence. Manufacturing Business Technology, 26–29.

Speier, C., & Venkatesh, V. (2002). The hidden minefields in the adoption of sales force automation technologies. Journal of Marketing, 66(3), 98–111. doi:10.1509/jmkg.66.3.98.18510

Theodorou, P., & Giannoula, F. (2008). Manufacturing strategies and financial performance: The effect of advanced information technology: CAD/CAM systems. The International Journal of Management Science, 36, 107–121.

Trari, A. (2008). Yarmouk University Library. Retrieved from http://library.yu.edu.jo/

Tsai, W., & Hung, S. (2008). E-commerce implementation: An empirical study of the performance of enterprise resource planning systems using the organizational learning model. International Journal of Management, 25(2).

Turban, E., Leidner, D., McLean, E., & Wetherbe, J. (2008). Information technology for management (6th ed.). Hoboken, NJ: John Wiley.

Wang, T., & Hu, J. (2008). An inventory control system for products with optional components under service level and budget constraints. European Journal of Operational Research, 189, 41–58. doi:10.1016/j.ejor.2007.05.025

Wang, W., Hsieh, J., Butler, J., & Hsu, S. (2008). Innovative complex information technologies: A theoretical model and empirical examination. Journal of Computer Information Systems, (Fall), 27–36.

This work was previously published in Enterprise Information Systems Design, Implementation and Management: Organizational Applications, edited by Maria Manuela Cruz-Cunha and Joao Varajao, pp. 270-286, copyright 2011 by Information Science Reference (an imprint of IGI Global).


Chapter 54

Research into the Path Evolution of Manufacturing in the Transitional Period in Mainland China

Tao Chen, SanJiang University, China; Nanjing Normal University, China; & Harbin Institute of Technology, China
Li Kang, SanJiang University, China, & Nanjing Normal University, China
Zhengfeng Ma, Nanjing Normal University, China
Zhiming Zhu, Hohai University, China

ABSTRACT

Manufacturing transition is an important part of industrial upgrading. At present, Chinese scholars study the problem of manufacturing chiefly from two perspectives. The first is to discuss the status quo of Chinese manufacturing from the perspective of industrial competitiveness, with countermeasures put forward for manufacturing upgrading. The second is to discuss the upgrading of manufacturing directly from the perspective of the global value chain, with the following proposal put forward: Chinese manufacturing upgrading should stretch from the low end to both ends of the value chain. In addition, discussions have also addressed the role of producer services in promoting manufacturing, and the role of governmental regulations in upgrading manufacturing. Although these two perspectives are rational, they have some defects: both are based on the hypothesis that the institutional environment in which manufacturing lies is stationary, and manufacturing is considered and measured with institutions as exogenous variables, so the impact of the institutional environment on manufacturing upgrading is overlooked. Based on a review of previous literature, this chapter analyzes and discusses the path evolution of manufacturing in the transitional period in mainland China.

DOI: 10.4018/978-1-4666-1945-6.ch054

Copyright © 2013, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.


1 INTRODUCTION

Industry, especially manufacturing, is the foundation and pillar of the national economy. For most developed and developing countries, the leading role and fundamental function of manufacturing cannot be replaced by those of any other industrial sector. Since reform and opening-up began over 30 years ago, and especially since China joined the WTO in 2001, the Chinese economy has been developing very rapidly, along with the continuous rise of its economic aggregate. This is closely linked with the swift development of manufacturing. So to speak, the development of manufacturing supports the "ridge" of the Chinese economy. The same trend has also occurred in other countries: the governments of many Western countries have again brought forward plans for "reindustrialization", i.e., paying attention to the important contribution of manufacturing to economic growth. Therefore, China cannot develop its economy without the development of manufacturing. Instead, we must pay attention to the role and function of manufacturing. At present, Chinese manufacturing is developing very rapidly. In the past ten years, both the volume and value of production of Chinese industry have been growing rapidly. Calculated at constant prices, the annual average growth of the total production value of Chinese manufacturing from 1995 to 2003 was 14.53%. By 2003, the total production value of manufacturing had reached about 12.27 trillion yuan. According to the calculations of the UN Statistics Division and the Industrial Development Organization, the annual average growth rate of Chinese manufacturing from 1998 to 2003 reached as high as 9.4%, while the same figure for developing countries in the same period was only 4.4%. The total export volume of Chinese manufactured goods divided by the number of employees in manufacturing rose from 1763.92 US dollars in 1995 to 9570.09 US dollars in 2004; the annual average growth rate from 1998 to 2001 was 18.30%. Since China joined the WTO, the growth rate of exports has been increasing even more rapidly; the annual average growth rate from 2002 to 2004 reached 22.50% (Jin, et al., 2007). In 2008, the number of manufacturing enterprises in China was 396,950, with an added value of 44,135.836 billion yuan, total assets of 32,340.308 billion yuan, and 77,315.7 thousand employees. The number of manufacturing enterprises, total production value, total assets and number of employees in 2008 grew by 174.90%, 497.13%, 243.69% and 67.37% respectively over 2000. Among them, the total production value had the biggest growth rate: nearly 500% (Lin, 2010). However, behind the rapid development of Chinese manufacturing lies a series of problems yet to be solved in an effective way. First, the per capita added value of Chinese manufacturing is far lower than the world average. Calculated at the constant prices of 2000, the per capita added value of Chinese manufacturing in 2006 was 610 US dollars, lower than that of the developing regions of West Asia and Europe, and of Latin America and the Caribbean, and merely equivalent to 13.5% of that of industrialized countries (see Table 1) (Li, et al., 2009). Second, the regional distribution and industrial structure of Chinese manufacturing are seriously imbalanced. From the perspective of regional distribution, a huge gap exists in the distribution of manufacturing among eastern, central and western China. The total production value, added value, total assets and number of employees of manufacturing in eastern China account for 73.68%, 68.85%, 70.14% and 72.50% respectively; the same figures in central China are 16.23%, 18.86%, 16.91% and 16.75%; and in western China, 10.10%, 12.29%, 12.95% and 10.75%. From the perspective of industrial structure, the distribution of manufacturing is also imbalanced: nearly 66% of the added value of manufacturing in 2008 was concentrated in ten industrial sectors, including ferrous metal smelting and communication equipment and computers (Lin, 2010).



Table 1. International comparison of the per capita added value of manufacturing (constant prices of 2000, US dollars)

Country group | 1991 | 1994 | 1995 | 1998 | 2000 | 2006
Industrialized countries other than CIS | 3573 | 3614 | 3730 | 3996 | 4291 | 4509
African countries south of the Sahara | 29 | 26 | 26 | 27 | 27 | 30
East Asia and South Asia | 128 | 151 | 164 | 166 | 199 | 267
Latin America and the Caribbean | 664 | 694 | 680 | 736 | 739 | 792
West Asia and Europe | 431 | 448 | 460 | 518 | 535 | 664
CIS countries | 422 | 227 | 216 | 198 | 239 | 369
North Africa | 164 | 162 | 167 | 183 | 195 | 207
China | 167 | 233 | 254 | 313 | 366 | 610

Note: The data for China include those of Taiwan Province and the Hong Kong Special Administrative Region, but exclude those of the Macao Special Administrative Region. Data source: UN Industrial Development Organization (UNIDO)

This imbalance will lead to a huge waste of human, material and financial resources in eastern, central and western China, especially in central and western China, and to the ineffective utilization of resources, thus setting back industrial development and economic progress. Third, the added value of Chinese manufacturing makes up a relatively high proportion of China's GDP. From 2003 to 2007, the proportion of the added value of Chinese manufacturing in China's GDP did not change much, always remaining between 34% and 40%. This proportion was not only far higher than that of such developed countries as the USA, Japan, Germany, the UK and France, but also far higher than that of such developing countries as Brazil, India and Mexico. Although some research indicates that China is now in the mature period of new industrialization (Li Gang et al., 2009), this relatively high index not only reflects the characteristics of the industrial structure of China in the middle stage of industrialization, but also indicates that a problem of disproportion possibly exists to some extent in the Chinese economic structure (Jin, et al., 2007). We should not only develop our economy, but also upgrade our manufacturing. We can solve the


problems now existing in Chinese manufacturing only by applying a series of practical measures to transform manufacturing from the low end to the high end, and from resource wastage and environmental pollution to resource conservation and environmental protection. Therefore, the sustainable, rapid and healthy development of the Chinese economy in the future will be closely linked with the upgrading of manufacturing.

2 THE INTERNATIONAL COMPETITIVENESS OF CHINESE MANUFACTURING

Of all the fundamental theories of industrial international competitiveness put forward so far, the most influential are the theory of comparative advantage (Ricardo, 1817) and the theory of competitive advantage (Porter, 1990; Liu, et al., 2006). In Paul Krugman's International Economics (the most extensively distributed and authoritative textbook in this field in the world), "comparative advantage" is defined as follows: "If the opportunity cost for producing a certain product in a country is lower than in other countries, then


this country has a relative advantage in producing this product." Therefore, the fundamental principle concerning comparative advantage and international trade is: "If a country exports commodities in which it has comparative advantage to another country, then both of these two countries can benefit from the trade between them." In contrast, according to the theory of the competitive advantage of nations put forward by Professor Porter, the competition among countries in a market economy in fact goes on among enterprises, and enterprises with international competitive advantage are concentrated in only a limited number of industrial sectors. Therefore, industrial sectors should be used as the basic units for studying national competitive advantage. The competitive advantage of enterprises not only comes from themselves, but also originates from the microeconomic foundations on which their development relies: a diamond system consisting of factor conditions, demand conditions, the competitive context of corporate strategy and structure, and relevant supporting industrial sectors. Therefore, attention to national competitive advantage should be focused on the cultivation of these microeconomic foundations (Liu, et al., 2006). As regards the index-based appraisal of industrial competitiveness, the industrial competitiveness of a country can be appraised by using

more than one index. As indicated by relevant research, we can derive somewhat different judgments if we appraise the competitiveness of Chinese manufacturing and its sub-sectors using different indexes. Some indexes reveal that the international competitiveness of a certain industrial sector of China is rising, while other indexes reveal that the international competitiveness of the same sector is declining. As a matter of fact, this phenomenon often occurs when we observe a complicated thing from different perspectives, or when the same complicated thing manifests itself in different ways in different aspects. Jin, et al. (2007) combine many indexes into one to express the trend of the change of China's industrial competitiveness in a comprehensive way. Through this research, they worked out the composite index of the international competitiveness of Chinese manufacturing (see Table 2). By comparing Chinese manufacturing with American and Japanese manufacturing, they formed the following opinion: the competitiveness of Chinese manufacturing has been rising continuously. From the perspective of its sub-sectors, the competitiveness of Chinese manufacturing in "other products" in the table is the strongest. Office products and telecommunication equipment, whose competitiveness has exceeded

Table 2. The composite index of the international competitiveness of Chinese manufacturing

Sector | Index of comparative advantage | Index of competitive advantage | Composite index of international competitiveness
Manufacturing | 102.8 | 132.1 | 117.5
1. Iron and steel | 210.0 | 172.5 | 191.3
2. Chemical finished products and relevant products | 88.2 | 115.7 | 102.0
3. Other semi-finished products | 98.1 | 130.5 | 114.3
4. Mechanical and transportation equipment | 119.8 | 164.1 | 142.0
   Including: Office products and electronic products | 134.9 | 190.9 | 162.9
5. Textiles | 93.5 | 132.8 | 113.2
6. Apparel | 83.2 | 113.5 | 98.4
7. Other products | 89.2 | 108.1 | 98.7

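The source does not state how the composite index is constructed, but the values in Table 2 are consistent with a simple arithmetic mean of the two component indexes, rounded half up to one decimal place (e.g., (102.8 + 132.1) / 2 = 117.45, reported as 117.5). The minimal sketch below recomputes a few rows under that assumption; the aggregation rule is an inference, not a formula given by Jin, et al. (2007).

```python
# Sketch: recomputing the composite column of Table 2 under the assumption
# that it is the arithmetic mean of the comparative-advantage and
# competitive-advantage indexes, rounded half up to one decimal place.
from decimal import Decimal, ROUND_HALF_UP

def composite(comparative: str, competitive: str) -> Decimal:
    mean = (Decimal(comparative) + Decimal(competitive)) / 2
    return mean.quantize(Decimal("0.1"), rounding=ROUND_HALF_UP)

rows = {
    "Manufacturing": ("102.8", "132.1"),                   # Table 2 gives 117.5
    "Iron and steel": ("210.0", "172.5"),                  # Table 2 gives 191.3
    "Office and electronic products": ("134.9", "190.9"),  # Table 2 gives 162.9
}
for sector, (ca, cp) in rows.items():
    print(f"{sector}: {composite(ca, cp)}")
```

Each printed value matches the composite column of Table 2, which supports the arithmetic-mean reading.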

In their view, from the perspective of sub-sectors, the competitiveness of Chinese manufacturing in "other products" in Table 2 is the strongest; office products and telecommunication equipment, whose competitiveness has exceeded that of textiles, are called "the second most competitive industrial sector of manufacturing". Transportation equipment (especially automobiles), chemical products, integrated circuits and components, and power machinery and equipment have the weakest competitiveness in China, and their competitiveness is still shrinking. Compared with the EU, China's most obvious disadvantages are in chemical products (especially medicines), transportation equipment (especially automobiles) and power machinery and equipment. In contrast, office and telecommunication equipment, other products (especially personal and household goods), apparel and textiles enjoy a relatively strong competitive position. They ultimately drew the following conclusion: the present elevation of the international competitiveness of Chinese industrial enterprises is, to a very large extent, not determined by the industrial enterprises themselves; instead, it depends on the development of Chinese finance and mass media. In our opinion, there are two important conditions for guaranteeing the enhancement of the international competitiveness of Chinese manufacturing: first, Chinese manufacturing enterprises must secure financial support from Chinese financial enterprises all over the world; second, China must have media (including newspapers, TV and radio) that are influential among mainstream audiences in foreign countries (especially European countries and the USA). Therefore, while assisting Chinese industrial enterprises in "going abroad", the Chinese government must consider how to assist Chinese financial institutions (especially banks) and media (especially newspapers) in "going abroad". In addition, Chen et al. (2009) studied the international competitiveness of Chinese and American manufacturing through empirical analysis on the basis of a hierarchy-based view of industrial competitiveness. In their opinion, industrial competitiveness consists of four hierarchies.


From bottom to top, these are: the source of competitiveness (the industrial environment); the essence of competitiveness (productivity); the performance of competitiveness (market share); and the result of competitiveness (industrial profitability). These four hierarchies are interlinked in a logical cycle. The ultimate goal of industrial competitiveness is to generate profit; to generate profit, a country must first demonstrate stronger industrial competitiveness than other countries in trade; the foundation for enhancing trade competitiveness is to enhance the productivity of the manufacturing sector; and to enhance productivity, investment is needed in the construction of soft and hard environments, including technical innovation, advanced equipment, and education and training. In turn, the capital for investment in these environments relies on industrial profit. Only by investing profits (the highest hierarchy of competitiveness) in environments (the first hierarchy) can a new round of the productivity and market-share cycle begin, and continuously reinforced market competitiveness be acquired. On this basis, they analyzed the international competitiveness of 30 types of manufacturing in China and reached a mixed conclusion. Measuring industrial competitiveness with the index of profitability and the index of productivity yields a very high goodness of fit, which indicates that industries with higher productivity also have higher profitability in the Chinese domestic market. However, the ranking based on the indexes of profitability and productivity is quite different from the ranking based on the index of market share: the Gamma coefficient is negative, which indicates that industrial sectors with higher profitability and higher productivity do not necessarily have a bigger global market share, and vice versa. This shows that the first hierarchy and explanatory variable of industrial competitiveness, the factor of industrial environments, exerts a very great influence upon the status of competitiveness; industrial structure, industrial policy, trade policy, the institutional environment, and so on can bring about differences among the three competitiveness indexes.


Compared with Chinese manufacturing, in American manufacturing not only are the indexes of the profitability hierarchy and the productivity hierarchy interrelated, but the indexes of the market-share hierarchy and the productivity hierarchy are also somewhat interrelated; that is, the rankings produced by these three competitiveness indexes do not differ greatly. This indicates that American manufacturing has a more mature development environment than Chinese manufacturing, so production efficiency can be smoothly transformed into market share and profit in the USA. From this we can see the importance of environments, namely, the important influence of factors such as institutions upon the upgrading of manufacturing.
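The Gamma coefficient mentioned above is the Goodman-Kruskal rank statistic: it compares every pair of sectors and contrasts concordant pairs (both rankings order the pair the same way) with discordant ones. The sketch below is a minimal illustration with invented ranks; it does not use the data of Chen et al. (2009).

```python
# Goodman-Kruskal gamma: (concordant - discordant) / (concordant + discordant).
# A negative value means the two rankings tend to order sectors oppositely.
def gk_gamma(x, y):
    concordant = discordant = 0
    for i in range(len(x)):
        for j in range(i + 1, len(x)):
            s = (x[i] - x[j]) * (y[i] - y[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1   # ties (s == 0) count toward neither
    return (concordant - discordant) / (concordant + discordant)

# Invented ranks for five sectors: productivity/profitability rank versus
# global market-share rank.
print(gk_gamma([1, 2, 3, 4, 5], [4, 5, 3, 1, 2]))   # -0.6: the rankings disagree
```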

3 THE UPGRADING OF MANUFACTURING FROM THE PERSPECTIVE OF GLOBAL VALUE CHAIN

Discussing the upgrading of manufacturing from the perspective of the global value chain is the mainstream of contemporary research into industrial upgrading. As the internationalization of the contemporary economy intensifies, production, trade, investment, turnover and means of organization have acquired features different from those of the previous tide of globalization. As is well known, the division of labor is the source of economic growth, while industrial transfer is an important means of realizing the spatial division of labor. As the international division of labor has intensified from between industries, to between different products within each industrial sector, and then to the different working procedures of each product, international industrial transfer has likewise evolved from spatial transition between different industries, to transition between different products, and then to transition between the different working procedures of each product.

The intensification of the division of labor and industrial transfer are the key features distinguishing the present globalization from previous rounds. At the same time, the global value chain is the leading force boosting the intensification of the division of labor and coordinating industrial transfer. This force has not only changed the microscopic foundation of globalization, but has also had a revolutionary effect upon competition models and development strategies. As regards the theory of the global value chain, Porter (1985) put forward the theory of the value chain from the perspective of enterprise competition strategy. This theory is a framework for analyzing enterprise activities under conditions of international competition. Its core claim is that the value created by enterprises in fact originates from specific value activities on a relevant value chain, and that grasping the "strategic link" is a key factor in controlling the entire value chain and the relevant industrial sector. In the 1990s, the theoretical framework of the global commodity chain was put forward (Gereffi, Korzeniewicz, 1994, 1999). This theory directly related the value-added chain to global industrial organizations, and on that basis made a comparative study of commodity chains driven by producers and by purchasers. To eliminate the limitations of the word "commodity", and to highlight the importance of the creation of the relative value of enterprises and the acquisition of value on the chain, Gereffi and numerous researchers in this field agreed at the beginning of the 21st century to replace "global commodity chain" with the term "Global Value Chain (GVC)". The classification of global value chains is basically based on the dichotomy under the global commodity chain framework, namely the producer-driven and purchaser-driven value chains put forward by Gereffi. The producer-driven value chain means boosting market demand through producers' investment, thus forming a system of vertical division of labor in the local production supply chain.


Under this value chain, investors include not only transnational companies blessed with technical advantages and seeking market expansion, but also national governments trying hard to boost local economic development and set up an independent industrial system. The purchaser-driven value chain means that large purchasing organizations with strong brand advantages or sales channels coordinate and control the production, design and marketing activities aimed at the target market. This value chain is characterized by labor intensiveness and is typified by consumer goods (e.g. apparel, shoes, toys, consumer electronics, etc.). Another important element of global value chain theory concerns the governance model of the value chain. So far, no uniform conclusion has been reached in academic circles on the classification of the governance models of the global value chain. Humphrey and Schmitz (2002) distinguished the following four governance models on the basis of differences in organizational coordination and power distribution: the market-based model; the model based on an equilibrium network; the capture-based model; and the hierarchy-based model. Enterprises in developing countries generally enter the assembly and manufacturing link of a capture-based GVC as subcontractors, relying on a cheap labor force. The high competition and low income brought by low entry barriers place subcontracting enterprises under huge pressure to upgrade. Through an empirical analysis of the world textile and apparel industry, Gereffi (1999) worked out the sequential upgrading model under GVC, namely technical upgrading—product upgrading—functional upgrading—chain upgrading, and optimistically held that subcontracting enterprises in developing countries can smoothly realize this kind of sequential upgrading by joining a GVC and accepting support from leading enterprises in developed countries in such aspects as technical diffusion, employee training and equipment introduction.


With this sequential upgrading, the performance of subcontracting enterprises, namely the quantity of value created and acquired by them, also increases gradually. Gereffi's analysis has two problems. First, as indicated by much of the practice of developing countries, the above-mentioned model of sequential upgrading cannot be realized automatically (Humphrey, Schmitz, 2002). Moreover, upgrading changes the balance of power and the structure of income distribution in the GVC; the upgrading of subcontracting enterprises is therefore subject to suppression by leading enterprises, and the size of the upgrading barrier depends on the governance model of the GVC. Second, upgrading does not necessarily mean an enhancement of the performance of subcontracting enterprises. In a capture-based GVC, the upgrading of subcontracting enterprises is a kind of passive upgrading that serves the global strategies of leading enterprises. By continuously searching for and supporting new subcontractors and intensifying competition, leading enterprises capture the newly added value created by the upgrading of subcontracting enterprises. In contrast, in market-based and network-based governance with more equal power, enterprise upgrading is a kind of active upgrading, adapting to competition and seeking profit, and enterprises can accordingly acquire the benefits brought by upgrading. Drawing on the research of Liu (2007), and in connection with the practice of developing countries, Zhuo (2009) worked out four matching models of governance, upgrading and enterprise performance. (1) Market-based governance—(independent and slow sequential upgrading)—(slow enhancement of performance). Under this governance model, the transaction objects are mature standardized products, and the division of labor and transactions among enterprises are based on market contracts conducted "at arm's length"; enterprise upgrading is therefore a kind of endogenous and independent upgrading based on competency. This upgrading is free from the control and hindrance of other enterprises, but it is relatively slow, so the enhancement of enterprise performance is also gradual.


(2) Governance based on an equilibrium network—(independent and fast sequential upgrading)—(fast enhancement of performance). Governance based on an equilibrium network is a method of coordinating the division of labor and transactions based on complementary competencies, the sharing of knowledge and technology, and relative equality; it does not involve a relationship of controlling and being controlled, so enterprises can independently carry out upgrading in various forms. In this kind of network, the partial innovation required by a high degree of division of labor greatly reduces the investment required for upgrading and mitigates investment risks, while the sharing of competency, technology and knowledge accelerates the realization of upgrading and the enhancement of performance. (3) Capture-based governance—(rapid but passive technical and product upgrading)—(performance is difficult to enhance, or even declines). In a capture-based GVC, in order to guarantee the diversity of products, the timeliness of supply and the reliability of product quality, the transnational companies of developed countries have to assist subcontracting enterprises in accelerating technical and product upgrading. At the same time, by such means as patent pools, strategic isolation, brand reinforcement and retail market mergers, they endeavor to raise the entry barriers for high value-added links, including design, R&D and marketing, and to slow down the functional upgrading and chain upgrading of subcontracting enterprises, so as to prevent their own core competency and income from being eroded. Under such circumstances, it is generally difficult for subcontracting enterprises to realize sequential upgrading, and the income created by rapid technical and product upgrading is also captured by the transnational companies as a result of intensified competition in the assembly and manufacturing link. (4) Hierarchy-based governance—(rapid but passive technical upgrading, slow product upgrading)—(performance is difficult to enhance greatly).

The hierarchy-based governance model is a method of production organization and coordination under which, in order to reduce costs and occupy markets, the transnational companies of developed countries establish joint ventures through FDI in other countries, and control and operate the enterprises by relying on such core competencies as ownership and R&D design. In a hierarchy-based GVC, as joint ventures can directly obtain the technology, brands, capital and equipment of the transnational companies, they can realize technical upgrading rapidly and make their products meet the uniform global quality standards of the transnational companies. Under this model, product upgrading is relatively slow because it depends on the market development and competitive status of the countries receiving the investment; moreover, functional upgrading and chain upgrading are strictly controlled by the transnational companies. It is also difficult to greatly enhance the performance of the joint ventures, because the transnational companies try to squeeze their profit margins by charging high fees for technology transfer, key parts and components, and brand licensing. Taking the Yangtze River Delta as their research subject, Liu, et al. (2009) analyzed the disadvantages of Chinese manufacturing upgrading under GVC. In their opinion, merging into GVC has weakened the relationships among industrial departments in different regions of China and exerted an unfavorable influence upon the integrated development of the regional real economy. The technology transfer and technology spillover embedded in outsourcing and subcontracting activities have an obvious "breakpoint" and "isolation" effect upon the industrial development of developing countries. The imbalance and asymmetry of GVC income distribution make capital accumulation difficult. In the "captured" value chain,


owing to changes in the global competition environment, the continuous entry of new suppliers and the competition effect of cheap commodities, the foundations of some of the previously obvious advantages that could be acquired by relying on large international purchasers have already been seriously eroded, and have been replaced by increasingly obvious defects and shortcomings, which constitutes a serious threat to the industrial upgrading process of developing countries. Under GVC, enterprises of developing countries acting as international subcontractors will, while building their global brands and independently establishing sales terminal channels at home and abroad, meet with the "positional block" of transnational companies that control core technical patents and product standard systems, and of large international purchasers that control the sales terminal channels and brands of the international demand market. Therefore, through so-called "learning by exporting", at most the skills of product and technical upgrading can be learned; the real core technology and the skills of functional upgrading cannot be learned at all. For this reason, they held that the export-oriented development strategy based on GVC made a huge contribution to economic takeoff during the early stage of the economic development of the Yangtze River Delta, but that, in order to make the Yangtze River Delta really become a base of advanced manufacturing and to move from manufacturing to creation, this development strategy alone cannot be relied on. On the basis of attaching equal importance to international and domestic markets, efforts should be made to integrate the industrial linkages and circulation system on which Chinese enterprises rely for survival and development, to mold the governance structure of the domestic value chain, and to adjust the relationship structure among Chinese industries located in different regions, so as to lay a solid development platform for the manufacturing upgrading of the Yangtze River Delta and the integrated development of the regional economy.


4 THE UPGRADING OF MANUFACTURING FROM OTHER PERSPECTIVES

Promoting the upgrading of manufacturing by developing producer services is an indirect upgrading method. Modern producer services are an industrial sector offering direct services to productive or commercial activities as intermediaries, including finance, insurance, accounting, R&D design, law, technical and management consultancy, transportation, telecommunication, modern logistics, advertising, marketing, branding, personnel, administration and property management. Markedly characterized by high knowledge intensity, intelligence, growth, employment and influence, the sector is derived from the matrix of manufacturing and therefore has a natural and intrinsic industrial linkage and interaction with manufacturing. As indicated by the research of Park and Chan (1989), the development of manufacturing can bring about that of producer services, which can, in turn, promote the upgrading of manufacturing: there is an obvious positive correlation between them. Arguing from the opposite direction, Farrell and Hitchens (1990) pointed out that a lack of producer services, or inadequately priced and uncompetitive producer services, in a region will set back the efficiency, competitiveness and operation of local manufacturing, thus damaging the development process of the region. Other scholars, including Zhi (2001) and Zhou (2003), analyzed this from the perspective of industrial amalgamation. As they pointed out, with the continuous advance of the information technology revolution, the traditional boundary between services and manufacturing becomes ever vaguer, and the two tend to develop in an interactive and amalgamated way. Liu (2008) also held that the development of producer services can reduce the installation costs of manufacturing enterprises, thus assisting enterprises in forming their core competitiveness and in building interactive industrial linkages.


5 CONCLUSION

There is a large research literature on the upgrading of manufacturing. All the research into the upgrading of manufacturing, from such perspectives as industrial competitiveness, the global value chain and the "indirect method of upgrading", has its own theoretical basis and practical significance, and each strand has made its due contribution. However, the existing literature neglects to study the upgrading of manufacturing from the perspective of entire national systems, a macroscopic perspective that is difficult to grasp. From the emergence of institutional economics to the gradual refinement of new institutional economics, we should abandon treating institutions as hypothetical exogenous variables, turn these exogenous variables into endogenous ones, and study practical problems from this perspective. Useful insights can be drawn both from transaction costs and from institutional history, and both from the theory of the state in new institutional economics and from its theory of the enterprise, so that the upgrading of manufacturing can be studied and practiced in a better way. Economic development cannot do without institutions, because economic development is promoted by people, and people promote economic development by devising institutions. Economic development is thus an objective fact promoted subjectively by people, and it is very important to study this subjective dimension and what lies behind it. For example, in Chinese industrial upgrading, government promotion plays a very great role; however, in the process of decision making, the government does not always have a thorough understanding of industrial upgrading. A very big misunderstanding has arisen in present industrial upgrading:

many believe that industrial upgrading can be realized through a transition from the low-level segment of one industrial sector to the low-level segment of another. As a result, processing sites are constantly provided to the large transnational companies of developed countries under economic globalization. Many people understand this simple transfer from the low end of one industrial sector to the low end of another as industrial upgrading, which is in fact quite unreasonable. Therefore, how to regulate this kind of understanding and the behavior of government from the perspective of institutions, and how to measure the opportunity cost of this kind of upgrading from the perspective of transaction costs, will be an indispensable part of our future research.

ACKNOWLEDGMENT

I would like to express my sincere gratitude for the financial support of the Natural Science Foundation of China (grant numbers 70971031 and 71031003), Sanjiang University (project number K08010), and the innovation project for the graduate students of the Business School of Nanjing Normal University (project number 10CX_003G).

REFERENCES

Chen, L. M., Wang, X., & Rao, S. Y. (2009). The comparison between the international competitiveness of Chinese and American manufacturing: Empirical analysis based on the hierarchy opinion of industrial competitiveness. China Industrial Economy, 6, 57–66.

Farrell, P. N., & Hitchens, D. M. W. N. (1990). Producer services and regional development: A review of some major conceptual policy and research issues. Environment & Planning A, 22, 1141–1154. doi:10.1068/a221141


Gereffi, G. (1999). International trade and industrial upgrading in the apparel commodity chain. Journal of International Economics, 48(1), 37–70. doi:10.1016/S0022-1996(98)00075-0

Gereffi, G., Humphrey, J., & Sturgeon, T. (2005). The governance of global value chains. Review of International Political Economy, 12(1), 78–104. doi:10.1080/09692290500049805

Humphrey, J., & Schmitz, H. (2002). How does insertion in global value chains affect upgrading in industrial clusters? Regional Studies, 36(1), 1017–1027. doi:10.1080/0034340022000022198

Jin, B., Li, G., & Chen, Z. (2007). The status-quo analysis and enhancement countermeasures for the international competitiveness of Chinese manufacturing. Finance & Trade Economics, 3, 3–10.

Li, G., Jin, B., & Dong, M. J. (2009). Basic judgment over the development status quo of Chinese manufacturing. Review of Economic Research, 41, 46–49.

Lin, Y. L. (2010). The status quo of Chinese manufacturing and research into its comparison with that of foreign countries. The Journal of North China Electric Power University, 3, 32–37.

Liu, L. Q., & Tan, L. W. (2006). Two-dimensional appraisal of industrial international competitiveness: The thoughts against the background of global value chain. China Industrial Economy, 12, 37–44.

Liu, Z. B., & Yu, M. C. (2009). Go from GVC to NVC: The integration and industrial upgrading of the Yangtze River Delta. Academic Learning, 5, 59–67.

Liu, Z. B., & Zhang, J. (2007). Forming, breakthrough and strategies of captive network in developing countries at global outsourcing system: Based on a comparative survey of GVC and NVC. China Industrial Economy, 5, 39–47.

Liu, Z. B., & Zheng, J. H. (2008). Driving the Yangtze River Delta with service industry (pp. 56–59). Beijing, China: The Press of Renmin University of China.

Park, S. H., & Chan, K. S. (1989). A cross-country input-output analysis of intersectoral relationships between manufacturing and services and their employment implications. World Development, 17(2), 199–212. doi:10.1016/0305-750X(89)90245-3

Porter, M. E. (2002). National competitive advantages. Beijing, China: Huaxia Press.

Zhi, C. Y. (2001). The industrial amalgamation of information telecommunication industry. China Industrial Economy, 2, 24–27.

Zhou, Z. H. (2003). Industrial amalgamation: The new power of industrial development and economic growth. China Industrial Economy, 4, 46–52.

Zhuo, Y. (2009). The governance of global value chain: Upgrading and the performance of local enterprises. The questionnaire survey and empirical analysis based on Chinese manufacturing enterprises. Finance & Trade Economics, 8, 93–98.

This work was previously published in Comparing High Technology Firms in Developed and Developing Countries: Cluster Growth Initiatives, edited by Tomas Gabriel Bas and Jingyuan Zhao, pp. 134-144, copyright 2012 by Information Science Reference (an imprint of IGI Global).


Chapter 55

UB1-HIT Dual Master’s Programme:

A Double Complementary International Collaboration Approach

David Chen, IMS-University of Bordeaux 1, France

Jean-Paul Bourrières, IMS-University of Bordeaux 1, France

Bruno Vallespir, IMS-University of Bordeaux 1, France

Thècle Alix, IMS-University of Bordeaux 1, France

ABSTRACT

This chapter presents a double complementary international collaboration approach between the University of Bordeaux 1 (UB1) and the Harbin Institute of Technology (HIT). Within this framework, the higher education collaboration (a dual Master's degree programme) is supported by a research collaboration that has existed for more than 15 years. Furthermore, this collaboration is based on the complementary competencies of the two sides: production system engineering (UB1) and software system engineering (HIT). After a brief introduction on the background and an overview, the complementarities between UB1 and HIT are assessed. Then a formal model of the curriculum of the dual UB1-HIT Master's programme is shown in detail. A unified case study on manufacturing resource planning (MRPII) learning is presented. Preliminary results of the Master's programme are discussed on the basis of an investigation carried out on the first two cohorts of students.

BACKGROUND AND OVERVIEW

Research relationships between the University of Bordeaux 1 (UB1, France) and Harbin Institute of Technology (HIT, China) have existed for several years,

and both parties have established strong and long-term relationships with their industries over some 30 years. In the research domain of computer integrated manufacturing and production system engineering and integration, the cooperation between the University of Bordeaux 1


(IMS-LAPS: Laboratory for the Integration of Materials into Systems, Automation and Production Science Department) and China started in 1993. Several Europe-China projects coordinated by UB1 were carried out in this domain (1993-1995; 1996-1997; 1998-2002), involving more than seven major Chinese universities, including Tsinghua University, Xi'an Jiaotong University, Harbin Institute of Technology, Huazhong University of Sciences and Technologies, and others. More recently, the cooperation between the University of Bordeaux 1 and Harbin Institute of Technology has been strengthened to develop enterprise interoperability research activities in the Interop Network of Excellence (2004-2007) programme under the auspices of the European 6th Framework Programme for Research & Development (FP6) (European Commission, 2003b). There is also a long and strong cooperation between UB1 and HIT in research on other topics, including enterprise system modelling, engineering and integration. However, co-operation in higher education was not so well developed in the past, so it was logical to extend the existing co-operation from the research base to incorporate higher education. Therefore, in September 2006, UB1 and HIT launched a dual master's degree programme on enterprise software and production systems. This programme relies on the know-how of HIT in computer sciences and enterprise software applications, and of UB1 in enterprise modelling, integration and interoperability research. This joint international programme aims to train future system architects of production systems, with the ability to model, analyze, design and implement solutions covering organization, management, and computer science in order to improve the performance of both manufacturing and service enterprises. It also aims to develop the capability of students to work and grow in an international working environment, particularly in China or France, but also in most other countries where the themes covered by the programme are now, and will continue to be, vital.


The programme is organized over two years. The first year's courses are given at HIT and are concerned with industrially oriented computer science. The second year's courses are given at UB1 and are dedicated to production management and engineering. The first two cohorts of the master's programme successfully completed their studies and their industry internships in China and France, and obtained the Master's Degree of the University of Bordeaux 1 and the Master's Degree of Harbin Institute of Technology in September 2008 and 2009. Table 1 gives an overview of the organization of the two-year programme. All courses are presented in English, including examinations and the internship defence. One characteristic is that the industry internship can be carried out in China, in France, or in any third country in the world.

Table 1. Organisation of the dual master's programme

Year 1

Teaching/training | Semester | Location
Project | First | Harbin or Bordeaux
Internship | First | World
Courses | Second | Harbin

Detail:
• Project (135h / 9 ECTS - European Credits Transfer System)
• Training in enterprise (305h / 21 ECTS)
• Algorithm and System Design and Analysis (90h / 6 ECTS)
• Database Design and Application (analysis and design) (94h / 6 ECTS)
• Software Architecture and Quality (93h / 6 ECTS)
• Project Management and Software Development (92h / 6 ECTS)
• Object-Oriented Technology and UML (86h / 6 ECTS)

Year 2

Teaching/training | Semester | Location
Courses | Third | Bordeaux
Training in company | Fourth | World

Detail:
• Modelling of industrial systems (135h / 9 ECTS)
• Production management (135h / 9 ECTS)
• Industry performance measurement (45h / 3 ECTS)
• Industry systems integration (90h / 6 ECTS)
• Option (45h / 3 ECTS)
• Training in enterprise (450h / 30 ECTS)


The internship placements are mainly in companies, large as well as small and medium-sized enterprises (SMEs), which have industrial co-operation projects with China, although not exclusively. Besides IT-oriented work, the internships are situated in the manufacturing sector as well as in services, typically as a person in charge of industrial management (production, quality, and maintenance), a person in charge of the design, development and implementation of software applications, a consultant, or a project leader.

COMPLEMENTARITIES UNDERPINNING THE COLLABORATION

Software Engineering and Production System Engineering

As mentioned above, this collaboration is based on the complementary strengths of UB1 (production system engineering) and HIT (software system engineering). Considering an enterprise either from the general point of view, as a system providing goods and services, or from the narrower point of view of its information system, it is clear that both of these approaches relate to the fundamental philosophy of engineering. In both cases the purpose is to design an overall architecture for the system, consistent with and relevant to a predefined mission. Models and simulations have a central role in both approaches. Production system engineers view the enterprise as a system having a purpose related to a strategy. Within this purpose and strategy, performances are defined that enable the evaluation of how well the enterprise runs. The necessity for communication and cooperation between sections within a company, or between companies within a network, has led to the important concept of integration.

Today, the numerous forms of co-operation, and the versatility they require, bring into prominence the concept of interoperability, which can be broadly understood as a loose form of integration. Because of the complexity of the enterprise, it is always considered in relation to a reference (a conceptual model or reference architecture). With respect to this reference, the engineering methodologies used are supported by modelling languages and frameworks (enterprise modelling), whose role is to enable an understanding of the structure and behavior of the enterprise. The existing diversity of languages and software supports leads to the need to analyze them in detail, in order to compare them and potentially use them together. In this perspective, a purely syntactical approach is not enough, and therefore current scientific developments in this field relate to semantics and deal with meta-models and ontology. Furthermore, the consideration of the human being as a component of the enterprise must always be remembered. For this reason, the relation of the models to decision-making (of design and/or of management) is an important issue, whatever the approach used. From a software engineering point of view, the need for integration can be met through the provision and implementation of software tools, mainly enterprise resource planning (ERP) tools. This domain then focuses on IT solution analysis, implementation projects, IT solution performance analysis, and the identification of the usability domain and the limitations of classical methods. The ways in which the functions of the information system are integrated using such IT tools are well understood today. Organizational challenges are also quite well known. The main outstanding issues relate to supporting the processes of the enterprise by consistently integrating several IT solutions whose functionalities generally cover more than is required. In this context, the capability to match the models of the enterprise (the requirements) with the models emerging from the IT solutions (the so-called space of solutions) becomes crucial.


Finally, a continuing core problem is ensuring a permanent alignment of the information system, and its various implemented IT solutions, with the strategy of the company. Because the economic environment is dynamic, this leads, of necessity, to a policy of continuous engineering. In summary, the two domains both relate to the design, integration and control of systems under performance conditions. In order to match dynamic requirements and take changing constraints into account, it is necessary to continually improve the understanding of the interactions between the various models, and to gather and integrate the various points of view, such as organization, software, etc. In this drive to keep improving performance, the exploitation of the complementarities between software engineering and production system engineering is a thoroughly necessary requirement.

Enterprise Interoperability as an Emerging Topic Related to These Complementarities

Enterprise interoperability is a topic currently emerging at the confluence of software engineering and production system engineering. It is a topic of considerable and growing scientific and technical research, fundamentally because of the considerations presented above. Worldwide, the competitiveness of enterprises, including SMEs, will in future strongly depend on their ability to develop and implement massively and rapidly networked dynamic organisations. New technologies for interoperability within and between enterprises will have to emerge to radically solve the recurrent difficulties encountered - largely due to the lack of conceptual approaches - in structuring and interlinking enterprises' systems (information, production, decision) (European Commission, 2003b). Today, research on the interoperability of enterprise applications does not yet exist as a discipline in its own right.


As a result of the IST Thematic Network IDEAS (Baan, 2003), the roadmap for interoperability research emphasises the need for integrating three key thematic components, shown in Figure 1:

• software architectures and enabling technologies, to provide implementation solutions
• enterprise modelling, to define interoperability requirements and support solution implementation
• ontology, to identify interoperability semantics in the enterprise.

Interoperability is seen as the ability of a system or product to work with other systems or products without special effort on the part of the user or customer (Baan, 2003). The ISO 16100 standard (2002) defines manufacturing software interoperability as the ability to share and exchange information using common syntax and semantics to meet an application-specific functional relationship through the use of a common interface. Interoperability in enterprise applications can more simply be defined as the ability of enterprise software and applications to interact usefully. Interoperability is considered to be achieved if the interaction can take place at, at least, three levels - data, application and business enterprise - through the architecture of the enterprise model and taking semantics into account, as shown in Figure 2.

Figure 1. Three key thematic components and their integration (Baan, 2003; European Commission, 2003b)


Figure 2. The three levels of interoperability (European Commission, 2003a)







At the beginning of the 2000s, research in the interoperability domain in Europe was badly structured, fragmented, and sometimes overlapped unnecessarily. There was no unified, consistent vision and no co-ordination between the various European research centres, university laboratories and other bodies. This was the case not only for pure research, but also for training and education. To improve this situation, two important initiatives were launched by the European Commission: the Interop Network of Excellence and the Athena Integrated Project (European Commission, 2003a; 2003b).

The Interop Network of Excellence and the Athena Integrated Project

Interop NoE was a Network of Excellence (47 organizations, 15 countries) supported by the European Commission for a three-year period (2003-2006) (European Commission, 2003b). It aimed to extract value from the sustainable integration of the thematic components above and to develop new industrially significant knowledge. Interop's role was to create the conditions for a technological breakthrough, to avoid enterprise investment being simply pulled along by the incremental evolution of commercially available IT. Consequently, Interop's joint programme of activities aimed to:

• integrate the knowledge in ontology, enterprise modelling, and architectures to give sustainable sense to interoperability
• structure the European research community and influence organisations' programmes to achieve critical research mass
• animate the community and spread industrially significant research knowledge outside the network.

In more detail, the joint research activities were composed of the following work packages:

• enterprise modelling and unified enterprise modelling language (UEML): unifying for interoperability and integration
• ontologies for interoperability
• domain architecture and platforms
• domain interoperability
• synchronization of models for interoperability
• model driven interoperability
• model morphisms
• semantic enrichment of enterprise modelling, architectures and platforms
• business/IT alignment
• methods, requirements and method engineering for interoperability
• interoperability challenges of trust, confidence/security
• services/take-up towards SMEs.

Athena (Advanced Technologies for Interoperability of Heterogeneous Enterprise Networks and their Applications) was an Integrated Project, also supported by the European Commission for the three-year period (2003-2006) (European Commission, 2003a). Its objective was to be the most comprehensive and systematic European research initiative in the field of enterprise application interoperability, removing barriers to the exchange of information within and between organizations.


It would perform research and apply results in numerous industrial sectors, cultivating and promoting the networked business culture. Research and development work was carried out hand in hand with activities conceived to give sustainability and community relevance to the work done. Research was guided by business requirements defined by a broad range of industrial sectors and integrated into piloting and training. Athena would be a source of technical innovations leading to prototypes, technical specifications, guidelines and best practices, trailblazing new knowledge in this field. It would mobilize a critical mass of interoperability stakeholders and lay the foundation for a permanent, world-class hub for interoperability. Projects running within Athena were organized in three action lines in which the activities would take place. The research and development activities were carried out in action line A; action line B would take care of community building, while action line C would host all management activities (European Commission, 2003a). Concerning the R&D action line, six projects were initially defined as follows:

• enterprise modelling in the context of collaborative enterprises (A1)
• cross-organisational business processes (A2)
• knowledge support and semantic mediation solutions (A3)
• interoperability framework and services for networked enterprises (A4)
• planned and customisable service-oriented architectures (A5)
• model-driven and adaptive interoperability architectures (A6).

Figure 3. Interaction of Athena action lines

Relations between the three action lines are shown in Figure 3. Interop NoE and Athena IP have strongly influenced and contributed to research and development on enterprise interoperability in Europe and beyond. Harbin Institute of Technology was also invited to participate in Interop NoE meetings and in the creation of the Interop Virtual Laboratory, which is considered one of the important achievements of this Network of Excellence.

The Interop Virtual Laboratory (Interop-VLab)

Interop-VLab, a sustainable European scientific organization, is the continuation of the Interop Network of Excellence.


It aims at federating and integrating current and future research laboratories, both academic and industrial, in order to fulfil objectives that a participating organization would not be able to achieve alone. It is supported by local institutions to promote interoperability in local industry and public administration. Interop-VLab's mission includes the following:

• Promoting the enterprise interoperability domain and acting as a reference: establishing a sustainable organization, at European level, to facilitate and integrate high-level research in the domain of enterprise interoperability and to be a reference for scientific and industrial, private and public organisations
• Contributing to the European Research Area: contributing to solving one of the main issues of the European Research Area - the high fragmentation of scientific initiatives - by synergistically mobilizing European research capacities, enabling the achievement of critical mass by aggregating resources to match major future research challenges that would not be possible for individual organisations
• Developing education and professional training: promoting and supporting initiatives of European higher education institutions in the domain
• Promoting innovation in industry and public services: facing the industrial challenge of creating networks and synergies, Interop-VLab aims to promote and support applied research initiatives addressing innovation and the reinforcement of interoperability between enterprises, at European, national and local levels; this approach will also help to create synergy between European, national and local research programmes.

Harbin Institute of Technology is the leading partner of the China Pole of Interop-VLab. The China Pole consists of ten important Chinese universities spread across China. Besides research-related projects, an Interop master's degree programme involving Interop-VLab members, including HIT and UB1, was also planned.

FORMAL MODEL OF THE UB1-HIT DUAL MASTER'S DEGREE CURRICULUM

This section presents the details of the dual UB1-HIT master's degree curriculum. Because this programme is built on two separate disciplines and carried out in two locations in two different countries, the main challenge to its success is the development of a deep mutual understanding of the curriculum implemented in each location, together with a close collaboration between the two teams, to avoid unnecessary redundancies and emphasize synergistic complementarities. To meet this objective, a detailed and explicit representation of the curricula was necessary. Usually, university training curricula are presented in textual form, often using tables. In general, the inter-relationships between the various courses and lectures tend not to be identified and/or explicitly described. Sometimes this creates difficulties for students in fully understanding the relationships between component courses and their logic, and consequently in mastering the overall knowledge that they need to acquire (Alix et al., 2009). Based on feedback from the students after three years of running on an experimental basis, it became necessary to present the overall curriculum of the master's degree programme in a more formal and explicit way, so that both students and teachers on both sides can have a clear and unambiguous understanding of the contents of the programme and of their roles within it. Therefore, the purpose of this section is to present the formal model of the UB1-HIT dual master's programme curriculum.


The Unified Modelling Language (UML) was chosen to model the lectures delivered in the two years and the possible relationships between them. Complementarities and potential future improvements are also discussed below.

Model of Year 1 Curriculum in HIT

This section describes the model of the Year 1 curriculum carried out at the Harbin Institute of Technology School of Software in China. The Year 1 training focuses on software engineering, information systems analysis and design, programming techniques and IT project management. The curriculum is mainly organized in three modules, as shown in Figure 4: Language; Science and Methodology; and IT Technique.

Figure 4. UML model of year 1 curriculum at HIT

In the Language module, there are two courses, English and French.

• English: Because all the courses of this joint master's programme are in English, a command of English is very important. The objective is to give students the ability to read and write reports and papers in English, and to communicate with professors fluently, both orally and aurally.
• French: This course teaches the Chinese students everyday French, which helps them adapt to daily life in France when they arrive.

objective is to give the students the ability to read and write reports/papers in English, and to communicate with professors fluently, orally and aurally, in English. French: This course aims to teach the Chinese students daily French, which can help them to adapt to French daily life when they arrive in France.

The Science and Methodology module aims to teach students how to carry out scientific research, how to analyse the objects in the universe and the relationships among them. This module contains two courses, dialectics and operational research. •



dialectics: This course is to teach students the resolution of disagreement through rational discussion and ultimately the search for truth. operational research: This shows how to use mathematical modelling, statistics, and algorithms to develop optimal solutions to solve complex problems, improve decisionmaking, and make process efficiencies, to finally achieve a management goal.
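A typical exercise of the kind treated in such an operational research course is a product-mix linear programme. The minimal sketch below uses SciPy with invented data (the profit coefficients and machine-hour limits are hypothetical); since linprog minimizes, the objective is negated to maximize profit.

```python
# Hypothetical product-mix problem: maximize 3x + 5y subject to
# x <= 4, 2y <= 12 and 3x + 2y <= 18 (invented machine-hour limits).
from scipy.optimize import linprog

result = linprog(
    c=[-3, -5],                       # negate to turn maximization into minimization
    A_ub=[[1, 0], [0, 2], [3, 2]],    # left-hand sides of the <= constraints
    b_ub=[4, 12, 18],                 # right-hand sides
    bounds=[(0, None), (0, None)],    # non-negative production quantities
)
print(result.x, -result.fun)          # optimal mix (2, 6) with profit 36
```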


The IT Technique module is the main part of the first-year study. It centres on software engineering, offering a series of IT technique courses, such as databases and Java programming, as well as a series of software management courses, such as software quality assurance and IT project management. In addition, the module includes a practical course, in order to put both IT and project management knowledge into practice.

• IT: this set of modules aims to teach students the skills of designing and implementing IT solutions for different kinds of firm. The modules are as follows.
◦ databases: this module focuses on how to use a relational database, including designing a proper entity relationship model (ERM), creating correct data views based on the ERM, querying data with structured query language (SQL), defining stored procedures, etc.
◦ algorithm analysis: this module is an important part of broader computational complexity theory, providing theoretical estimates of the resources needed by an algorithm to solve a given computational problem: it shows how to analyze an algorithm, how to determine the amount of resources (such as time and storage) necessary to execute it, and finally how to optimise the program.
◦ software architecture: this module shows how to analyze, design and simulate the structure or structures of a system - the software components, the externally visible properties of those components, and the relationships between them.



◦ Java programming: this module introduces one of the most popular programming languages: after completing this module, students should have the ability to implement an executable application and to learn other programming languages by themselves.
◦ object-oriented design and UML: the unified modelling language (UML) is a standardized general-purpose modelling language in the field of software engineering: it includes a set of graphical notation techniques for creating visual models of software-intensive systems; after this course, students should have the ability to use UML to design a proper software system model.
• Management: this set of modules contains lectures on the methodology of IT project management. It involves the following modules.
◦ software quality assurance (SQA): this topic covers the software engineering processes and methods used to monitor and ensure quality: it encompasses the entire software development process - software design, coding, source code control, code reviews, change management, configuration management, and release management.
◦ IT project management: this topic shows how to lay out the plan for an IT project, and how to realize it while anticipating and avoiding the risks of failure in IT project development: after this course, students should be able to use the methodology learned to reduce the cost of an IT project and make it as efficient and successful as possible.
◦ software development process management: this module gives more details of SQA and IT project management in the development phase of a project.




• Practical work: this module gives students a chance to put their knowledge into practice. Students are required to manage a full IT project by themselves, from requirements analysis and system model design to software implementation, testing, and deployment: after completing this module, students will have an overall understanding of software engineering.

Model of Year 2 Curriculum in UB1

This section presents the model of the Year 2 curriculum at the University of Bordeaux 1 in France. This training focuses on enterprise system engineering, and in particular enterprise modelling, production management, enterprise integration and interoperability.

The curriculum of year 2 is organised in five modules, as shown in Figure 5: MSI (industrial system modelling); ESI (industrial system management); MPI (industrial system performance); PRI (industrial system integration); and OPT (option: bibliographical research work).

Figure 5. UML model of year 2 curriculum at University of Bordeaux

The MSI module is mainly concerned with enterprise modelling and design. It starts with a lecture on system theory, laying down the fundamental concepts of the systemic view of the enterprise. Enterprise modelling then focuses on the GRAI (graphs of interlinked results and activities) and IDEF (integration definition) methodologies (IDEF0 function modelling, IDEF1 information modelling and IDEF3 process modelling).

The MOOGO (method for object-oriented business process optimization) process modelling tool, developed by the Fraunhofer Institute for Production Systems and Design Technology (IPK) in Berlin, and Petri net formal modelling are complementary to GRAI and IDEF. Productic (production science) is a lecture presenting the general problems and the state of the art of enterprise engineering. In parallel, design theory and innovation are presented to build understanding of the basic concepts and principles of enterprise system design. The ESI module focuses on production planning and control techniques, with the emphasis on the MRPII method. MRPII teaching is mainly organised around an extended case study (details are given below), including (a) paper exercises, (b) game-based simulation, and (c) computerisation using the Prélude software (Chen & Vallespir, 2009); a small sketch of the kind of calculation involved is given at the end of this subsection. Sales forecasting and inventory management methods (for example, the order point method) support both manufacturing resource planning (MRPII) implementation and supply chain management, which is another important lecture in this module. In addition, other recent methods, such as KANBAN based on JIT (just in time) and lean manufacturing, complement MRPII. In parallel, project management techniques such as the PERT (programme evaluation and review technique) method are also presented. The MPI module covers enterprise performance evaluation. Besides the Taguchi method and the reliability approach, which can be related to design issues in the earlier MSI module (as shown in Figure 5), a large part of the teaching is focused on quality concepts and methods. Benchmarking is also considered an important approach to improving the performance and quality of enterprise systems and products. Another lecture is concerned with problems and solutions for recycling, which is becoming more important in modern industrialised societies. Finally, a simulation-based game shows how to link the flows (physical, information) in an enterprise to performance (quality, delay), and how to act on the flows to improve performance. The PRI module is about enterprise integration and interoperability.

terprise architecture and framework modelling approaches, such as CIMOSA (computer integrated manufacturing open system architecture), PERA (Purdue enterprise reference architecture) and GERAM (generalised enterprise reference architecture and methodology). In parallel, basic concepts, framework and metrics for enterprise interoperability are also presented, because these are becoming significant new trends replacing traditional integration oriented projects. It is also noteworthy that teaching in this module is largely based on e-learning on the one hand and on the other, on seminars presented by well-known European experts in MDI (model driven interoperability), A&P (architecture & platform) for interoperability, and ontology for interoperability. Finally the OPT module was originally designed to be a slot for optional courses. For the time being it has only one option (bibliographical research work). The students are asked to choose a subject proposed by professors and perform a bibliographical research on this. This work is done by groups of two students. Each group must write a report, present the work and answer questions in front of a jury. This work is an initiation to research work and aims at developing the capability of students to carry out bibliographical research.

Complementarities and Possible Improvements

Relationships between the courses in years 1 and 2 are tentatively identified as indicated in Figure 6. Several types of relationship are defined as follows:

• "is a" relationship: for example, the IT project management lecture given in year 1 is a particular type of the (general) project management studied in year 2
• "part of" relationship: the software quality assurance lecture in year 1 is part of the more general quality course in year 2


Figure 6. Links between the courses of the two years

• "support" relationship: one course is used as a preparation or a means for another; for example, software-oriented design and UML are used to develop MDI and implement A&P in year 2. Enterprise modelling techniques can also be used to model users' requirements at a higher level of abstraction in software system design, for example at the computation independent model (CIM) level of the model-driven architecture (MDA) framework.

Several complementarities can be identified:


• At the global level, courses on computer science are complemented by training on enterprise and production systems. This allows HIT students to acquire supplementary knowledge to be better able to develop production-system-oriented software such as enterprise resource planning (ERP), customer relationship management (CRM), supply chain management (SCM) and others. On the other hand, UB1 students, who are more familiar with industrial systems, are empowered with software development skills.
• At a more detailed level, and from the modelling point of view, enterprise modelling (mainly at the conceptual level, focusing on global system modelling) is complementary to IT-oriented modelling. This is also true from the architecture perspective, where enterprise architecture needs to be detailed in IT architecture and IT architecture must also be consistent with enterprise architecture.
• Both years 1 and 2 deal with design issues. Design-related lectures in year 2 (design innovation, design theory, Taguchi, reliability, etc.) provide generic design concepts and principles complementary to the software design techniques learned in year 1.

At the course level, several potential improvements are envisaged:

• Better coordination of the project management courses of the two years is needed. A consistent framework is necessary to position each lecture and to show links and complementarities.
• More explicit relations between IT architecture and enterprise architecture must be defined, in particular the alignment between business and IT, and the consistent elaboration of IT architectures in relation to enterprise architecture.
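The three relationship types listed above ("is a", "part of", "support") lend themselves to a simple machine-readable representation, which is essentially what a formal curriculum model makes possible. The following minimal Python sketch is illustrative only: the course names and link instances are assumptions drawn from the examples above, not the programme's official UML model.

```python
# Minimal sketch of year 1 / year 2 course links as a typed graph.
# Course names and link instances are illustrative assumptions, not
# the programme's official UML curriculum model.
RELATION_TYPES = {"is_a", "part_of", "supports"}

links = [
    ("IT project management (Y1)", "is_a", "project management (Y2)"),
    ("software quality assurance (Y1)", "part_of", "quality (Y2)"),
    ("software-oriented design and UML (Y1)", "supports", "MDI and A&P (Y2)"),
]

def courses_linked_to(target):
    """Return every course that feeds into the given course or topic."""
    return [src for src, _, dst in links if dst == target]

for src, rel, dst in links:
    # A consistency check of the kind a formal curriculum model enables.
    assert rel in RELATION_TYPES, f"unknown relation type: {rel}"
    print(f"{src} --{rel}--> {dst}")

print(courses_linked_to("MDI and A&P (Y2)"))
```

Recording the links explicitly in this way makes checks for unknown relation types or dangling course names mechanical rather than manual.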

A UNIFIED MRPII TRAINING CASE STUDY

Professional training in universities on MRPII-based production planning and control techniques, and on their implementation, is one of the key issues in most production-related master's degree programmes in France. Quite often, MRPII-based education and training do not reach a satisfactory level in university curricula. There are several reasons for this. One is the lack of production and industry concepts and experience among most master's degree students. Another relates to the highly conceptual character of production planning and management methods, which require mastery of many abstract ideas, definitions and terms. The third is that the lectures, exercises and practical work on computers usually deal with separate, unrelated examples, case studies and illustrations. A unified common case study allowing students to learn, understand, analyse and practise MRPII-based production planning techniques is still elusive.

In this section, an innovative and experimental MRPII training project is presented. This project was first implemented in the master's degree programme in engineering, direction and performance of industrial systems (IPPSI) at the University of Bordeaux 1 during the academic year 2008-2009, and has been partly used on an experimental basis in the dual UB1-HIT master's programme. The characteristic of this project is to combine an MRPII game, enterprise modelling (the GRAI methodology) and software implementation within a single common case study. The objective is to provide students with a unified and consistent case study for learning MRPII-based production planning, from the fundamental concepts, through paper exercises and manual game simulation, to the implementation of an MRPII-based software system. After presenting the principles and broad organisation of the project, we show the various phases the students follow to learn MRPII-based production planning and control in a gradual and systematic manner. The experiences of the students, obtained through formal feedback, and possible improvements to the approach are also discussed.

Description of the Case

Turbix (Centre International de la Pédagogie d'Entreprise (CIPE), 2008b) is a small company that manufactures reduction gears referenced R1 to R8 (8 finished products). The reduction gears are composed of two types of parts: E1-E8, manufactured in the company, and P1-P5, purchased externally. The E1-E8 parts are manufactured from two types of raw material, M1 and M2. Figure 7 shows the structure of R3. Turbix is organised in two workshops: the machine shop, which manufactures the E parts, and the assembly shop, which manufactures the finished products (R). Masteel and Fournix are two suppliers providing the raw materials M and the purchased parts P, respectively. The overall organisation and physical flow are shown in Figure 8.

Figure 7. Example: R3 product structure (Centre International de la Pédagogie d'Entreprise (CIPE), 2008a)


Figure 8. Organisation and physical flow of Turbix (Centre International de la Pédagogie d’Entreprise (CIPE), 2008a)

Because of different customer lead times, R1 and R2 are produced according to sales forecasts established beforehand, while R3-R8 are manufactured against firm customer orders. E1-E8 and P1-P5 are manufactured and purchased according to the needs of R1-R8 production, and M1 and M2 are purchased according to the needs of E1-E8 production. On the basis of this physical organisation, the architecture of the production management implemented in Turbix is presented in Figure 9.

Figure 9. Turbix management architecture

First Component: The Manufacturing Resource Planning (MRPII) Game

The objective of the MRPII game (Centre International de la Pédagogie d'Entreprise (CIPE), 2008b) is to allow a group of participants to discover for themselves how the MRPII method works and what steps must be followed to implement MRPII software in a company. Using the game, participants plan the production and purchasing orders with the MRPII technique and simulate the execution of the planned orders through the various functions of the company (commercial service, manufacturing service, inventory/stocks, purchasing service, etc.). During the simulation, each participant takes a precisely defined role and responsibility. In detail, the game allows students:

• to understand the structure and functioning of the existing production system;
• to plan the master production schedule (MPS) for the finished products and draw up the material requirements planning (MRP) for parts E and P;
• to calculate load and perform load levelling; and
• finally, to simulate the functioning of the production system over a period of two months, all consistent with the management architecture in Figure 9.
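To make the planning steps concrete, the sketch below shows a gross-to-net requirements calculation of the kind the students perform, exploding a finished-product demand through a bill of materials. The product structure, stock levels and order quantity are hypothetical illustrations rather than the actual Turbix data (which belongs to the CIPE game material), and lead-time offsetting and lot sizing are deliberately omitted.

```python
# Sketch of the gross-to-net calculation the students perform by hand.
# The bill of materials, stock levels and order quantity below are
# hypothetical; the real Turbix data belongs to the CIPE game material.
# Lead-time offsetting and lot sizing are deliberately left out.
bom = {
    "R3": {"E2": 1, "E5": 1, "P1": 2},   # assumed structure for R3
    "E2": {"M1": 1},
    "E5": {"M2": 1},
}
on_hand = {"R3": 5, "E2": 10, "E5": 0, "P1": 20, "M1": 50, "M2": 50}

def net_requirements(item, gross, plan):
    """Net gross demand against stock, then explode it through the BOM."""
    net = max(0, gross - on_hand.get(item, 0))
    plan[item] = plan.get(item, 0) + net
    for child, qty_per in bom.get(item, {}).items():
        net_requirements(child, net * qty_per, plan)
    return plan

# A firm customer order for 30 units of R3:
print(net_requirements("R3", 30, {}))
```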

Second Component: The GRAI Methodology

The GRAI methodology (Vallespir & Doumeingts, 2006) was developed at the Department for Automation and Production Science/Graphs of Interlinked Results and Activities (LAPS/GRAI) of the Laboratory for the Integration of Materials in Systems (IMS) at the University of Bordeaux 1. This methodology sets out to model, analyse and design the decision-making sub-systems of a production management system. The method consists of:

• a conceptual reference model defining the set of fundamental concepts,
• modelling formalisms, and
• a structured approach.

The GRAI methodology is used in the project to model and analyse the existing production system of Turbix, to detect its potential inconsistencies and to design a new improved system.

Third Component: The Prélude Production MRPII Software

Prélude Production is MRPII-compliant software developed for professional training and teaching purposes (Centre International de la Pédagogie d'Entreprise (CIPE), 2008a). Its user-friendly interface allows students to learn how to manipulate MRPII software in a gradual way. This software is used in the project to computerise the production planning and management activities of the Turbix company. After the implementation of Prélude Production in the company, it is used to plan and control the daily production activities. It is also used together with the game to perform a simulation. Figure 10 shows the main functions of the Prélude Production software.

Figure 10. Main functions of Prélude Production software (Centre International de la Pédagogie d’Entreprise (CIPE), 2008a)


The Programme and the Implementation of the Project

Figure 11. Overall logic of the project

In this section, we present the programme for the project and its organisation and implementation. The project is carried out by the students over several months. Two groups of about 10 students each are formed. Figure 11 gives the overall logic of the project.

Initialisation Phase

To start the project, the objective, the organisation and timetable, and the expected results at the end of each phase are presented to the two groups of students.

Playing the Game

The next phase shows the students how to carry out the planning and simulation without an MRPII software tool. The objective is to give the students a better understanding of the basic concepts and techniques of the MRPII calculation and, at the same time, a thorough understanding of the existing Turbix system. The game is played over a day and a half. At the beginning, the students use the traditional inventory management technique (the order point method) to manage the Turbix system for a one-month (January) period. Then they are asked to migrate to the MRPII technique. Manual MRPII calculation is done to plan all the orders needed for the finished products (R1-R8) and the parts (E1-E8 and P1-P5). Load calculations on the four lines in the assembly workshop (the L1-L3 assembly lines and the L4 test line) are carried out in order to validate the master production schedule (MPS). The MRPII simulation is then run on a day-by-day basis over the following month (February), covering the production activities (purchasing, manufacturing and assembly) and the management activities (order release, production follow-up, inventory, order close-out, etc.).
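For comparison, the order point rule applied in the January phase reduces to a single calculation: reorder when stock falls to the expected demand over the replenishment lead time plus a safety stock. A minimal sketch, with illustrative parameter values:

```python
# The order point rule used in the January phase, in one line.
# Parameter values are illustrative only.
def order_point(daily_demand, lead_time_days, safety_stock):
    """Reorder when stock falls to lead-time demand plus safety stock."""
    return daily_demand * lead_time_days + safety_stock

print(order_point(daily_demand=12, lead_time_days=5, safety_stock=20))  # 80
```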


Existing System Analysis

After the MRPII game, the students are asked to analyse the functioning of the existing production system on the basis of the knowledge and experience gained during the game. The GRAI method, using the GRAI grid and nets, is used to model the decision-making structure of the existing production management (PM) system. Based on the model of the existing system, GRAI rules can be applied to detect possible inconsistencies. If inconsistencies are found (for example, inappropriate decision horizon or period values), the students propose the corrections needed to improve the functioning of the existing system.
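One family of GRAI consistency rules concerns the nesting of decision horizons and periods across decision levels. The sketch below illustrates a check of that kind; the levels and the single rule shown are simplified assumptions for illustration and do not reproduce the full GRAI formalism.

```python
# Sketch of a GRAI-style horizon/period nesting check. The decision
# levels and the single rule below are simplified assumptions for
# illustration; they do not reproduce the full GRAI formalism.
levels = [
    {"name": "strategic",   "horizon_days": 360, "period_days": 90},
    {"name": "tactical",    "horizon_days": 90,  "period_days": 30},
    {"name": "operational", "horizon_days": 30,  "period_days": 7},
]

def check_nesting(levels):
    """Each level's horizon and period should exceed those of the level below."""
    issues = []
    for upper, lower in zip(levels, levels[1:]):
        if upper["horizon_days"] <= lower["horizon_days"]:
            issues.append(f"{upper['name']}: horizon not nested over {lower['name']}")
        if upper["period_days"] <= lower["period_days"]:
            issues.append(f"{upper['name']}: period not nested over {lower['name']}")
    return issues

print(check_nesting(levels) or "no horizon/period inconsistency detected")
```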


Simulation of Improved System

After the analysis and possible redesign of the production system in Turbix, the students play the game again. The game simulation is run on the new system, which implements the set of suggested corrections and modifications to the existing system. For example, one of the suggestions the students might propose is to adjust the value of the planning horizon at the MPS and MRP levels to allow improved co-ordination between them.

Implementation of Prélude Production Software

During this phase, the students are asked to computerise the production planning and control activities in Turbix using the Prélude Production software. For this task, the students are divided into small groups (2 students per group per computer). First, the students compile all the relevant technical data (bills of materials, routings, items and workstations) and put them in an appropriate form to be entered into the computer. Then a small scenario (using a number of sales forecasts and firm orders) is given to the students to allow them to test the Prélude Production implementation for Turbix. This tends to be a very interesting task, because the students need to find the errors they may have made during data collection.

MRPII Software-Based Simulation

After the validation of these test results, the simulation can begin. During the simulation, the students are asked to perform the same activities they did during the game, but this time using the MRPII software. This phase allows students to compare the two simulations: the game simulation without computer aid and the simulation with the MRPII software (Prélude Production).

DISCUSSION AND REMARKS

The experimentation carried out with the master's class of 2008 clearly showed both the students' interest and the feasibility of the project. The main added values of the project were found to be the following:







• The project allowed the students not only to learn MRPII concepts and techniques, but also to practise MRPII-based production control on a concrete, single case study. The students could evaluate and compare the problems, difficulties and benefits at the different stages of the project using the same case.
• The game, played before the computerisation stage, allowed the students to take an active part in the activities of the enterprise as if they were actors in the company, thus putting them in a situation similar to that of a real enterprise.
• The use of the GRAI method before computerisation allowed the detection of possible inconsistencies in the system. The benefits are to show the usefulness of enterprise modelling in improving company performance, and to computerise a re-engineered system after the correction of the inconsistencies.
• The project showed that the computerisation of production management is not only a matter of software. Before introducing an MRPII package in a company, it is necessary to analyse and re-engineer the existing system to make it consistent, to have the appropriate technical data, to define the most suitable parameters for the software, and so on.

This project has contributed to improving MRPII-based production management training courses in French universities by providing a unified case study framework which covers the various types of exercise (understanding fundamental concepts, paper-based MRPII planning and manual simulation, enterprise modelling/analysis and re-engineering, computerisation, and MRPII software-based simulation). One of the improvements planned for the near future is the reinforcement of the use of the GRAI methodology in the second phase of the project. It will also be necessary to investigate ways of extending the time horizon of the simulation (from two months to possibly six months or, preferably, one year). Extending the time horizon will allow simulation of a long-term production plan and the incorporation of some strategic production management decisions.

PRELIMINARY ACHIEVEMENTS AND ASSESSMENT

This section describes the results of the first two cohorts of students on the master's programme and the feedback received from them. The students are asked to give a personal overview and overall appreciation of the content of the programme, the difficulties encountered in studying and comprehending each year, and the benefits expected at the end of the programme. Finally, the students who have earned all the European Credit Transfer System (ECTS) credits at the end of the second year and are therefore eligible for the dual master's degrees are asked for a professional perspective on the advantages of the programme and degree.

The first cohort, class 2008, had thirteen students: twelve Chinese and one French. The second cohort, class 2009, had fourteen students in year 2: ten Chinese and four French. The third cohort, class 2010, has fifteen students in year 1: ten Chinese and five French. The relatively low, although growing, number of French students is probably because the predisposition to go abroad for study is weak in France, and the students who do go abroad tend to be pioneers. The employment opportunities for the graduates are in both manufacturing and service companies.


Graduates can become managers, more specifically production, quality or maintenance managers, R&D engineers and managers, consultants, and project coordinators and managers in the general domain of implementing enterprise software applications (such as ERP, SCM, PLM and many others) in large companies and in SMEs. If, as seems likely, the internship is a springboard to employment, another employment opportunity lies in research teams and projects in academic institutions. Indeed, in 2008 eight students did their final internship in an academic or research laboratory, three in France and five in other European countries. In 2009, ten students chose research internships, five in French laboratories and five in other European ones.

Survey of the Opinions of the Students

In December 2008, a questionnaire was sent to all the students of class 2008 and class 2009. The objective was to obtain an evaluation of the programme, taking into account the students' difficulties, the facilities provided, and their expectations before, during and after the programme, and to obtain feedback on the professional experience gained during the two periods of internship undertaken by the two cohorts. A simplified view of the questionnaire used is presented below.

Questionnaire Used

1. Position, name and address of the company or university?
2. Position of the internship activity (daily job) in the company?
3. Competencies before year 1, before year 2 and at the end of year 2?
4. Difficulties met and facilities provided during the first and second years of the programme?
5. Advantages and disadvantages offered/encountered in relation to the double competency, EM (enterprise modelling) and IT (information technology), of the programme?
6. Thoughts about the continuity between Harbin and Bordeaux?
7. In your daily job do you use the double competency (if not, which one do you use)? What advantages does IT/EM knowledge offer in your job?
8. Differences and similarities between the form and operation of the internship in Harbin and in Bordeaux?
9. Is the double competency an advantage in finding a job or PhD position?

Seven students of class 2008 replied: two of them were employed in private companies, three were PhD students, and two were looking for a job or further training opportunities. Twelve students of class 2009 replied.

Results from Class 2008

• Competencies: Before year 1 of the programme, most students (5) had low or only fundamental software programming skills, and some students (2) had no software domain knowledge, their backgrounds being principally mathematics and control theory and engineering respectively. At the end of the first year, almost all the students (6) had achieved competency in software engineering, especially software architecture, software development, Java and databases. At the end of the two years, most students believed they had acquired (i) knowledge about enterprise modelling and production management, (ii) knowledge about enterprise modelling methods like GRAI and IDEF, (iii) a deep understanding of SCM, quality assurance and performance measurement, and (iv) knowledge, through the bibliographical research work, of academic fields like ontology and interoperability. After the full programme, most of the students agreed that they had made progress in the English and French languages.
• Difficulties and facilities: During the first year of the programme, most problems came from language misunderstanding, which made some courses difficult to assimilate. Three students thought that the course load was heavy even though they had a good studying environment (2 of them lacked knowledge and experience in software engineering). In the second year of the programme, 4 of the 7 students who responded thought that topics such as interoperability and service-oriented architecture were too conceptual and difficult to comprehend. With insufficient background knowledge of practical enterprise cases, abstract models and the connections between those models are hard to understand.
• Double competency statement: By the end of the two years, students had acquired knowledge of both software development and enterprise modelling. They have a good knowledge of how IT works in the enterprise and also a good understanding of business processes, which can help them to find the right technology when they design an enterprise management system. In their daily jobs, five of the seven students use this double competency. IT knowledge is used directly and regularly by those employed in companies, while the PhD students use IT to implement programs to prove, analyse and demonstrate their research results. For those in employment, enterprise management knowledge supports their understanding of the framework and architecture of the issues they work on and supports the design of solutions in their daily work.
• Teaching specificities: The teaching in the IT domain tends to be considered more theoretical, while the teaching in the EM domain is considered more practical because of the game-based simulations and exercises, which can be seen as playing realistic roles. The enterprise games are also considered a useful tool to explore a particular context and have special value because most of these games tend to be team-oriented. In Harbin, the internship takes place at the same time as the course. Consequently, students have a complete project in which they use IT technology to carry it out. An advantage, according to the students, is that they can go deeper into detail by asking the teachers for information, but sometimes this becomes too closely detailed to form a proper overall view. In Bordeaux, the internship has a specific period and the subject in question is sometimes disconnected from the course, even if that subject deals with management. This requires more individual initiative and creativity, because the students can feel alone in confronting their problems even if they can ask their teacher. But it is considered a strong advantage that the students are totally immersed in the company.



Results from Class 2009

• Competencies: At the beginning of year 1, 9 of these students had competencies in software engineering: operating systems, data structures, databases, IT project management, software quality assurance and some popular development languages such as Java, C++ and .NET. One student had specialised in automatic control and another had knowledge linked to mechanical engineering and production management. Thus the competencies were much more diverse than in the previous cohort. Before year 2, most of the students (9) had improved their programming skills as software engineers. By then they had more experience in programming and project management, and knowledge of advanced databases, algorithms, software architecture and so on. They had also improved their communication skills, with a good level of French and fluent English. The other two students had acquired knowledge of programming using Java, database design, and IT project management. At the end of semester 4, all the students had gained knowledge in enterprise computing and engineering, including production management, enterprise modelling, and quality management.
• Difficulties and facilities: Like the previous cohort, the first difficulty cited is language. The second arises from the fact that the students are not au fait with the production environment, so concepts relating to an enterprise are difficult to comprehend: the concept of interoperability, for example, is understood, but the finer details are not; and while the model-driven architecture and enterprise modelling methods are readily learned, the lack of experience makes their use far from obvious. All the students complained about the schedule of the course, with too many courses planned in too short a period and too many different types of knowledge to be learned in different areas/domains.
• Double competency statement: Despite the difficulties, students agreed that they had acquired a double competency. Not only did they know how to program, but they also understood how an enterprise works using IT technologies. The background of one domain was felt to be a great help when working in the other. The dual competency provides more choices for a future career. Even if it is not easy to re-orient one's mind from the software view to the enterprise view, they were confident that they would be able to bring these views together in the future.




• Teaching specificities: In Bordeaux there are more game-based training exercises, as opposed to the programming practicals in Harbin. As regards the internship, the students did not find major differences between Harbin and Bordeaux. In the first year, the goal was to develop software systems, and students worked directly from the analysis, then designed the system and wrote the code. In the second year, students needed to read material about the production system to gain a holistic understanding of the subject.

Remarks

The dual master's degree programme represents a real challenge for all the responding students because of the demanding multidisciplinary and cross-domain training over the two years. The students also became very aware of the interests and needs of companies, which are very close to the topics and subjects dealt with in the programme.

Needs Expressed During the Internship (the So-Called M2)

This section analyses the year 2 internships of class 2008, because the internships of class 2009 were still in progress. As mentioned earlier, in 2008 five students did their internship in private companies. One internship topic concerned pure management issues, while all the others combined IT problems with the use of enterprise modelling methods to analyse and model enterprise systems.

Topic Relative to Management Only

A well-known large company in the construction materials domain had proposed a study on pricing strategy, because the market is becoming more and more competitive and the pricing strategy must be adjusted to take into account product turnover, life cycle phase and other dynamic variables. The study focused on the analysis and comparison of the commonly used pricing strategies: premium pricing, value pricing, cost-plus pricing, competitive pricing and penetration pricing. The internship project demonstrated that the value strategy was the best strategy for new and high-end products, but that for all other products the competitive strategy was the best one. This conclusion has enabled the company to improve customer loyalty, keep market share and make the expected profit (Jia, 2008).

Internship Studies Involving Combined Topics

One study analysed the possibilities of applying data mining techniques to cross-selling in order to increase the overall sales of a company specialising in construction materials. The study elaborated a process methodology based on data mining software and described the way to build mining models for cross-selling analysis. The student described how to write associative prediction queries and integrate these queries into a web cross-selling application, and then discussed the architecture of a web application with data mining predictions (Li, 2008).

Another subject, proposed by the same company, concerned the exchange of data between the servers of the 120 commercial agencies which constitute the company. The objectives of the company were to (i) find a solution which could monitor the servers, (ii) analyse their performance, and (iii) predict potential problems and inform the system administrator in advance. Furthermore, the company needed software to help the administrator in his daily work: verifying the backup machine, checking the working situation of the servers, and other tasks. The mission of the student was to choose a suitable solution to satisfy the company's needs and then design and implement the architecture on the existing system (Yang, 2008).

A third combined-topics study related to a small company specialising in internet search engines. The main challenge for the company was to offer internet users relevant information about an enterprise, a product or a service. For this task the search engine limits the referencing to the web site of the enterprise in order to provide consistent and precise information; blogs, personal pages and forum pages are avoided. The student participated fully in the whole project, from the requirements analysis phase to the development phase, including learning and using specific languages, technologies, etc. The student acted as a contributor to the project but also as project manager during the development phase (Wang, 2008).

The fourth internship was carried out in a large French multinational company, in its Oracle projects division. The student worked on ERP technology, taking into account the requirements of a specific customer, a public-sector administration; the company maintains the Oracle IT system for that administration. The student worked on the purchase order process from the demand through to invoice payment, including orders and receipts. This allowed him to study the complete acquisition workflow, and to learn some new concepts of finance and accounting, as well as new concepts and ontology in the financial area (Fausser, 2008).
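At the heart of such a cross-selling analysis is the computation of association measures such as support and confidence over sales transactions. The sketch below illustrates that idea with invented transactions; it does not reproduce the data mining software or the prediction queries actually used in the internship project.

```python
# Illustrative support/confidence calculation behind cross-selling rules.
# The transactions are invented; this does not reproduce the tooling or
# prediction queries used in the internship project.
from collections import Counter
from itertools import combinations

transactions = [
    {"cement", "sand", "gravel"},
    {"cement", "sand"},
    {"cement", "bricks"},
    {"sand", "gravel"},
]

item_counts = Counter()
pair_counts = Counter()
for basket in transactions:
    item_counts.update(basket)
    pair_counts.update(combinations(sorted(basket), 2))

for (a, b), n in pair_counts.items():
    support = n / len(transactions)       # P(a and b)
    confidence = n / item_counts[a]       # P(b | a)
    if confidence >= 0.5:
        print(f"{a} -> {b}: support={support:.2f}, confidence={confidence:.2f}")
```

Rules whose confidence clears a threshold become candidate cross-selling suggestions: customers who buy item a are then offered item b.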

Analysis of the Topics by the Private Companies Involved

Information on the projects carried out by the students during their internships in private companies was collected through the reports they submitted at the end of year 2 of the programme. Three of the five subjects were proposed by a single enterprise. This indicates the difficulty of finding industrial internships in France, mainly due to the language barrier: French companies find the double competency of the students very interesting, but most are not prepared to integrate students who do not speak French. A second important conclusion is that most of the topics (4 out of 5) required the double competency, and in those cases the students successfully applied IT techniques to improve the performance of the companies concerned.


CONCLUSION

This chapter presents an international collaboration between the University of Bordeaux 1 and the Harbin Institute of Technology. This collaboration is characterised by the fact that it is based on:

• a long-term strategy of both institutions (UB1 and HIT) to develop sustainable cooperation in the domain of interoperability, which is considered a priority subject on both sides;
• two complementary sets of competencies: at UB1, enterprise modelling, interoperability and production system sciences; at HIT, computer science and software engineering. These competencies complement each other both in R&D and in this education programme;
• the combination of research activities and education/training, which allows benefits to flow from the latest advances in research on enterprise software application interoperability (such as the European Union R&D projects Athena, Interop and others).

This collaboration model has considerable potential to be duplicated and extended to other universities and other countries. The formal UML model used to represent the joint master's programme curriculum allows explicit identification of all the elementary lectures and of the possible relationships between lectures and modules. We believe that this formal modelling approach can help students to better understand the training curriculum and lead to an improved quality of education. Furthermore, it also allows the teachers involved to check the overall consistency of the curriculum, to better coordinate and organise their lectures, to avoid unnecessary redundancies and overlapping coverage, to introduce possible contractions, and to bring out synergies and complementarities.

The feedback on the students' experience of the dual master's degree programme shows that it responds to real business needs and concerns. Even with the language barrier, more and more companies are becoming interested in students with the double competency. As for the students, even though the programme is difficult to assimilate over the two years because of its breadth and density, they are satisfied at the end because they have come to understand the crucial impact of IT on enterprise performance.

ACKNOWLEDGMENT

The authors thank Zhiying Tu and Zhenzhen Jia (University of Bordeaux 1) for their contribution to this chapter.

REFERENCES

Alix, T., Jia, Z., & Chen, D. (2009). Return on experience of a joint master programme on enterprise software and production systems. In B. Wu & J.-P. Bourrières (Eds.), Educate adaptive talents for IT applications in enterprises and interoperability: Proceedings of the 5th China-Europe International Symposium on Software Industry Oriented Education (pp. 27-36). Talence, France: University of Bordeaux.

Baan, A. Z. (2003, March). IDEAS roadmap for e-business interoperability: Interoperability development for enterprise application and software – roadmaps (IST-2001-37368). Paper presented at the e-Government Interoperability Workshop, Brussels, Belgium.

Centre International de la Pédagogie d'Entreprise (CIPE). (2008b). Jeu de la GPAO (gestion de la production assistée par ordinateur). Retrieved October 30, 2010, from http://www.cipe.fr

Centre International de la Pédagogie d'Entreprise (CIPE). (2008a). Manufacturing resources planning software package. Retrieved October 30, 2010, from http://www.cipe.fr

Chen, D., & Vallespir, B. (2009). MRPII learning project based on a unified common case-study: Simulation, reengineering and computerization. In B. Wu & J.-P. Bourrières (Eds.), Educate adaptive talents for IT applications in enterprises and interoperability: Proceedings of the 5th China-Europe International Symposium on Software Industry Oriented Education (pp. 233-240). Bordeaux, France: University of Bordeaux.

Chen, D., Vallespir, B., & Bourrières, J.-P. (2007). Research and education in software engineering and production systems: A double complementary perspectives. In B. Wu, B. MacNamee, X. Xu, & W. Guo (Eds.), Proceedings of the 3rd China-Europe International Symposium on Software Industry-Oriented Education (pp. 145-150). Dublin, Ireland: Blackhall.

Chen, D., Vallespir, B., Tu, Z., & Bourrières, J.-P. (2010, May). Towards a formal model of UB1-HIT joint master curriculum. Paper presented at the 6th China-Europe International Symposium on Software Industry Oriented Education, Xi'an, China.

European Commission. (2003a). ATHENA – Advanced Technologies for Interoperability of Heterogeneous Enterprise Networks and their Applications: Integrated project proposal. European 6th Framework Programme for Research & Development (FP6-2002-IST-1). Brussels, Belgium: European Commission.

European Commission. (2003b). INTEROP – Interoperability research for networked enterprises, applications and software, network of excellence, proposal part B. European 6th Framework Programme for Research & Development. Brussels, Belgium: European Commission.


Fausser, J. (2008). Maintenance of Oracle e-business suite V11 (Internal report of M2 internship). Talence, France: University of Bordeaux.

International Organization for Standardization. (2000). Manufacturing software capability profiling – Part 1: Framework for interoperability (ISO DIS 16100, ISO TC/184/SC5, ICS 25.040.01).

Jia, Z. (2008). Pricing strategy for Point P based on analysis and comparison of commonly used pricing strategy (Internal report of M2 internship). Talence, France: University of Bordeaux.

Li, Z. (2008). Data mining applied in cross-selling (Internal report of M2 internship). Talence, France: University of Bordeaux.

Vallespir, B., & Doumeingts, G. (2006). The GRAI (graphs of interlinked results and activities) method. Talence, France: University of Bordeaux 1, Interop Network of Excellence project tutorial.

Wang, Y. (2008). Improvement and extension of professional search engine (Internal report of M2 internship). Talence, France: University of Bordeaux.

Yang, W. (2008). Monitoring work situation of servers in Saint-Gobain Point P (Internal report of M2 internship). Talence, France: University of Bordeaux.

KEY TERMS AND DEFINITIONS

Dual Master's Degree: A master's degree programme involving at least two universities/institutions from two different countries, allowing students to obtain two degrees, one from each institution.

Enterprise Modelling: Representing the enterprise in terms of its structure, organisation and operations according to various points of view (technical, economic, social and human).

Interoperability: A property referring to the ability of diverse systems and organizations to work together (inter-operate). The term is often used in a technical systems engineering sense, or alternatively in a broad sense that takes into account the social, political and organizational factors that impact system performance.

Production Management: A set of techniques for planning, implementing and controlling industrial production processes to ensure smooth and efficient operation. Production management techniques are used in both manufacturing and service industries.

Software Engineering: A profession dedicated to designing, implementing, and modifying software so that it is of higher quality, more affordable, maintainable, and faster to build.

This work was previously published in Software Industry-Oriented Education Practices and Curriculum Development: Experiences and Lessons, edited by Matthew Hussey, Bing Wu and Xiaofei Xu, pp. 57-81, copyright 2011 by Engineering Science Reference (an imprint of IGI Global).


Section 5

Organizational and Social Implications

This section includes a wide range of research pertaining to the social and behavioral impact of Industrial Engineering around the world. Chapters introducing this section critically analyze and discuss trends in Industrial Engineering, such as participation, attitudes, and organizational change. Additional chapters included in this section look at process innovation and group decision making. Also investigating a concern within the field of Industrial Engineering is research which discusses the effect of customer power on Industrial Engineering. With 13 chapters, the discussions presented in this section offer research into the integration of global Industrial Engineering as well as implementation of ethical and workflow considerations for all organizations.


Chapter 56

Process Innovation with Ambient Intelligence (AmI) Technologies in Manufacturing SMEs:

Absorptive Capacity Limitations

Kathryn J. Hayes
University of Western Sydney, Australia

Ross Chapman
Deakin University, Melbourne, Australia

ABSTRACT

This chapter considers the potential for absorptive capacity limitations to prevent SME manufacturers from benefiting from the implementation of Ambient Intelligence (AmI) technologies. The chapter also examines the role of intermediary organisations in alleviating these absorptive capacity constraints. To establish the context of the research, it reviews the role of SMEs in the Australian manufacturing industry, along with the impacts of government innovation policy and of absorptive capacity constraints on SMEs in Australia. Advances in the development of ICT industry standards, and the proliferation of software and support for the Windows/Intel platform, have brought technology to SMEs without the need for bespoke development. The results from the joint European and Australian AmI-4-SME projects suggest that SMEs can successfully use "external research sub-units", in the form of industry networks, research organisations and technology providers, to offset internal absorptive capacity limitations.

DOI: 10.4018/978-1-4666-1945-6.ch056



INTRODUCTION

Through case study research, this chapter discusses some of the challenges Small and Medium Enterprises (SMEs) in the manufacturing sector face in identifying and adopting Ambient Intelligence (AmI) technologies to improve their operations. Ambient Intelligence technologies are also known as pervasive computing or ubiquitous computing, and we include the descriptions of these terms when we refer to AmI technologies. Our study includes case studies of three Australian SMEs and a comparison with similar application requirements in a German SME manufacturer. The outcomes of the study are likely to be applicable to small firms in many nations.

The 1980s and 90s saw the operations of many large manufacturers revolutionised by the introduction of process and technological innovations (Gunasekaran & Yusuf, 2002). While there have been uneven adoption rates in smaller businesses and across different nations (Chong & Pervan, 2007; Oyelaran-Oyeyinka & Lal, 2006), it is clear that technological innovations such as Electronic Data Interchange, Business Process Re-engineering, Enterprise Resource Planning and robotic automation, amongst others, have played key roles in increasing manufacturing productivity. At the beginning of the twenty-first century this transformation continues. Ambient Intelligence (AmI) technologies are being positioned as the next performance- and productivity-enhancing purchase for manufacturers, and a potential means for manufacturers in developed nations to counter perceived threats from lower labour-cost countries (Kuehnle, 2007).

Thus, the key objectives of this chapter are to consider potential applications of AmI technologies in Australian SME manufacturers, and to discuss the opportunities and shared challenges faced by such firms in adopting these technologies. In doing this we compare different levels of absorptive capacity and technological readiness in Australian firms, seeking possible reasons for similarities and differences in their comparative technology adoption processes. The chapter also examines the role of intermediary organisations in alleviating absorptive capacity constraints. Our overarching research question is: "Can external intermediaries overcome absorptive capacity limitations in SMEs seeking process innovation through the application of AmI technologies?" In order to understand the issues surrounding this problem, a brief overview of ICT (Information and Communication Technologies) adoption in manufacturing and an explanation of Ambient Intelligence (AmI) technologies are provided in the following section. We then examine the role of SMEs in the Australian manufacturing industry, and the impacts of government innovation policy and absorptive capacity constraints on SMEs in Australia.

BACKGROUND

ICT Adoption for Business Performance Improvement

Brown and Bessant (2003) described the global manufacturing environment developing in this new century as an increasingly competitive landscape, characterised by on-going demands for improved flexibility, delivery speed and innovation. A frequently occurring element in manufacturers' responses to these pressures is the implementation of increasingly sophisticated ICTs. The benefits of incorporating ICTs for business responsiveness have been identified as: more effective and more efficient information flows; assistance in value-adding improvements to current processes; greater access to efficiency-enhancing innovations throughout the value chain (Australian Productivity Commission, 2004); and the ability to access world markets through e-commerce (Kinder, 2002).

ICT adoption has been considered worth the risk, given the competitive pressures on business to keep pace with technology. In Australia, for example, the uptake of ICTs increased dramatically towards the latter part of the 90s and into the 21st century. Reports show that in 1993-94, 50 per cent of firms used computers, with 30 per cent having internet access; by 2000-01 these figures had increased to 85 per cent and 70 per cent respectively (Australian Productivity Commission, 2004). Recent figures (Australian Bureau of Statistics, 2009) reveal that almost all Australian SMEs use ICTs, and 96% of them access the internet through a broadband connection.

One of the latest developments in the application of ICTs to business improvement is that of Ambient Intelligence (AmI) technologies. The objective of AmI is to broaden and improve the interaction between human beings and digital technology through the use of ubiquitous computing devices. By using a wider range of simplified interaction devices, ensuring more effective communication between devices (particularly via wireless networks) and embedding technology into the work environment, AmI provides increased efficiency, more intuitive interaction with technology (Campos, Pina, & Neves-Silva, 2006) and improved value and productivity (Maurtua, Perez, Susperregi, Tubio, & Ibarguren, 2006).

Ambient Intelligence (AmI) Technologies

Existing literature (Kopacsi, Kovacs, Anufriev, & Michelini, 2007; Li, Feng, Zhou, & Shi, 2009; Maurtua et al., 2006; Vasilakos, 2008; Weber, 2003) points to the co-existence of three features in any AmI technology: ubiquitous computing power, ubiquitous communication, and adaptive, human-centric interfaces. Regardless of arguments about terminology and definitions (the terms "pervasive computing" and "ubiquitous computing" are in common use in the US, while "ambient intelligence" is favoured in the EU), these technologies are already commonplace. The beep signalling the automatic deduction of a road toll from your account as your car passes under a toll gate is one aspect of an AmI technology known as Radio-Frequency Identification (RFID). RFID technology is having an impact in many industries, some of which are not normally associated with high levels of ICT adoption. For example, during 2006, in NSW alone (one of the 7 states and territories within mainland Australia), more than 1.2 million head of cattle were automatically tracked from farm to saleyard to abattoir as their RFID ear tags passed through RFID sensor gates (NSW Farmers Association, 2007).

In addition to increasing process speed and efficiency, AmI technologies have the potential to track employee and customer activity. While concerns about the impact of technology on power relations in the workplace are not new (Zuboff, 1988), the characteristics of AmI technologies present new challenges to worker privacy, informed consent and dignity. AmI technologies may, intentionally or not, dramatically increase employee surveillance and monitor consumer activity over the entire product life cycle. This potential, and the very nature of 'ubiquitous computing', raises important ethical issues. Proposals to use RFID tags to track sufferers of Alzheimer's disease (Caprio, 2005) and children provide examples of the ethical dilemmas AmI technologies can present. While these issues are beyond the scope of this chapter, we suggest Cochran et al. (2007) for a review of the ethical challenges associated with RFID.

Social factors associated with the introduction and implementation of AmI technologies may be exacerbated in small and medium businesses. In addition to concerns shared with corporate workers, such as disquiet about their personal data potentially being sold to marketing groups and anxiety about the security of the information gathered, members of small businesses are particularly exposed to AmI's ability to 'break the boundaries of work and home through their pervasiveness and "always on" nature' (Ellis, 2004, p. 8). While some profit-maximising small business owners welcomed this blurring of work and home boundaries, others did not, preferring to keep the work and family spheres separate. Ellis cautions that, in order to overcome existing negative preconceptions of AmI technologies, users must feel they control the devices and the data they produce, and must be able to override them and cope with system failures. In particular, AmI needs to be presented as "smarter" than existing ICTs and able to correct some of the problems associated with traditional forms of ICT support. In short, Ellis (2004, p. 9) asserts, "AmI needs to be associated with undoing some of the more problematic aspects of existing ICTs, to be accepted and not resisted as a more invasive, insidious and controlling form of what already exists."

Much of the promise of Ambient Intelligence (AmI) technologies rests upon connecting increasingly sophisticated and powerful sensors with existing computing facilities. McCullough (2001) identified the need to expand our thinking beyond the notion of filling environments with physical objects when considering Ambient Intelligence technologies. There is no longer such a thing as "empty space" when sensors and processing power combine to produce an environment that is "aware" of the locations, actions and information needs of humans. Clearly, the extent of an organisation's existing Information and Communications Technology (ICT) infrastructure will shape AmI technology implementations, providing either a "clean slate" from which to start or the opportunity to integrate new AmI capabilities with existing ICT systems and processes. In much the same way as the advent of mobile phones in China and India gave people unable to afford a landline access to telephonic services, AmI technologies may prove a way for SME organisations to "leap-frog" a stage of ICT implementation and move directly to wireless and similar AmI technologies.

Many other applications of AmI technologies are appearing as technologists extend the concept into areas such as "wearable technology" (clothing that incorporates sensors and interface devices), more intuitive home space designs, shopping assistance, and the creation of seamless interfaces between work, home and leisure activities. While many of these applications currently seem unrelated to improving business productivity, it is clear that the applications for business can only grow as the technologies become more sophisticated and less expensive. As Rao and Zimmerman (2005, p. 3) state, "there is a gap in the scholarly discussion addressing the business issues related to it, and the role of pervasive computing in driving business innovation". It is in this context that the following case studies of four small-to-medium (SME) manufacturers – three Australian and one German firm – have been undertaken. In each firm, critical process analysis was carried out to examine possible process weaknesses and existing ICT systems, and recommendations were made concerning a selection of AmI technologies with the potential to boost business performance.
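What the processing behind such RFID gate reads might look like can be sketched in a few lines. The event format (tag identifier, gate, timestamp) and the values below are assumptions for illustration, not the data model of any actual livestock-tracking system.

```python
# Minimal sketch of consuming RFID gate reads like those in the cattle
# example above. The event format (tag id, gate, ISO timestamp) is an
# assumption for illustration only.
from collections import defaultdict

reads = [
    ("tag-0042", "farm-gate",     "2006-03-01T08:00"),
    ("tag-0042", "saleyard-gate", "2006-03-03T10:30"),
    ("tag-0042", "abattoir-gate", "2006-03-05T07:15"),
]

history = defaultdict(list)
for tag, gate, ts in reads:
    history[tag].append((ts, gate))      # each read extends the trace

for tag, trace in history.items():
    # ISO-8601 timestamps sort chronologically as plain strings.
    print(tag, ":", " -> ".join(gate for _, gate in sorted(trace)))
```

The point of the sketch is that the "ambient" part of the technology is simply a stream of sensor events; the business value comes from aggregating those events into traces, alerts or inventory positions.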

AMBIENT INTELLIGENCE TECHNOLOGY IN MANUFACTURING

This section considers the applicability of several emerging AmI technologies to three SME manufacturers in New South Wales, Australia, and compares the situation within these SMEs with that of one German SME manufacturer undertaking a similar technological adoption. In doing this, the section also addresses questions about the preparedness of SMEs, particularly concerning their absorptive capacity limitations and how these may be overcome. Later sections also consider the potential impact of ambient technologies on the employees of the organisations studied.

AmI technology is much more than RFID inventory control systems. Wireless, multi-modal services and speech recognition systems have the potential to increase manufacturing flexibility by supporting the dynamic reconfiguration of process and assembly lines, and by improving human-machine interfaces to reduce process times (Maurtua et al., 2006). Also, maintenance and distribution processes may be improved by linking common mobile wireless devices, such as mobile phones, Personal Digital Assistants (PDAs) or even pagers, to production alert systems (Stokic, Kirchhoff, & Sundmaeker, 2006).

Small and Medium Manufacturers in Australia

Organisations with between 20 and 199 workers employ 56% of Australia's workforce (Wiesner, McDonald, & Banham, 2007). The Australian Bureau of Statistics (ABS) defines a small business as employing fewer than 20 people, and a medium enterprise as employing between 20 and 200 employees (ABS, 2001). The most recent ABS figures available (2007) for Australia indicate that there are around 47,000 manufacturing firms employing between 1 and 20 people, around 10,000 employing between 20 and 200 people, and only 873 employing over 200 people. In turnover terms, around 29,000 manufacturing firms reported annual turnover between $500,000 and $10 million, while only 3,300 firms reported turnover of $10 million or above. It is clear that the bulk of manufacturing in Australia occurs in small-to-medium firms.

While SME firms employ the majority of manufacturing workers, their expenditure on R&D notably lags behind that of large manufacturers. Within the manufacturing industry, companies with more than 200 employees were responsible for 73% of total industry R&D expenditure, with only 27% being contributed by the SME sector (ABS, 2007). However, in their exploration of the cost and impact of logistics technologies in US manufacturing firms, Germain, Droge and Daugherty (1994) found that for manufacturing managers wanting to innovate with logistics technology, organisational size provides an advantage that transcends both the cost and the nature of the technology. These authors confirmed what was, in 1994, the established view: that organisational size is positively correlated with technology adoption, as found in many previous studies. This link between manufacturing organisation size and increased ability to extract benefit from technological innovations may partly explain why, although Australia's manufacturing output has quadrupled since the mid 1950s, the Australian Government Productivity Commission (2003) states that overall it has not grown at the same rate as the service sector. The Productivity Commission also describes Australia's manufacturing sector as having "missed out on the productivity surge" of the mid 1990s, while noting signs of improved manufacturing productivity in 2002 and 2003. The widespread availability of off-the-shelf ICT systems probably means that a great many more SMEs are adopting ICTs today than in 1994; however, the limited resources (both financial and human) of many such firms almost certainly mean reduced awareness of, and limited capacity to exploit, the newer technologies commonly appearing in larger manufacturers. Given the significance of SMEs in Australian employment and the perceived need to increase manufacturing productivity, examination of the potential improvements available through the systemic application of AmI technology to SME manufacturers forms an important topic for research and government policy.


Absorptive Capacity

This chapter applies the concept of absorptive capacity to manufacturing SMEs. We argue that SMEs can benefit from AmI technologies by using specialised intermediary organisations to overcome the absorptive capacity limitations evident in many SME organisations. Cohen and Levinthal (1990) proposed that internal research and development activities serve two purposes: to generate innovations, and to provide the ability to absorb relevant knowledge appearing in the external environment. The absorptive capacity of a firm comprises these two categories of activity. Their foundational paper conceptualised absorptive capacity in the context of large U.S. manufacturers, as evidenced by their survey of identifiable “R&D lab managers” (Cohen & Levinthal, 1990, p. 142) and their discussion of “communication inefficiencies” between business units. But what of small and medium-sized manufacturers? Does the notion of absorptive capacity have relevance to SMEs outside narrow, industry-segment-specific technologies? If so, can external intermediaries assist SMEs to overcome absorptive capacity limitations regarding ambient technological innovations? The preparedness of SMEs to adopt and benefit from the mobile capabilities of AmI technologies is likely to be linked to their ability to overcome the limited absorptive capabilities associated with their small size. It is, however, possible for SMEs to successfully deploy AmI technologies. Clear evidence of the benefits produced by a range of AmI technologies has been observed by the researchers at Costa Logistics’ distribution centre in Western Sydney. Costa Logistics specialises in the distribution of fresh fruit and vegetables and achieves astounding levels of accuracy and throughput: staff average zero to three errors per million cartons picked in the warehouse, the daily inventory turn ratio is 100%, and over A$2 billion of product is handled annually on behalf of clients (Game-Lopata, 2008). The company was an SME when it started to use wrist-mounted bar-code scanners coupled with wireless communications. These successfully implemented AmI technologies are a key factor that has enabled Costa Logistics’ rapid expansion to well over 200 employees by late 2009.

Innovation, Manufacturers, SMEs and Government Policy

Several previous studies (Cutler, 2008; Philips, 1997) have concluded that innovative Australian firms of all sizes (both manufacturing and service-based) tend to be more successful in terms of sales growth and market share than non-innovating firms. In addition, the impact of innovation is considered to be cumulative (Chapman, Toner, Sloan, Caddy, & Turpin, 2008), with some level of innovative behaviour or research and development being required to equip a firm to identify, assess and adopt technologies. The innovativeness and absorptive capacity of SMEs are a matter of concern for other nations besides Australia. For example, in its 2008 budget, the UK government signalled its intention to set a goal for innovative SMEs to win 30% of its £150 billion public procurement spending (Kable’s Government Computing, 2008), equating to around $98 billion (AUD) of incentives to encourage UK SMEs to innovate. While similar incentives are yet to appear in Australia, there are clear signs of government interest in the ability of SMEs to innovate (Department of Innovation, Industry, Science and Research, 2008). There is a growing body of work in the innovation literature on the limited absorptive capacity of SMEs to identify relevant innovations, understand and appreciate possible applications, and finally adapt and implement innovation in their organisations (Beckett, 2008; Liao, Welsch, & Stoica, 2003; Muscio, 2008). Many points concerning “constraining factors” and “implementation challenges” support the notion that SMEs can experience organisational absorptive capacity limitations. Beckett (2008) identifies knowledge and resource constraints that impede the ability of SMEs to develop absorptive capacity, but also provides an example of how absorptive capacity is built when the outlays of time and money required match the SME’s available resources. While the benefits of AmI technologies are already accruing in large organisations (Angeles, 2005), if manufacturing SMEs are to benefit from AmI technologies, one challenge requiring attention will be their limited absorptive capacity for technological innovations. Our research considered the possibility of external intermediaries being used to facilitate SME manufacturers’ assessment of the application of AmI technologies for process innovation, thus overcoming, at least partially, the problems of limited absorptive capacity within the partner SMEs.


METHODOLOGY AND DATA COLLECTION

Case research has been used to review and compare the operations of the three NSW manufacturing SMEs, identify process weak points (in partnership with the SME executive managers) and suggest potential ambient intelligence technologies to assist each organisation. The data were gathered as part of an Australian government funded International Science Linkages project, which in turn was part of a larger European-Australian project on ambient intelligence in manufacturing: Ambient Intelligence Technology for Systemic Innovation in Manufacturing SMEs (AmI-4-SMEs). The European component of the AmI-4-SMEs project involved six SMEs, three research partners and three Information and Communications Technology (ICT) providers located in Germany, Ireland, Spain and Poland. The Australian AmI-4-SMEs project consists of six SMEs, two research partners (University of Melbourne and University of Western Sydney) and two ICT providers. Six SMEs were selected from those responding to a request for expressions of interest in participating in the study. Using the Australian Bureau of Statistics metrics (2001), all are classified as small-to-medium sized manufacturers and all are privately owned.

The EU AmI-4-SMEs project aimed to design and develop a coordinated approach for process innovation using ICT “building blocks” and a software platform to support the improvement of manufacturing processes in SMEs. These improvements were achieved by re-engineering processes and introducing appropriate ICT tools. The method used to analyse and re-engineer business processes is an extension of the COST-WORTH methodology (ATB Institute for Applied Systems Technology Bremen GmbH, 2004) and has three main phases: Analysis and Conception, Selection and Specification, and finally Implementation. Due to lower levels of government funding support, the Australian AmI-4-SMEs project performed only the Analysis and Conception phase and the initial aspects of the Selection and Specification phase, but not the Implementation phase. The links between these phases are shown in Figure 1.

Figure 1. Representation of the three-phase AmI-4-SME methodology (Source: Kirchhoff et al., 2006)

The Analysis and Conception phase produces an implementation plan for the proposed AmI solution. Analyses of each SME’s business processes and bottlenecks form the majority of this phase, which concludes with presentation of a business re-engineering recommendation and a return on investment analysis (Kirchhoff, Stokic, & Sundmaeker, 2006). One challenge of working with SMEs is to gather sufficient information without intruding to the extent that the organisation is adversely affected. Data were collected, and disruption to the SMEs minimised, using on-site interviews and observation, questionnaires (developed as part of the precursor COST-WORTH project; see Nousala, Ifandoudas, Terziovski, & Chapman, 2008), video recordings, a visit to a company (Costa Logistics) already using wireless, wearable and voice (Chang & Heng, 2006) technologies in its warehouse, and joint creation of process maps where they did not previously exist. Interview and questionnaire data were used to select important, problematic processes for each SME. On-site observations and video recordings were analysed to create “as-is” maps of the process selected for improvement and to identify key limitations of each process. AmI technologies with potential to improve the selected business process were identified, and the likely costs and benefits were reviewed with the SME executive managers. A strength (and simultaneous limitation) of this approach is that the time spent at each SME site is not extended or intensive. However, given the objectives of the Analysis and Conception phase of the AmI-4-SMEs project, and the need to minimise disruption to the operation of the manufacturing businesses, the methods are appropriate.

RESULTS

Overview of the Three Australian SMEs

The three Sydney-based SMEs represent a wide range of existing ICT complexity and skill, from “craft-work” factories to highly sophisticated manufacturers of technology. Although the organisations were not intentionally recruited to represent a typology of low, moderately and highly integrated ICT installations and skill sets, they do display these characteristics, and it is useful to consider how AmI technology capabilities may be added to each of these settings. Pseudonyms have been used for the SME companies. Ranked from low to high technological capability, they were: SwivelMould, a plastics manufacturer specialising in rotational moulding work; BottleTop, a manufacturer of plastic packaging closures; and TechMakers, a contract electronics component manufacturer. As described in the methodology section, due to lower funding levels, the Australian AmI-4-SMEs project did not fully incorporate the latter two phases of the project (see Figure 1), instead leaving it to the individual companies to complete these steps. The European project funding also included direct funding support for the SME partners, which was not provided under the Australian government support funds. However, the Analysis and Conception phase, and the initial aspects of the Selection and Specification phase, included a detailed initial assessment of each SME, identified possible AmI solutions and produced a rough implementation plan for the proposed AmI solution. Analyses of each SME’s business processes and bottlenecks formed the majority of this phase, and these were conducted on-site at the three NSW SMEs. Tables 1 and 2 present summary information and key issues related to technology adoption for each of the three SMEs from New South Wales, Australia.

Similarities and Differences between the Three Australian SME Manufacturers

Scalability issues related to specialised equipment appeared as a limitation shared by the three Australian SMEs. Production could not rapidly increase without the purchase of more machinery, and in the case of TechMakers, difficulty in arranging rapid importation of components also limited growth. Key differences observed in the three Australian SMEs relate to the proportion of standardised production and to organisational culture. Eighty per cent of BottleTop’s product volume results from standing contracts with its top twenty customers. Beyond the top twenty is what the Operations Manager refers to as a “long tail” of hundreds of small customers. In contrast, TechMakers survives in a shrinking market due to its flexibility, rapid prototyping turn-around and high-quality output. The CEO and Quality Assurance Manager concur that “every day is different.” SwivelMould’s operations sit between those of the other two SMEs. SwivelMould offers a full service from product concept through design to manufacture. As each type of product requires a specialised mould, batch runs are used in production. While safety equipment is in use at TechMakers and BottleTop, SwivelMould employees resist using standard safety equipment such as hard hats and hearing protection. Only foremen wear high-visibility vests. The Managing Director describes these actions as symptomatic of the “70s culture” that he is trying to change.


Table 1. Company summary

BottleTop. Turnover: $30M. Employees: 97. Profit: moving from break-even to profitable trading in 2008/09. Industry: manufacture of packaging closures. Outlook: growing in a shrinking market by taking market share from competitors; BottleTop has set an aggressive revenue growth target of $50M by 2011/12.

TechMakers. Turnover: $7M. Employees: 70. Profit: trading profitably. Industry: contract electronic manufacturing. Outlook: the industry is shrinking and work is increasingly being sent offshore; TechMakers is diversifying by selling the devices it created to assist in its contract electronic manufacturing activities.

SwivelMould. Turnover: $20M. Employees: 80. Profit: trading profitably, but recently burdened by a major bad debt. Industry: rotational moulding. Outlook: rapid growth during the drought through manufacture of water tanks.

Table 2. Key issues related to technology adoption

Staff
BottleTop: The company managers view their employees as loyal and, based on a long association, almost like family. Some staff members resent the profits they believe the company owners are making and draw unfavourable comparisons to their hourly wage rate. A profit-sharing scheme based upon reducing the scrap rate has been enthusiastically embraced by staff. BottleTop’s labour efficiency has increased from 76% to 92% in the last year, primarily attributable to the company’s process focus as it implements Six Sigma manufacturing techniques.
TechMakers: No staff-related comments or concerns appeared in interviews or observations.
SwivelMould: SwivelMould has a high employee turnover, as the work is repetitive, takes place in a hot and noisy environment, and does not pay high hourly rates. SwivelMould had difficulty recruiting and retaining employees while unemployment rates were low. New employees are only given formal training after they have completed a probationary period and if they appear likely to work at SwivelMould for some time.

Tracking of Products
BottleTop: Product tracking is highly automated; integrated links between moulding machines and software packages running on PCs provide a real-time view of activity.
TechMakers: The Quality Manager describes product tracking as “the black hole” because once a job commences no job status information is available until the end of the manufacturing run. This issue can be addressed by the organisation, as TechMakers employs a full-time in-house programmer. When time permits, the programmer will interface product tracking with the MRP system.
SwivelMould: Orders are received by fax and transcribed onto job sheets by hand. As jobs are completed on the shop floor, the quantity, colour and other details are confirmed by the foreman writing on the same sheet. Any variations in process are also handwritten on the sheet.

Process Improvements
BottleTop: BottleTop is using Six Sigma methods to improve quality, reduce product variability and reduce waste. The company is focussing on improving each process prior to automating it. As the Operations Manager states, “We want to have strong processes, we don’t want rubbish processes just being done more quickly.” When quality processes are in place, BottleTop’s focus will shift to automation and then to monitoring.
TechMakers: TechMakers’ processes are well developed and highly automated, with one exception: the work-in-progress system discussed under “Tracking of Products”. A more difficult process to address concerns the performance of the distributor that acts as the exclusive Australian agent for the US manufacturer of components used by TechMakers.
SwivelMould: SwivelMould has not mapped its processes. The process map built for one process as part of the AmI-4-SME project was the first time the company had used process mapping. Tradition and the knowledge of gang foremen are used to guide manufacturing processes.

Training Courses
BottleTop: A mix of in-house and external training is used at BottleTop. External training is used to provide Six Sigma training.
TechMakers: No comments were made in relation to training.
SwivelMould: SwivelMould has developed a competency-based training program in conjunction with an external training consultant. The objectives of the training are to increase productivity and quality and to produce a change in the organisational culture.

Growth
BottleTop: BottleTop managers are optimistic about their ability to grow revenue in a shrinking market. However, growing acceptance of closure systems made in countries with low labour rates may limit their growth.
TechMakers: TechMakers is the oldest and fourth-largest contract electronic manufacturer (CEM) in Australia. The CEO is content to remain the fourth-largest firm and claims there are advantages in not being the biggest player in the Australian market. The market for CEMs in Australia is shrinking as work moves offshore. While the Quality Assurance Manager believes the company is committed to increasing revenue, the CEO (who is the owner of the company) stated privately that his objective is to improve the profitability of operations.
SwivelMould: SwivelMould’s ambitious expansion plans are on track to deliver the anticipated growth in revenue. However, as rainwater tanks provide 55-60% of annual revenue, SwivelMould’s plans depend to some extent upon the maintenance of government rebate policies and continuation of Australia’s ten-year drought. During the course of the study the drought broke on the eastern seaboard, reducing demand for rainwater tanks. While newer entrants to the market have been unable to cope with the downturn in the tank business, SwivelMould’s Managing Director is confident of its ability to continue its expansion due to investments made in new products and its pursuit of new markets.

Overseas Operations
BottleTop: Sixty per cent of the company’s revenue is derived from importation and distribution activities. Logistics is important and problematic for the company, as products imported as finished goods create significant supply chain challenges. BottleTop is considering employing a logistics outsourcing firm to manage these challenges.
TechMakers: TechMakers is pursuing export opportunities for the technologies it has invented in-house. It does not intend to compete with CEMs located overseas.
SwivelMould: A reduction in demand for water tanks has occurred with the breaking of the drought in two cities (Sydney and Brisbane). To some extent this has been offset by increases in exports; the quantity of goods exported to the USA and China has trebled in the last year.

Documentation & Processes
BottleTop: Highly integrated with and automated by ICT systems.
TechMakers: Highly integrated with and automated by ICT systems.
SwivelMould: Rudimentary documentation, handwritten and physically carried between office and factory floor. Little integration with ICT systems.


AmI Technology Recommendations

The Australian SMEs

The wide range of technical skill levels in the three Australian SME manufacturers results in very few similarities in their technological capacities. The staff in two of the SMEs are happy with the opportunity to enhance their skills and are comfortable with process change. In all the SMEs, the arrival of industry-specific technology is seen as a form of “natural progression” from previous machines. In one SME (SwivelMould), the CEO has experienced difficulty in moving staff away from established skills and procedures.


The recommendations for the SMEs were tailored to the existing environment of each organisation in terms of industry segment and legacy ICT systems. In the case of SwivelMould, recommendations were made for AmI technologies that either link together existing “islands” of ICT equipment or provide quickly implemented, stand-alone solutions to environmental issues. For example, a recommendation was made to implement an integrated manufacturing system that takes advantage of the flexibility of wireless communications. As Kirchhoff, Stokic and Sundmaeker (2006) assert, if insufficient or poorly integrated ICT infrastructure exists, the first step towards obtaining the benefits of AmI technologies is to introduce ICT systems to support general manufacturing processes. Given the size of SwivelMould’s operations, it is likely that a standardised manufacturing package running in a PC/Windows, Unix or Linux operating environment will address this need. As a small example of the benefits available, a secure online order entry system would avoid the need to manually transfer order details, currently faxed to SwivelMould by its distributors, onto worksheets. The second organisation, TechMakers, presents a challenge, as its business is manufacturing electronic devices and sub-assemblies, effectively acting as an outsourced technology design and manufacture facility for its clients. TechMakers is well aware of AmI technologies and would already have incorporated them into its operations had it identified potential applications. Furthermore, its key business issues concern its contracting market and its dependency upon an intermediary distributor to order components from the USA. For TechMakers, the benefits available from AmI technology may lie in opportunities to design, manufacture or modify AmI systems for other companies. At BottleTop, an alert system based on AmI technologies in the form of wireless communications has the potential to improve productivity by decreasing the number and duration of production stoppages caused by machine parameters moving out of set tolerances, and to release personnel from repetitive inspection tasks for higher-value, more rewarding and interesting work. The following section compares the detailed findings from one Australian SME (BottleTop) with those from a German SME participating in the European AmI-4-SMEs project. Despite differences in industry and location, both these SMEs have very similar opportunities to address production issues using AmI technologies, pointing to the potential for standardised AmI-based solutions to improve SME manufacturing operations.

Comparison between European and Australian SME Manufacturers

German SME (Truckbody GmbH)

Truckbody GmbH claims market leadership in EU manufacture of truck swap bodies (steel-framed transport containers, and the legs on which they stand while awaiting transfer from truck to truck, or truck to rail), primarily intended for the EU domestic market. A key competence is the manufacture and powder coating of large structures, up to 15m long, such as bus frames. The company employs 330 people, which places it within the EU classification of SME organisations, in contrast to Australia, where an SME is classified as an organisation with between 20 and 200 employees. Truckbody’s production system is characterized by strong interdependencies between different task groups; a delay in one step impacts many other groups further down the production line. To reduce production delays, the EU AmI-4-SMEs project research and technology partners identified a need for automatic production alerts interfaced to the company’s planning system. The ATB Institute for Applied Systems Technology, based in Bremen, Germany, is the project leader of the EU AmI-4-SMEs project and is currently finalising the implementation of a rule engine and user interfaces on mobile devices. When problems occur in Truckbody’s production, employees who need to know about the disruption, such as the shift supervisor and the person with the skill to solve the problem, receive an automatically generated alert message. The alerts are based on user profiles (e.g. manager, foreman), the current location of the user (e.g. meeting, office, home) and the severity of the situation (e.g. deviation threshold, breakdown, loss). Use of a multi-modal user interface (specifically a wireless message sent to a mobile phone or PDA) leverages the capability of AmI technologies to provide timely alerts that are “pushed” to relevant employees regardless of their location. In this manner, the AmI technology provides immediate and mobile access to production information, warns of delays to the production line, and so supports reallocation of work and staff. Prototypes have been developed as part of the Selection and Specification phase of the study.
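To make the routing logic concrete, the short sketch below shows one way alerts of this kind might be dispatched according to user profile, location and event severity. It is a minimal illustration written for this chapter, not the ATB rule engine itself; the employee names, profiles, severity levels and routing rules are all invented for the example.

```python
# Illustrative sketch only (not the ATB implementation): route production
# alerts to employees based on profile, current location, and severity.
from dataclasses import dataclass

@dataclass
class Employee:
    name: str
    profile: str    # e.g. "manager", "foreman", "maintenance"
    location: str   # e.g. "office", "meeting", "home"
    device: str     # mobile phone or PDA address (hypothetical)

@dataclass
class ProductionEvent:
    station: str
    severity: str   # "deviation", "breakdown", or "loss"
    message: str

# Hypothetical routing rules: which profiles are told about which severities.
NOTIFY = {
    "deviation": {"foreman"},
    "breakdown": {"foreman", "maintenance"},
    "loss":      {"foreman", "maintenance", "manager"},
}

def dispatch(event: ProductionEvent, staff: list) -> list:
    """Return one push message per employee whose profile matches the rules.
    Off-site staff are only disturbed for the most severe events."""
    messages = []
    for emp in staff:
        if emp.profile not in NOTIFY[event.severity]:
            continue
        if emp.location == "home" and event.severity != "loss":
            continue  # do not disturb off-site staff for minor events
        messages.append(f"To {emp.device}: [{event.severity.upper()}] "
                        f"{event.station}: {event.message}")
    return messages

staff = [Employee("A. Schmidt", "foreman", "office", "+49-170-0000001"),
         Employee("B. Weber", "maintenance", "home", "+49-170-0000002")]
event = ProductionEvent("powder-coating line", "breakdown", "conveyor stopped")
for m in dispatch(event, staff):
    print(m)
```

In a deployed system the rule set would live in the rule engine's configuration rather than in code, so that profiles, locations and thresholds can be adjusted without redeployment.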

Australian SME (BottleTop Pty Ltd)

BottleTop produces a very different product from that of Truckbody GmbH. BottleTop manufactures specialty packaging, with particular strengths in the personal care, pharmaceutical, health foods, chemical, cleaning, food, beverage and cosmetics markets. Operating for sixty years from its single Sydney manufacturing site, it has built a strong sense of loyalty among its 97 employees and has extensive links to international fastening manufacturers. Although plastics manufacturing accounts for around 7% of all Australian manufacturing activity, the industry is quite mature (McCaffrey, 2006) and is shrinking at around 4% per year, mainly due to increased purchases from foreign injection moulding companies. BottleTop is growing in this shrinking market, winning market share from its competitors by focussing on quality, service, technology and relationships within and outside the organisation. The company plans to more than double its revenue by 2011/12. While the revenue goal is ambitious, BottleTop’s revenue grew by 12% in 2006, even after allowing for a 5% reduction in revenue from its existing customer base due to some customers moving their business to off-shore suppliers.

Discussions with BottleTop’s Production Management team identified the following AmI technology scenario as an attractive business concept. BottleTop’s moulding and assembly machines have in-built Programmable Logic Controllers (PLCs) which can monitor the six key variables that control the formation of the plastic closure. If a software program collects and monitors the PLC data, then whenever any of these six parameters moves outside pre-set limits, an SMS alert to a mobile phone or a pager message could be generated automatically and sent to on-site maintenance personnel. This offers several potential business benefits, including minimisation of machine downtime, reduction of defective, scrapped product, and a reduced need for visual inspection. Currently all the plastic fasteners are inspected by a human operator as they leave the machine. Previous attempts to use computers coupled to cameras to replace human visual inspection of parts leaving the injection moulding machines were not successful due to the cameras’ inability to cope with the reflective foil routinely used in BottleTop’s products. It is important to note that the company’s HR practices are likely to support the introduction of the proposed AmI solution. A bonus scheme rewarding operators for reducing the amount of defective caps produced from each machine has been enthusiastically embraced; operators have been heard to comment, “That’s my money on the floor” when the speed of the machine is set too fast and fasteners overshoot the hopper.
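A minimal sketch of this monitoring concept, under assumed names and limits, is shown below. The parameter names, tolerance values, phone number and both interface functions are invented for illustration; a production system would read the PLC registers through the machine vendor's protocol and forward alerts via a carrier SMS gateway.

```python
# Sketch of the alert concept described above, with assumed names and
# values. Both interface functions are stand-ins: a real system would
# read the PLC over the vendor's protocol and use an SMS gateway.
import random
import time

# Hypothetical pre-set limits for the six monitored moulding parameters.
LIMITS = {
    "barrel_temp_C":          (180.0, 220.0),
    "mould_temp_C":           (20.0, 45.0),
    "injection_pressure_bar": (800.0, 1200.0),
    "hold_pressure_bar":      (400.0, 700.0),
    "cycle_time_s":           (4.0, 9.0),
    "cushion_mm":             (2.0, 8.0),
}

def read_plc() -> dict:
    # Stand-in for the real PLC interface: returns simulated readings
    # that occasionally drift just outside their limits.
    return {name: random.uniform(low * 0.95, high * 1.05)
            for name, (low, high) in LIMITS.items()}

def send_sms(number: str, text: str) -> None:
    # Stand-in for a carrier SMS gateway.
    print(f"SMS to {number}: {text}")

def monitor(maintenance_number: str, cycles: int = 3,
            poll_seconds: float = 1.0) -> None:
    """Poll the PLC and raise an alert whenever a parameter leaves its limits."""
    for _ in range(cycles):
        for name, value in read_plc().items():
            low, high = LIMITS[name]
            if not low <= value <= high:
                send_sms(maintenance_number,
                         f"Machine alert: {name}={value:.1f} "
                         f"outside [{low}, {high}]")
        time.sleep(poll_seconds)

monitor("+61-400-000-000")
```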

The preceding comparison demonstrates that despite operating in unrelated industries in different countries, some SME manufacturing processes have sufficient commonality to permit the development of generic AmI solutions. Furthermore, the appearance of the same requirement in different manufacturing contexts shows that AmI technologies have the potential to be “general purpose” production enablers in diverse SME manufacturing settings. This in turn suggests the possibility that affordable “turn-key” AmI solutions may become available from technology providers. The next section considers the possibility of third-party technology providers tailoring generic AmI solutions to the specific requirements of each SME, thus overcoming the absorptive capacity limitations inherent in SMEs.

DISCUSSION AND DIRECTIONS FOR FUTURE RESEARCH

In Australian manufacturing SMEs, there is a very low likelihood of in-house R&D being used to build absorptive capacity to investigate AmI technologies. SMEs prefer to buy new technology when it is already embedded in an industry-specific product rather than master the details of the underlying innovation (Oyelaran-Oyeyinka & Lal, 2006). Instead, we propose that SMEs are more likely to use industry or informal networks to become aware of potentially useful innovations, and then “buy” the innovation embedded in capital equipment or consulting services as a means to ‘recognise the value of new information, assimilate it, and apply it to commercial ends’ (Cohen & Levinthal, 1990, p. 128). However, Cohen and Levinthal (1990) question the effectiveness of “buying” absorptive capacity in the form of consulting services or through acquisitions when the knowledge is to be integrated with existing business systems. They state: “To integrate certain classes of complex and sophisticated technological knowledge successfully into the firm’s activities, the firm requires an existing internal staff of technologists and scientists who are both competent in their fields and are familiar with the firm’s idiosyncratic needs, organizational procedures, routines, complementary capabilities, and extramural relationships” (Cohen & Levinthal, 1990, p. 135). Out-sourcing of deep absorptive capacity to equipment and software vendors able to provide “turn-key” solutions that match industry requirements seems to be a way for manufacturing SMEs to gain the commercial benefits of AmI technologies despite the resource and time constraints that prevent them from building absorptive capacity in any area other than their core business competence.

Similar requirements appear in two very different SMEs on two continents. The potential for the same AmI technology solution components to address these requirements, albeit tailored to the specifics of the equipment in use at each site, suggests that SMEs can benefit from AmI technologies by using specialised intermediary organisations to provide the “absorptive capacity” on their behalf. This finding points to potential links between absorptive capacity and “make vs. buy” decision-making, and to “broad” or “deep” versions of absorptive capacity (Henard & McFadyen, 2006), as avenues for future research. In addition, an opportunity exists to track the spread of AmI technologies in Australian SME manufacturers and, in doing so, contribute to the diffusion-of-innovation literature.

Additionally, AmI implementation challenges for Australian SME manufacturers extend beyond the boundaries of their own organisations. Large ICT manufacturers use a channel marketing approach to sell their products to the SME market segment. The channels may include retail and direct sales forces, but frequently hardware is “bundled” with service and software offerings from business partners specialising in a particular industry segment, such as manufacturing. While intermediary business partners may supply specialised knowledge and generic AmI solutions to compensate for limited SME absorptive capacity, the organisations that partner with large ICT providers are often SMEs themselves. The ability and willingness of these business partners to gain AmI skills may in turn be a limiting factor in the adoption of AmI technologies by Australian manufacturing SMEs. Absorptive capacity limitations of SME organisations can potentially affect uptake of AmI technologies at two points: within the manufacturing SME and within the SME technology partner. Low levels of in-house AmI skills and heavy reliance on Australian SME technology providers suggest there may be an argument for the provision of government subsidies to encourage the adoption of AmI technologies in Australian manufacturing. A precedent exists in that subsidies have been provided for the purchase of RFID scanners for NSW meat producers (NSW Farmers Association, 2007). Without some form of government encouragement, the task of integrating AmI systems with existing ICT investments, and the concomitant diversion from core manufacturing activities, may be enough to prevent the adoption of AmI technologies and, therefore, achievement of the elusive “productivity surge” in Australian manufacturing SMEs.

CONCLUSION

Advances in the development of ICT industry standards, and the proliferation of software and support for the Windows/Intel platform since Cohen and Levinthal’s 1990 paper, have brought technology to SMEs without the need for bespoke development. Furthermore, Cohen and Levinthal appear to assume that investments in absorptive capacity only exist in the form of R&D spending, rather than networking with other organisations to use the “connect and develop” models typical of Open Innovation (Chesbrough, Vanhaverbeke, & West, 2006). In contrast, the results from the EU and Australian AmI-4-SME projects (ATB Institute for Applied Systems Technology Bremen GmbH, 2008) suggest that SMEs can use “external research sub-units”, in the form of experiences reported by members of their industry network and trade associations, and solutions proposed by research and technology providers, to offset internal absorptive capacity limitations.

ACKNOWLEDGMENT

The authors would like to thank the SME participants in the Australian AmI-4-SME project for providing access to, and information about, their organisations. We would also like to thank our colleagues in Australia, Assoc. Prof. Mile Terziovski and Richard Ferrers at the University of Melbourne, Dr David Low at the University of Western Sydney, and our colleagues at the ATB Institute for Applied Systems Technology, Bremen, Germany, for their helpful advice and the opportunity to compare Australian and EU-based SMEs. The authors would also like to acknowledge the support of an International Science Linkages grant from the Australian Department of Innovation, Industry, Science and Research (Project Number CG110181), the University of Western Sydney and the University of Melbourne.

REFERENCES

Angeles, R. (2005). RFID Technologies: Supply-Chain Applications and Implementation Issues. Information Systems Management, 22(1), 51–65. doi:10.1201/1078/44912.22.1.20051201/85739.7

ATB Institute for Applied Systems Technology Bremen GmbH. (2004). COST-WORTH Methodology Independent Reference Scheme [Electronic version]. Retrieved April 22, 2008, from ATB Bremen GmbH.

ATB Institute for Applied Systems Technology Bremen GmbH. (2008). AMI-4-SME Platform. Retrieved from http://ami4sme.org/results/platform.php

Australian Bureau of Statistics. (2001). 1321.0 - Small Business in Australia. Retrieved April 21, 2008, from http://www.abs.gov.au/Ausstats/abs@.nsf/0/97452F3932F44031CA256C5B00027F19?Open

Australian Bureau of Statistics. (2007). 8104.0 - Research and Experimental Development, Businesses, Australia, 2005-06. Retrieved September 3, 2008, from http://www.abs.gov.au/AUSSTATS/abs@.nsf/Latestproducts/8104.0Main%20Features32005-06?opendocument&tabname=Summary&prodno=8104.0&issue=2005-06&num=&view=


Australian Bureau of Statistics. (2009). 8129.0 - Business Use of Information Technology, 2007-08. Retrieved September 17, 2009, from http://www.abs.gov.au/AUSSTATS/abs@.nsf/Latestproducts/29F02AF6A0F6C9A3CA2576170018FB56?opendocument

Australian Productivity Commission. (2003). Trends in Australian Manufacturing. Retrieved April 21, 2008, from http://www.pc.gov.au/research/commissionresearch/tiam/keypoints

Australian Productivity Commission. (2004). ICT Use and Productivity: A Synthesis from Studies of Australian Firms. Retrieved from http://www.pc.gov.au/research/commissionresearch/ictuse

Beckett, R. C. (2008). Utilizing and Adaptation of the Absorptive Capacity Concept in a Virtual Enterprise Context. International Journal of Production Research, 46(5), 1243–1252. doi:10.1080/00207540701224327

Brown, S., & Bessant, J. (2003). The Manufacturing Strategy-Capabilities Links in Mass Customisation and Agile Manufacturing - An Exploratory Study. International Journal of Operations & Production Management, 23(7), 707–730. doi:10.1108/01443570310481522

Campos, A., Pina, P., & Neves-Silva, R. (2006). Supporting Distributed Collaborative Work in Manufacturing Industry. IEEE Xplore. Retrieved February 22, 2010.

Caprio, D. W. J. (2005). Radio-Frequency Identification (RFID): Panorama of RFID Current Applications and Potential Economic Benefits. Paper presented at the Committee for Information, Computer and Communication Policy (ICCP) of the Organization for Economic Co-operation and Development (OECD). Retrieved April 14, 2008, from http://www.oecd.org/dataoecd/60/8/35465566.pdf


Chang, S. E., & Heng, M. S. H. (2006). An Empirical Study on Voice-Enabled Web Applications. Pervasive Computing, (July-September 2006), 76–81.

Chapman, R., Toner, P., Sloan, T., Caddy, I., & Turpin, T. (2008). Bridging the Barriers: A Study of Innovation in the NSW Manufacturing Sector. Sydney: University of Western Sydney.

Chesbrough, H. W., Vanhaverbeke, W., & West, J. (2006). Open Innovation: Researching a New Paradigm. Oxford: Oxford University Press.

Chong, S., & Pervan, G. (2007). Factors Influencing the Extent of Deployment of Electronic Commerce for Small- and Medium-Sized Enterprises. Journal of Electronic Commerce in Organizations, 5(1), 1–29.

Cochran, P. L., Tatikonda, M. V., & Magid, J. M. (2007). Radio Frequency Identification and the Ethics of Privacy. Organizational Dynamics, 36(2), 217–229. doi:10.1016/j.orgdyn.2007.03.008

Cohen, W. M., & Levinthal, D. A. (1990). Absorptive Capacity: A New Perspective on Learning and Innovation. Administrative Science Quarterly, 35(1), 128–152. doi:10.2307/2393553

Cutler, T. (2008). Venturous Australia: Building Strength in Innovation; Report of the Review of the National Innovation System in Australia. Melbourne: Cutler and Company.

Department of Innovation, Industry, Science and Research. (2008, February 19). Small Business Surveys - Finance and Banking, Innovation, and International Activity. Retrieved June 3, 2008, from http://www.innovation.gov.au/Section/SmallBusiness/Pages/SmallBusinessSurveysFinanceandBankingInnovationandInternationalActivity.aspx


Ellis, R. M. (2004). The Challenges of Work/Home Boundaries and User Perceptions for Ambient Intelligence [Electronic version]. Chimera Working Paper 2004-14. Colchester: University of Essex. Retrieved July 1, 2008, from http://www.essex.ac.uk/chimera/content/pubs/wps/CWP-2004-14-RE-e-Challenges.pdf

Game-Lopata, A. (2008). Exclusive: Secret Sauce [Electronic version]. Logistics. Retrieved September 18, 2009, from http://www.logisticsmagazine.com.au/Article/Exclusive-Secret-Sauce/172799.aspx

Germain, R., Droge, C., & Daugherty, P. J. (1994). A Cost and Impact Typology of Logistics Technology and the Effect of its Adoption on Organizational Practice. Journal of Business Logistics, 15(2), 227–248.

Gunasekaran, A., & Yusuf, Y. Y. (2002). Agile Manufacturing: A Taxonomy of Strategic and Technological Imperatives. International Journal of Production Research, 40(6), 1357–1385. doi:10.1080/00207540110118370

Henard, D. H., & McFadyen, M. A. (2006). R&D Knowledge is Power. Research Technology Management, 49(3), 41–47.

Kable’s Government Computing. (2008). Chancellor Sets 30% Target for SME Procurement [Electronic version]. KableNET. Retrieved April 21, 2008, from www.kablenet.com/kd.nsf/FrontpageRSS/23B6ED42820897968025740A004A2E25!OpenDocument

Kinder, T. (2002). Emerging E-commerce Business Models: An Analysis of Case Studies from West Lothian, Scotland. European Journal of Innovation Management, 5(3), 130–151. doi:10.1108/14601060210436718

Kirchhoff, U., Stokic, D., & Sundmaeker, H. (2006). AmI Technologies Based Business Improvement in Manufacturing SMEs. Paper presented at eChallenges e-2006. Retrieved from http://www.ami4sme.org/

Kopacsi, S., Kovacs, G., Anufriev, A., & Michelini, R. (2007). Ambient Intelligence as Enabling Technology for Modern Business Paradigms. Robotics and Computer-Integrated Manufacturing, 23(2), 242–256. doi:10.1016/j.rcim.2006.01.002

Kuehnle, H. (2007). Post Mass Production Paradigm (PMPP) Trajectories. Journal of Manufacturing Technology Management, 18(8), 1022–1037. doi:10.1108/17410380710828316

Li, X., Feng, L., Zhou, L., & Shi, Y. (2009). Learning in an Ambient Intelligent World: Enabling Technologies and Practices. IEEE Transactions on Knowledge and Data Engineering, 21(6), 910–924. doi:10.1109/TKDE.2008.143

Liao, J., Welsch, H., & Stoica, M. (2003). Organizational Absorptive Capacity and Responsiveness: An Empirical Investigation of Growth-Oriented SMEs. Entrepreneurship Theory and Practice, 28(1), 63–85.

Maurtua, I., Perez, M. A., Susperregi, L., Tubio, C., & Ibarguren, A. (2006). Ambient Intelligence in Manufacturing. Paper presented at Intelligent Production Machines and Systems, 2nd I*PROMS Virtual Conference.

McCaffrey, J. (2006). [Electronic version]. Retrieved December 17, 2007, from http://commercecan.ic.gc.ca/scdt/bizmap/interface2.nsf/vDownload/ISA_5111/$file/X_4862570.PDF

McCullough, M. (2001). On Typologies of Situated Interaction. Human-Computer Interaction, 16, 337–349. doi:10.1207/S15327051HCI16234_14

Muscio, A. (2008). The Impact of Absorptive Capacity on SMEs’ Collaboration. Economics of Innovation and New Technology, 16(8), 653–668. doi:10.1080/10438590600983994

Nousala, S., Ifandoudas, P., Terziovski, M., & Chapman, R. L. (2008). Process Improvement and ICTs in Australian SMEs: A Selection and Implementation Framework. Production Planning and Control, 19(8), 735–753. doi:10.1080/09537280802476169


NSW Farmers Association. (2007). National Livestock Identification System (Cattle) 013.07i NLIS Cattle [Electronic version]. Retrieved April 10, 2008, from http://www.nswfarmers.org.au/data/assets/pdf_file/0012/3072/FS_NLIS_Cattle_0207.pdf

Oyelaran-Oyeyinka, O., & Lal, K. (2006). SMEs and New Technologies: Learning E-Business and Development. Basingstoke, UK: Palgrave Macmillan. doi:10.1057/9780230625457

Philips, R. (1997). Innovation and Firm Performance in Australian Manufacturing. Industry Commission Staff Research Paper. Canberra: AGPS.

Rao, B., & Zimmermann, H.-D. (2005). Pervasive Computing and Ambient Intelligence: Preface [Electronic version]. Electronic Markets, 15, 3. Retrieved June 3, 2008, from http://www.informaworld.com/smpp/ftinterface~content=a713735017~fulltext=713240928

Stokic, D., Kirchhoff, U., & Sundmaeker, H. (2006). Ambient Intelligence in Manufacturing Industry: Control System Point of View. Paper presented at the IASTED Conference on Control and Applications 2006.

Vasilakos, A. V. (2008). Ambient Intelligence. Information Sciences, 178(3), 585–587. doi:10.1016/j.ins.2007.08.016

Weber, W. (2003). Ambient Intelligence - Industrial Research on a Visionary Concept. IEEE Xplore. Retrieved February 22, 2010.

Wiesner, R., McDonald, J., & Banham, H. C. (2007). Australian Small and Medium Sized Enterprises (SMEs): A Study of High Performance Management Practices. Journal of Management & Organization, 13(3), 227–248. doi:10.5172/jmo.2007.13.3.227

Zuboff, S. (1988). In the Age of the Smart Machine: The Future of Work and Power. Oxford: Heinemann Professional.

KEY TERMS AND DEFINITIONS

Absorptive Capacity: The absorptive capacity of a firm comprises its ability to generate innovations and to absorb relevant knowledge appearing in the external environment.

Ambient Intelligence (AmI) Technologies: Technologies that combine to create an environment that is sensitive and responsive to the presence of people.

Open Innovation: Organisations using ideas and capabilities originating outside their boundaries in order to increase the rate at which innovation occurs and to decrease innovation costs. Open innovation also includes an organisation selling innovative ideas it has generated but cannot use in its business.

Pervasive and Ubiquitous Computing: Terms in use in the USA to refer to the same technologies as those named Ambient Intelligence Technologies in Europe.

Radio Frequency Identification Device (RFID): Data collection devices consisting of electronic tags for storing unique identifying data.

Small to Medium Enterprise (SME): A measure of a company’s size, generally based upon employee numbers; definitions vary across countries.

Technology Adoption: A process that begins with awareness of a specific type of technology or device and progresses through stages ending in use or rejection of the technology.

This work was previously published in Handbook of Research on Mobility and Computing: Evolving Technologies and Ubiquitous Impacts, edited by Maria Manuela Cruz-Cunha and Fernando Moreira, pp. 65-82, copyright 2011 by Information Science Reference (an imprint of IGI Global).


Chapter 57

Teaching Technology Computer Aided Design (TCAD) Online

Chinmay K Maiti, Indian Institute of Technology, India
Ananda Maiti, Indian Institute of Technology, India

ABSTRACT

Since Technology Computer Aided Design (TCAD) is an important component of modern semiconductor manufacturing, a new framework is needed for microelectronics education. An integrated measurement-based microelectronics and VLSI engineering laboratory with a simulation-based technology CAD laboratory is described. An Internet-based laboratory management system for monitoring and control of a real-time measurement system interfaced via a dedicated local computer is discussed. The management system allows remote students to conduct remote experiments, perform monitoring and control of the experimental setup, and collect data from the experiment through the network link as if the student were physically in a conventional laboratory. The management system is also capable of evaluating a student’s performance and grading laboratory courses that involve preliminary quiz and viva-voce examinations, checking of experimental data, and submitted online laboratory reports. The proposed online TCAD teaching methodology will provide an opportunity for expanding microelectronics education.

DOI: 10.4018/978-1-4666-1945-6.ch057

INTRODUCTION

The field of microelectronics technology is recognized as a driving force for the Information Age. Micro- and nanoelectronics device and circuit design and fabrication are specialized fields in electrical engineering. The main goal of undergraduate and/or postgraduate level microelectronics teaching is to produce high-quality engineers who are able to make contributions in the context of the rapid change that characterizes integrated circuit (IC) fabrication. For microelectronics courses, the laboratory should include clean room infrastructure, semiconductor equipment operation procedures, process and metrology, device testing, and process integration and manufacturing, learned through hands-on fabrication as well as characterization of devices, to enhance the educational experience. However, due to the high cost of a microelectronic fabrication laboratory, teaching microelectronic circuit fabrication is very much driven by the availability of resources at the institution providing such courses; it is primarily taught at universities where an actual fabrication facility is available and, currently, it is mostly taught via demonstration mode.

Microelectronics engineering education is in transition. New thought is being given to topics such as what constitutes microelectronics process design fundamentals, how to shrink the gap between industrial and academic perspectives on process design, and how to help students gain more experience and knowledge. Currently, in most final-year undergraduate and virtually all master’s level programs, there are courses on device physics and processing technology (usually as one single course) based on standard textbooks on MOS and bipolar device physics that often do not include a laboratory component. Integrated circuit fabrication courses are offered as an elective in some Electrical and/or Electronic Engineering programs that cover fabrication theory of integrated circuits and process modeling. The introduction of process and device simulations in undergraduate teaching is also considered a difficult task. This is mainly due to the complicated user interaction with most of the available process and device simulators; usually the input information is prepared in the form of files written in a specific input language for each simulator. In general, professional Technology CAD (TCAD) simulation tools are difficult to use and are considerably more complex, and users need dedicated training sessions to use the tools successfully. Also, during the last three decades, a new generation of semiconductor processing involving new material systems, such as strained-Si and silicon-germanium (SiGe), has appeared, and the integration of Group III-V compound semiconductors with Si technology is evolving (Maiti, Chattopadhyay, & Bera, 2007).


With these advancements in semiconductor manufacturing, it is becoming difficult for VLSI designers to optimize circuit design without considering the effects of advanced ULSI/GSI integration processes. The International Technology Roadmap for Semiconductors (ITRS, 2007) predicts that the use of TCAD may provide as much as a 40% reduction in technology development costs. TCAD has grown in both sophistication and maturity and is now an essential engineering tool for new technology development in industrial environments. Recent industry trends have given rise to major development opportunities for TCAD, and virtual wafer fabrication has now become an integral part of semiconductor fabrication.

THE NEED

Semiconductor device theory and IC processing courses are becoming more important in electrical engineering curricula due to fast-changing semiconductor technologies and the challenges faced by the semiconductor industry. Process/device simulation tools are being introduced to students taking graduate courses in device physics, and SPICE is introduced for circuit simulation. Semiconductor CAD originated in the early 1960s with efforts to understand and optimize bipolar transistors (Dutton & Yu, 1993). TCAD is now a proven approach for developing process technologies, and a comprehensive set of TCAD tools is available from universities and from TCAD vendors. Advanced TCAD tools are capable of modeling the entire semiconductor manufacturing process. Also, the most challenging issue in the context of IC design and manufacturing now is product yield, because the main causes of yield loss have changed over the years. Technology development in the semiconductor industry is a very complex task and requires a deep understanding of different physical and mathematical techniques. Next-generation semiconductor engineers will need to be competent in the use of advanced process and device simulation tools as part of their everyday practice, as TCAD simulation allows a more transparent vision of state-of-the-art IC fabrication. The renewed focus on chip interconnects is moving TCAD modeling beyond active devices and further into the higher levels of the EDA tool hierarchy. It is imperative that microelectronics education introduce industry-standard computer aided design tools to enhance traditional teaching methods and to provide students with the CAD skills required by the semiconductor industry. TCAD software requires a considerable investment, both in the development of new capabilities and in training for its use. Several attempts have been made to integrate professional TCAD simulation tools into semiconductor device courses (Kenrow, 2004), where students are also encouraged to take an IC fabrication laboratory along with the course. However, available TCAD tools are dispersed in nature, and problems are encountered in locating, acquiring, installing, and learning to operate a proper simulation tool. A further difficulty is providing students with access to TCAD simulation tools through user-friendly computing platforms, which now tend to be personal computers, to support process/device simulation and modeling activities. The availability of high-speed Internet and advances in Information and Communication Technology (ICT) have allowed the enhancement of traditional learning methods. Advances in computer technology have made it feasible to provide engineering students with computer support for learning, especially in technical education. Nowadays, many universities have developed e-learning materials for theory courses and online laboratory facilities allowing students to perform laboratory experiments by simulation or with remote equipment (Maiti, 2010; Fjeldly & Shur, 2003).

Internet-accessible device simulation laboratories for semiconductor education, virtual laboratories accessible via standard World Wide Web (WWW) browsers, have also been reported; these provide students using different computing environments with access to a broad range of simulation tools (for example, http://www.ecn.purdue.edu/labs/punch). Users can define simulations, run them, and view the printed and graphical output. To take full advantage of the predictive power of TCAD to reduce cost and time in semiconductor process development, “Technology CAD” was introduced as a subject in final-year undergraduate and graduate-level microelectronics courses at IIT Kharagpur in 1998. In this chapter, we present a summary of our experiences gained in teaching TCAD during the last ten years. Emphasis is given to the critical role of the integrated online TCAD laboratory in VLSI design along with process and device modeling. The chapter also addresses the issues and problems related to using the intranet and/or Internet as the network for a TCAD simulation laboratory for remote users. Technology CAD is a one-semester, three-credit (35-40 lecture hours and 30 laboratory hours) course for graduate students enrolled in Microelectronics and VLSI Design at IIT Kharagpur. The curricula are designed in such a way that the students are exposed to design procedures by participating in device and process designs via simulation. The course covers various topics via lectures, homework, laboratory assignments, and design projects that give the students the opportunity to gain extensive knowledge in semiconductor device/circuit design and fabrication. A brief review of TCAD applications in semiconductor manufacturing is made first. The TCAD course structure and the objectives of the laboratory session are presented next. Use of an integrated measurement-based online Microelectronics and VLSI Engineering Laboratory and a simulation-based Technology CAD Laboratory for expanding microelectronics education is then described in detail.


What is Technology CAD?

Simulation of the fabrication steps of microelectronic devices and integrated circuits is a technique used extensively in silicon IC manufacturing to speed product development and reduce development costs. Technology computer aided design is the term used to describe a broad range of modeling and analysis activities that consist of detailed simulation of IC fabrication processes, device electrical performance, and extraction of device parameters for equivalent circuit models. Semiconductor manufacturing companies use TCAD for monitoring, analyzing and optimizing IC process flows. The demand for engineers with experience in TCAD is increasing, and a working knowledge of or proficiency with TCAD is rapidly becoming a highly marketable skill for process engineers. Introduction of TCAD tools in device physics courses is becoming a necessity to provide students with critical skills in the microelectronics area. The two major components of TCAD are (a) process simulation and (b) device simulation. Process simulation is the modeling of semiconductor manufacturing processes; it allows one to investigate possible process steps prior to actual fabrication and includes basic unit steps such as oxidation, implantation, diffusion, etching, growth, and deposition. Process simulation can also highlight issues that could potentially lead to circuit failure, and simulations allow verification of process design rules with the goal of design for manufacturability. Device simulation is the modeling of semiconductor devices. Device simulation produces the dc and ac characteristics of devices and is used to determine transistor models. SPICE model parameters are extracted from the measured electrical characteristics, and the model parameters can be used in circuit simulators such as SPICE, which is often used as a key tool in Electronic Computer Aided Design (ECAD) to design VLSI circuits.
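As a small, hedged illustration of the extraction step just described, the sketch below fits the textbook long-channel saturation model Id = (k/2)(Vgs - Vth)^2 to a synthetic transfer curve to recover the threshold voltage and transconductance parameter. Real extraction flows target full compact models such as BSIM using dedicated tools; the device values here are invented for the example.

```python
# Minimal illustration of SPICE-parameter extraction: fit the long-channel
# saturation model Id = 0.5*k*(Vgs - Vth)^2 to I-V points. The points are
# synthetic here; in practice they come from device simulation or measurement.
import numpy as np

# Synthetic saturation-region transfer curve for a device with
# Vth = 0.7 V and k = 0.4 mA/V^2, plus a little measurement noise.
vgs = np.linspace(1.0, 3.0, 21)
ids = 0.5 * 0.4e-3 * (vgs - 0.7) ** 2
ids += np.random.default_rng(0).normal(0.0, 1e-7, ids.shape)

# sqrt(Id) is linear in Vgs: sqrt(Id) = sqrt(k/2) * (Vgs - Vth),
# so a straight-line fit yields both parameters at once.
slope, intercept = np.polyfit(vgs, np.sqrt(ids), 1)
k_extracted = 2 * slope ** 2
vth_extracted = -intercept / slope

print(f"extracted Vth = {vth_extracted:.3f} V")
print(f"extracted k   = {k_extracted * 1e3:.3f} mA/V^2")
```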

As TCAD helps in the design of process technology, it is often used in the research and development of new processes and devices, with the computer serving as a substitute for the semiconductor processing laboratory. In process modeling, a systematic design of experiments (DoE) run is generally performed. DoE experiments can be set up systematically, with control over the process parameters and an arbitrary choice of device performance characteristics. Most TCAD simulation tools provide detailed, colorful plots of device profiles and processing steps, giving the student the ability to see what is happening inside the device and to understand how the details of fabrication affect device properties. TCAD also offers greater physical insight into device operation; for example, one can visualize parameters that cannot be measured, such as the current or electric field distributions inside a device. A costly fabrication laboratory is not needed for demonstrating the fabrication steps, and students can perform TCAD simulations beyond normal laboratory hours and at their own pace. Currently available professional TCAD tools are thus a very powerful and economical means for teaching microelectronics technology.

The area of Design for Manufacturing (DFM) is also becoming very important. The idea of DFM is to shorten the design cycle, reduce manufacturing costs, and improve yield, resulting in faster time-to-market and increased profitability. TCAD also addresses the issues of process and device variability in manufacturing, to mitigate the rising costs of semiconductor process and product development as each technology node drives towards higher performance and increased complexity. TCAD may be used as a powerful tool to identify root causes of yield loss and to study device sensitivities to process variations. A major current trend in the industry is to apply TCAD tools far beyond the integration phase, into manufacturing and yield optimization. Linking process parameter variations with the electrical parameters of a device through Process Compact Models (PCM) is in use. Process Compact Models enable efficient analysis of complex and multivariate process-device relationships, with applications to enhancing process manufacturability and process control, and may be used to capture the nonlinear behavior and multi-parameter interactions of manufacturing processes. This is driven by the fact that product yield loss nowadays is dominated by systematic defects coming from lithography and IC design as well as by random process variations. SPICE process compact models (SPCMs) can be considered an extension of PCMs applied to SPICE parameters. By combining calibrated TCAD simulations with a global SPICE extraction strategy, it is possible to create self-consistent, process-dependent compact SPICE models with process parameter variations as explicit variables. Simulation experiments in TCAD have the advantage that every process condition can be accurately controlled and that arbitrary product performance characteristics can be determined. This is opposed to real experiments, where the control of process steps may be difficult and subject to uncontrollable drift or variation in the equipment, and where the limitations of metrology can make it difficult, expensive or impossible to make both nondestructive and destructive measurements. Visualization is one of the big advantages of TCAD tools, as the evolution of actual cross-sections of the structure can be observed during process simulation.
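
As a minimal illustration of how such a DoE run can be set up, the following Python sketch enumerates a full-factorial design over three assumed process factors; the factor names and levels are hypothetical choices for this sketch, not the actual splits used in the course.

    from itertools import product

    # Illustrative process-split DoE; the factor names and levels are
    # assumptions for this sketch.
    factors = {
        "tox_nm":      [4.0, 5.0, 6.0],     # gate-oxide thickness
        "vt_dose_cm2": [5e12, 1e13, 2e13],  # threshold-adjust implant dose
        "anneal_T_C":  [950, 1000, 1050],   # activation anneal temperature
    }

    # A full-factorial design enumerates every combination of factor levels.
    # Each run would be submitted to the process/device simulators and a
    # response (e.g., threshold voltage) recorded for model building.
    runs = [dict(zip(factors, levels)) for levels in product(*factors.values())]
    print(f"{len(runs)} simulation runs")  # 3^3 = 27 runs
    print(runs[0])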

SPICE Parameter Extraction

The success of a VLSI circuit design depends on the device models used to describe device behavior. As semiconductor devices shrink, the need for accurate SPICE models for circuit simulation becomes acute. The most important test structures in an IC manufacturing process are the devices themselves. It is thus imperative that these devices be accurately characterized, so that the most accurate model parameter set for the device under test can be extracted. Device models usually consist of a set of model equations that are empirical, derived from device physics, or a combination of both. The design of integrated circuits is therefore heavily dependent on circuit simulation, which needs compact device models. SPICE parameters are extracted from the measured device characteristics. Parameter extraction is an integral part of compact modeling: the goal is to determine the values of the device model parameters that minimize the total difference between a set of measured characteristics and the results obtained by evaluating the device model.

Several programs for parameter extraction are available commercially. Agilent offers an integrated circuit characterization and analysis program (ICCAP, 2004). ICCAP offers device engineers and designers a state-of-the-art modeling tool that fills numerous modeling needs, including instrument control, data acquisition, parameter extraction, graphical analysis, simulation, optimization, and statistical analysis. ICCAP runs on MS Windows 2000 or XP platforms and is easily accessible via the Internet. Microsoft's Remote Desktop Connection, an application that recreates the desktop of a remote machine on the local machine over a network, may be used for parameter extraction from remote locations. Silvaco's Universal Transistor Modeling Software (UTMOST, 2008) is another data acquisition and parameter extraction tool with applications in the areas of device characterization and modeling. UTMOST generates SPICE models for analog, mixed-signal and digital applications. It accepts data from direct measurements, measurements stored in a measurement log file, process and device simulation, or other model parameter sets, in order to create a suitable parameter set for a chosen model. UTMOST can control a wide range of commercial measurement equipment and probers, so a user has maximum flexibility in the configuration of a measurement system. UTMOST supports the simulation of dc, transient, capacitance and s-parameter characteristics, and also incorporates commercial device models, user-defined device models and macro models.
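
The extraction principle itself, fitting model parameters so that the model reproduces the measured characteristics, can be shown with a short Python sketch. Here a simple diode equation stands in for a full compact model, and the "measured" data are synthetic.

    import numpy as np

    VT = 0.02585   # thermal voltage at 300 K, V

    # Synthetic "measured" forward diode I-V data standing in for instrument
    # output; IS = 1e-14 A and ideality n = 1.05 are the values to recover.
    v = np.linspace(0.45, 0.65, 11)
    i = 1e-14 * np.exp(v / (1.05 * VT))

    # For V >> n*VT, ln(I) = ln(IS) + V/(n*VT) is a straight line, so a
    # least-squares linear fit recovers both model parameters at once.
    slope, intercept = np.polyfit(v, np.log(i), 1)
    n_fit, is_fit = 1.0 / (slope * VT), np.exp(intercept)
    print(f"extracted IS = {is_fit:.2e} A, n = {n_fit:.3f}")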

THE TECHNOLOGY CAD COURSE

The TCAD course, offered since 1998 at IIT Kharagpur, has four primary objectives; upon successful completion of the course, the students should be able to demonstrate a high level of proficiency in each. The objectives are as follows:

1. To learn the fundamentals of device processing in a virtual environment using TCAD tools, including the determination of SPICE parameters;
2. To develop skills in semiconductor process design (very similar to the actual experience obtained in a fabrication laboratory);
3. To acquire proficiency with the use of TCAD tools and with issues relating to analog and digital circuit design; and
4. To prepare for a career in circuit design (both analog and full-custom digital) and/or as a process engineer.

To support these aims and learning objectives more effectively, the course goes beyond the possibilities offered by traditional lecture-type microelectronics courses. The curriculum was developed by researching various universities recognized for microelectronics education. TCAD software tools are introduced to students taking a required course in semiconductor device physics in the electrical engineering program. The course consists of theory (3 credits) in the classroom (three hours) and the online TCAD laboratory (http://lod.iitkgp.ernet.in/netlab/). The classroom teaching is used to present and develop specific topics that support the theory and the corresponding laboratory work. As part of the course, students are introduced to the TCAD tools during class demonstrations by the instructor. Students are expected to spend time in the classroom learning the theoretical background, objectives, and purposes of the simulation runs to be performed in the laboratory sessions. Visualization is one of the biggest advantages of TCAD tools: during process simulation, the evolution of actual cross-sections of the structure can be seen. TCAD visualization can also be brought to the classroom in different ways; simulation results in a fixed graphics format can be integrated into the lectures, and the evolution of a simulation can be presented as a movie, e.g., embedded in a PowerPoint presentation. Tutorials are critical to the learning environment because they are designed to enable students to learn the material independently in an adaptive learning environment (learning at their own pace and in their areas of interest). Following the classroom teaching, the students continue with the tutorials, which are assigned as homework. The TCAD tools are made available in an open laboratory environment in which students can use the software independently, at any time and from anywhere. The TCAD software packages (mostly the student versions) run with reasonable speed on any modern laptop under Linux or Windows operating systems, making interactive simulations possible in the classroom as well.

The topics covered in the lecture classes are: introduction and overview; the role of TCAD in semiconductor technology development; TCAD principles; tool integration; process technology for Si, silicon-germanium (SiGe) and III-V semiconductors; process simulation; device simulation; bipolar and MOS device models; heterojunction device modeling; simulation of SiGe HBTs; simulation of heterostructure FETs; simulation of AlGaAs/GaAs devices; introduction to virtual wafer fabrication (VWF) automation tools; device characterization and dc and ac SPICE model parameter extraction; the TCAD calibration procedure; process compact models (PCM); and design for manufacturability (DFM).

TCAD LABORATORY

With the advancements in semiconductor manufacturing, it is becoming increasingly difficult for VLSI designers to optimize a design without considering the effects of the VLSI processes. The conventional TCAD laboratory is designed for both undergraduate and graduate students. It takes place in a classroom where a stand-alone server running the Silvaco suite manages twenty PCs as clients. The Silvaco tools used by the students are ATHENA, the process simulator, and ATLAS, the device simulator. Prior to the beginning of the laboratory sessions, an interactive software presentation is conducted that helps the students set up the preliminaries necessary for the tutorial simulation runs, such as doping and diffusion. The tutorial sessions also include several simple example runs for familiarization with the tools. The laboratory class consists of five or six sessions, all designed to develop two levels of knowledge: the definition and implementation of the simulation script (input file), and the analysis and understanding of the results.

Session 1: The process simulator ATHENA is introduced. The students simulate an NMOS transistor step by step. During this session, the doping profiles are investigated after each process step, and the effect of the boron implantation used for threshold voltage adjustment is studied. The students also observe the evolution of the device structure through all processing steps and are asked to write down their observations.

Session 2: Still with the ATHENA tool, this session continues with the polysilicon deposition and finishes the transistor processing. As in the prior session, the doping profiles are investigated after each step. The students are encouraged to compare the process flows for long-channel and short-channel devices.

Session 3: The device simulator ATLAS is introduced. In this session the students compute the ID-VD and ID-VG characteristics of the NMOS transistor for several process parameters. The gate sweep covers the operating regions of accumulation, depletion, inversion and deep inversion. The students then compare 1D cross-sections of the carrier distributions at the center of the device with the theoretical predictions; with this comparison, they are able to analyze the short-channel effects observed in MOSFETs.

Session 4: In this session the mixed-mode device and circuit simulators are introduced. The electrical characteristics are used to obtain the compact model of the transistor, and the students have to build a CMOS inverter.

Session 5: The students use the Silvaco tools to design a process flow for a bipolar transistor using the graphical process flow editor tool. They test the process flow by simulating it with the process simulator ATHENA and extracting the doping profiles across the emitter, base and collector regions.

Session 6: With the device simulator ATLAS, the students compute the output and Gummel characteristics of the BJT for several process parameters. The electrical characteristics are used to obtain the compact model of the transistor, and the students extract Gummel-Poon model parameters from the TCAD device simulations.
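
For the comparison with theoretical predictions in Session 3, a short Python sketch such as the following can compute the textbook long-channel threshold voltage from assumed process values; the flat-band voltage and doping used here are illustrative assumptions, not course data.

    import math

    # Physical constants in cm-based units, as is conventional in device work
    q, k = 1.602e-19, 1.381e-23   # C, J/K
    eps0 = 8.854e-14              # F/cm
    ni = 1.45e10                  # Si intrinsic concentration at 300 K, cm^-3

    def vt_long_channel(na, tox_cm, vfb=-0.9, T=300.0):
        """Textbook long-channel NMOS threshold voltage.
        na: substrate doping (cm^-3); tox_cm: oxide thickness (cm);
        vfb: flat-band voltage (an assumed value for an n+ poly gate)."""
        phi_f = (k * T / q) * math.log(na / ni)   # Fermi potential
        cox = 3.9 * eps0 / tox_cm                 # oxide capacitance, F/cm^2
        qdep = math.sqrt(2.0 * q * 11.7 * eps0 * na * 2.0 * phi_f)
        return vfb + 2.0 * phi_f + qdep / cox

    # Compare against the VT read off the simulated ID-VG curve:
    print(f"theoretical VT = {vt_long_channel(na=1e17, tox_cm=5e-7):.3f} V")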

TCAD Tools

During the last forty years, various simulation tools, mostly from university laboratories, have been reported. The principal aim has been to provide semiconductor engineers with computer simulation tools that form a quantitative link between the basic technological parameters and the electrical behavior of semiconductor devices. With the rapid development of the semiconductor industry, individual TCAD tools have moved from predominantly research and development settings into production environments (Fasching, Halama, & Selberherr, 1993). This has motivated the rapid development of TCAD systems as uniform frameworks that support the effective usage of multiple simulators with a simple user interface, convenient and standardized data transfer, and visualization. The commercial TCAD vendors (Silvaco and Synopsys) provide excellent collections of TCAD systems that differ in their respective origins, interests and goals. Commercial TCAD tools may be used to show students the link between physical and electrical simulation through the mixed TCAD and electrical simulation abilities of the Silvaco and Synopsys tools. Advanced TCAD suites include process simulation, device simulation, compact model parameter extraction and circuit simulation, interconnect simulation, and optimization, among other technology CAD requirements. However, the use and maintenance of coupled TCAD tools becomes difficult and requires a significant level of user experience.

TCAD Tools: Remote Access Solutions

Historically, the TCAD tools developed were all available on various UNIX-based platforms. Central computer servers, often with multiple CPUs, a UNIX- or Linux-based operating system and large amounts of RAM, are used, as such servers are ideal for large-scale computing tasks such as TCAD. Attempts have also been made at Windows versions of TCAD tools, but their use is very limited, as the software packages are distributed and supported by third-party vendors. Also, as laptops are typically optimized for mobility and long battery life instead of large-scale computation, laptops are generally not used for running TCAD applications. However, the use of commonly available remote access software tools conveniently transforms a Windows-based laptop into a graphic terminal for a central UNIX- or Linux-based computer server. Choosing a remote access tool requires several important considerations, as such tools are available from various vendors and differ in several respects:

• Bandwidth requirements
• Support for encrypted data exchange
• Support for OpenGL graphics
• Possibility to disconnect from and reconnect to a terminal session
• Possibility to use local Windows applications
• Ease of use and initial setup
• Cost

ONLINE LABORATORY AT IIT KHARAGPUR

The major steps in remote online laboratory development are (a) the design of the experiment, (b) remote control and operation of the instruments (for example, via LabVIEW, IC/CVlite, EasyExpert, VEE, etc.), (c) conversion to web applications, and (d) launching the experiments on the internet. A modular online remote laboratory typically consists of 12-15 hardware-based experiments that always need to be available to students. The Laboratory-on-Demand (available at http://lod.iitkgp.ernet.in/netlab/) is an initiative to develop a novel online measurement-based integrated Microelectronics and VLSI Engineering Laboratory together with a simulation-based Technology CAD laboratory. The home page for the laboratory is shown in Figure 1. An online TCAD laboratory gives the student, even at the undergraduate level, a chance to learn about realistic silicon wafer processing via hands-on simulation experiments. Recent advances in computer hardware now make even a standard laptop suitable to run TCAD simulations in a matter of minutes. In the Microelectronics and VLSI Engineering Laboratory, the students perform real measurements on a wide variety of devices. The measured experimental data are then passed on to the simulation-based TCAD laboratory for the extraction of SPICE parameters, as discussed later in this chapter.

Figure 1. The TCAD laboratory

Microelectronics and VLSI Engineering Laboratory

Measurements are fundamental to an understanding of any semiconductor device. To teach device design and development concepts at the Indian Institute of Technology, one of the premier engineering educational institutions in India, we use tools from the National Instruments (NI) electronics education platform. With the NI ELVIS integrated design platform combined with LabVIEW (Travis, 2000), we teach microelectronic device concepts in the EC29004 Semiconductor Devices Laboratory and the EC39004 VLSI Engineering Laboratory classes. The EC29004 and EC39004 courses are required for juniors in the electronics and electrical communication engineering and electrical engineering degree programs at IIT Kharagpur.

The Internet is a powerful tool for engineers, both for the measurement and automation of the characterization laboratory and for process optimization, allowing access and control from distant locations. Internet-enabled measurement of devices utilizes the internet to achieve greater flexibility. The laboratory management system should be able to handle both the hardware and the software components. The hardware side includes the design and development of an experiment and its control from the local computer. The software side covers the design and development of the tools for the physical measurement and administration. There are many factors associated with the network, the internet or the intranet, that directly influence system performance throughout remote operations; these factors, among others, need to be addressed for a proper deployment of online laboratories.

The Laboratory Server developed by the IIT Kharagpur NetLab developer team has several interesting features. Briefly, the IIT Kharagpur NetLab is an online Microelectronics and VLSI Engineering Laboratory in which several experimental modules are included. In this chapter, we emphasize the device characterization module, which makes use of two distinct pieces of laboratory hardware/software. The heart of the system is an LCR meter and a semiconductor device parameter analyzer (Agilent E4980 and Agilent 4156C). Broadly speaking, these units generate the user-specified inputs to a connected test device, perform the test and collect the measured data. The Laboratory Server, at a high level, consists of three main software layers. The first is the Web Server Layer (WSL); this portion of the system contains all functionality that operates within the web server process space on the Laboratory Server. The second is a system Data Layer, which contains system information such as experiment logs, accounts, system settings, etc. Third, the Experiment Execution Layer interfaces directly with the laboratory hardware and is responsible for performing the experiments submitted to the system.

Online Laboratory Development

During the development of the Microelectronics and VLSI Engineering Laboratory, the roles of the System Administrator, the Laboratory Manager, and the students were considered in an integrated manner. The System Administrator's role includes (a) design of the online laboratory system specifications, (b) design of the measurement, monitoring and control interface, (c) the administrative interface and database, (d) web server configuration design, (e) design of the architecture for sharing remote laboratories, (f) adding/removing experiments, (g) failure detection/recovery, (h) user login, (i) time scheduling, (j) managing student accounts, (k) automation of student performance evaluation, and (l) student feedback. The Laboratory Manager's role includes (a) preparing experiment modules, (b) assisting students in conducting experiments, (c) maintaining the experiments, (d) developing new experiments, (e) generating student progress reports, and (f) preparing and publishing the laboratory grade sheet. The students' role includes (a) selecting the laboratory and the experiment, (b) studying laboratory manuals, (c) appearing for the preliminary quiz, (d) booking experiment time, (e) performing the experiment, (f) passing control to a partner (collaborative learning), (g) submitting the laboratory report, (h) appearing for the viva-voce examination, and (i) closing the laboratory session.

The System Administration Interface (SAI) contains a Microsoft Silverlight or equivalent web application that captures the administrative functionality necessary to run the Laboratory Server. This includes human interfaces for log/record viewing and resource management. This interface is accessible only to registered Laboratory Server administrators and, as such, also contains authentication and authorization functionality. In the current version of the Microelectronics and VLSI Engineering Laboratory software, the validation of experiment specifications is handled entirely by the Laboratory Server. The experiment validation component contains all the functionality required for implementing any experimental module. After the experiment module is confirmed, the specification is checked for permission to submit the experiment, as well as against the software-imposed (by LabVIEW) constraints on the device. This confirms that the specified experiment should be allowed to run within the current configuration/session.

The Experiment Execution Layer (EEL) is a laboratory-module (experiment) specific set of components that interfaces directly with the laboratory hardware and governs the experiment execution process. This layer can be broken into three individual components. The execution engine runs as a separate and independent process and interacts with the other Laboratory Server layers; it runs in the background at all times while the Laboratory Server is operational, and at a given interval it checks the experiment execution queue for new jobs. In order to communicate with the laboratory hardware, the Experiment Execution Engine (EEE) uses the methods defined by the set of Hardware-Specific Drivers (e.g., for the Agilent E4980). This component is made up of a series of custom-built drivers specific to the hardware; each of these driver libraries defines methods specific to the piece of hardware it is meant to control, using low-level GPIB commands. At the lowest level of the Experiment Execution Layer is the library defining the GPIB Interface API. For the Laboratory Server to control the necessary laboratory hardware via a GPIB network, a GPIB interface card supplied by the manufacturer must be installed. The combination of National Instruments (NI) software and hardware is found to be very effective for the development of a web-based laboratory such as the Microelectronics and VLSI Engineering Laboratory. With the flexibility and power of NI ELVIS and LabVIEW, students can perform automated measurements. The online laboratory organization is shown in Figure 2.
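
At the driver level discussed above, a rough Python sketch using the PyVISA library illustrates what a hardware-specific driver does at the bottom of the stack; the GPIB address is an assumption, and the instrument-specific commands are left as hypothetical placeholders rather than real 4156C or E4980 command strings.

    import pyvisa

    # The GPIB address below is an assumption for illustration; real driver
    # libraries wrap instrument-specific command sets behind a clean API.
    rm = pyvisa.ResourceManager()
    analyzer = rm.open_resource("GPIB0::17::INSTR")
    analyzer.timeout = 10000                 # ms; parameter sweeps can be slow

    # *IDN? is the standard IEEE-488.2 identification query that most
    # GPIB instruments support.
    print(analyzer.query("*IDN?"))

    # A hardware-specific driver method would issue writes/queries such as:
    # analyzer.write("<instrument-specific sweep setup command>")   # hypothetical
    # raw = analyzer.query("<instrument-specific data read-out>")   # hypothetical
    analyzer.close()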

Figure 2. Online laboratory architecture

Laboratory Management

As one particular experiment can be performed at a time by an individual student or a group of students (in the case of collaborative learning), careful scheduling of the experiments is required. For the proper implementation and management of an online laboratory system, an efficient laboratory management system is therefore essential. As the experiments require real-time control of the equipment, the online laboratory currently uses time-slots, during which the students set the measurement conditions and conduct experiments from a remote PC. The time-slot duration is the time necessary to carry out an experiment and needs to be pre-estimated. The scheduling system for online experiments assumes the student will be able to finish the experiment during this time, just as in a conventional laboratory. The students first select a time slot and register (book) to use the laboratory. At the scheduled time, the student logs in and is able to launch the laboratory client. The students can, however, read manuals, watch experimental setups and videos, and analyze the measured data without occupying the measuring equipment. Interactive experiments require the control of instruments, during which the users set the parameters and observe the results. As the actual measurement time is usually very short, it is not desirable to tie up the equipment for long periods of time; in fact, for a measurement, most of the time is spent setting up the measurement conditions and on post-experiment data analysis. Thus, for proper utilization of the resources, proper scheduling is necessary in the laboratory management system. To choose among laboratory service providers with different kinds of measurement equipment and target devices, an efficient equipment-sharing software tool is necessary in case multiple student requests arrive at the same time. When a student is prepared to conduct an experiment involving a unique experimental setup (i.e., one that cannot be used concurrently by others), the student logs in and is able to launch the laboratory client for the time slot already registered. Each experiment is associated with a time length the administrator judges sufficient for the experiment, keeping in view the speed of the Internet and other network elements. The workflow may be summarized as follows:

• Selection of the experiment
• Opening of the scheduling web page link on the experiment page
• The system displays the scheduling interface for the next 15 days
• The system displays the available time slots for the selected experiment in green
• The learner selects the date/time for an experiment in the scheduling interface
• The learner selects the desired time slot and submits the booking request
• The system saves the learner's schedule information and updates the scheduling database

The system then displays a confirmation of the schedule to the learner. In due course, the user is given a temporary link to the web page through which the equipment can be accessed and controlled. After the specified amount of time, a JavaScript automatically closes the window with a warning. Among the advantages of time-slot scheduling are that the user gets full control of the equipment/system, can perform the experiment at any time he or she wants without having to wait, and that it is ideally suited to experiments where several measurements are necessary under different measuring conditions. However, the utilization of the resources is poor and the efficiency of the management system is not high, since the equipment is not always in use, sits idle for some time, and the number of users accessing the resources is low. Existing remote laboratories limit the number of users to the number of actually available resources. By integrating mixed-mode scheduling services into the laboratory management system, a diverse set of experiments can easily be deployed online and resources can be shared, thus optimizing resource utilization.
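
A minimal sketch of the time-slot booking logic described above might look as follows in Python; the slot length, opening hours and data structures are illustrative assumptions, not the actual NetLab implementation.

    from datetime import date, datetime, timedelta

    SLOT_MINUTES = 45              # assumed slot length
    bookings = {}                  # (experiment_id, slot_start) -> student_id

    def available_slots(experiment_id, day, opening=9, closing=17):
        """List the free slots for one experimental setup on a given day."""
        slots, t = [], datetime(day.year, day.month, day.day, opening)
        while t.hour < closing:
            if (experiment_id, t) not in bookings:
                slots.append(t)
            t += timedelta(minutes=SLOT_MINUTES)
        return slots

    def book(experiment_id, slot_start, student_id):
        """Reserve a slot; fails if another student already holds it."""
        key = (experiment_id, slot_start)
        if key in bookings:
            return False
        bookings[key] = student_id
        return True

    free = available_slots("gummel_plot", date(2011, 3, 14))
    assert book("gummel_plot", free[0], "10EC1234")      # first booking succeeds
    assert not book("gummel_plot", free[0], "10EC5678")  # same slot is refused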

MOSFET Characterization and Modeling

In this experiment, the students first measure the MOSFET characteristics, such as the output and threshold voltage characteristics. Using these measured data, the students extract the "MOS Level 1" compact model parameters for a long-channel device. For a short-channel device, the students use TCAD simulations for parameter extraction.
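
A minimal sketch of one Level-1 extraction step, assuming saturation-region ID-VG data, is shown below; it uses the classical straight-line fit to sqrt(ID), with synthetic data standing in for the measurements.

    import numpy as np

    def extract_level1_sat(vgs, id_sat, w_over_l):
        """Extract VTO and KP from saturation-region ID-VG data via the
        classical fit sqrt(ID) = sqrt(KP*W/(2L)) * (VGS - VTO)."""
        slope, intercept = np.polyfit(vgs, np.sqrt(id_sat), 1)
        vto = -intercept / slope          # x-axis intercept of the fit line
        kp = 2.0 * slope ** 2 / w_over_l
        return vto, kp

    # Synthetic data standing in for the measured characteristics
    # (KP = 100 uA/V^2, W/L = 10, VTO = 0.7 V are the values to recover).
    vgs = np.linspace(1.0, 2.0, 6)
    ids = 0.5 * 100e-6 * 10 * (vgs - 0.7) ** 2
    vto, kp = extract_level1_sat(vgs, ids, w_over_l=10)
    print(f"VTO = {vto:.3f} V, KP = {kp * 1e6:.1f} uA/V^2")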

MOS Capacitor Characterization and Modeling

Maintaining the quality and reliability of gate oxides is one of the most critical and challenging tasks in any semiconductor fabrication. Electrical characterization and monitoring are critical for maintaining gate oxide uniformity, and many electrical characterization techniques have been developed over the years to characterize gate dielectric quality. The most commonly used tool for studying gate oxide quality in detail, however, is the Capacitance-Voltage (C-V) technique. C-V test results offer a wealth of device and process information, including bulk and interface charges and many MOS device parameters. The importance of C-V measurement techniques lies in the large number of device parameters that can be extracted from the high-frequency C-V curve; these parameters can provide critical device and process information.

In this experiment, a 1D MOS capacitor model is used to explain the formation of the inversion layer in a MOSFET. While such a 1D model is quite appropriate for describing long-channel devices, modern deep sub-micron MOSFETs exhibit strong 2D effects, such as charge sharing between the source and drain depletion regions and the depletion under the gate. With TCAD simulation, these effects can be visualized. For the TCAD simulation laboratory, a physically based, semi-analytical charge control model for a Silicon-Germanium (SiGe) quantum well MOS capacitor structure has been developed for the purposes of simulation. The program calculates the capacitance and the charge distribution in the SiGe quantum well and the parasitic surface inversion layer under various gate biases. The SiGe layer should lie underneath a thin Si cap layer to assure the formation of a high-quality gate oxide, since oxides formed directly on silicon-germanium have a high density of interface states. The MOS capacitor structure consists of the following layer sequence: (100)-oriented p-type substrate, p+ doping spike, Si spacer layer, SiGe quantum well, Si cap layer, oxide and gate. It is assumed that the semiconductor region is doped uniformly throughout with Nd donors/cm3, with the exception of the p+ doping spike, which is doped with Na acceptors/cm3 and is thin enough that it is fully depleted at zero gate bias. Here, we have utilized quantum mechanical descriptions for the mobile charge distributions in the silicon cap layer and the SiGe well layer using an internal electrostatic potential calculation. In this experiment, the students first measure the MOS capacitor characteristics and then compare the experimental data with the TCAD simulation.
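
As one hedged example of what can be read off a high-frequency C-V curve, the following Python sketch estimates the oxide thickness from the accumulation capacitance and the substrate doping from the maximum and minimum capacitance values; the device area and capacitance numbers are illustrative, and the textbook uniform-doping relations are assumed.

    import math

    q, eps0 = 1.602e-19, 8.854e-14          # C; F/cm
    eps_ox, eps_si = 3.9 * eps0, 11.7 * eps0
    kT_q, ni = 0.02585, 1.45e10             # V; cm^-3

    def tox_from_cv(c_acc, area_cm2):
        """Oxide thickness from the accumulation (maximum) capacitance."""
        return eps_ox * area_cm2 / c_acc

    def doping_from_cv(c_acc, c_min, area_cm2):
        """Substrate doping from the high-frequency Cmax/Cmin values, solved
        by fixed-point iteration because the maximum depletion width itself
        depends on the doping."""
        c_dep = 1.0 / (1.0 / c_min - 1.0 / c_acc)  # strip off the oxide part
        w_max = eps_si * area_cm2 / c_dep          # maximum depletion width, cm
        na = 1e15                                  # initial guess, cm^-3
        for _ in range(20):
            phi_f = kT_q * math.log(na / ni)
            na = 4.0 * eps_si * phi_f / (q * w_max ** 2)
        return na

    # Illustrative numbers; in the laboratory these come from the LCR meter.
    area = 1e-4                                    # 100 um x 100 um gate, cm^2
    print(f"tox = {tox_from_cv(69e-12, area) * 1e7:.2f} nm")
    print(f"Na  = {doping_from_cv(69e-12, 8.8e-12, area):.2e} cm^-3")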

Bipolar Transistor Characterization and Modeling

In the following, we discuss in detail the bipolar device characterization module, which makes use of a single instrument (Agilent 4156C) for the characterization of bipolar junction transistors (BJTs) through five experiments performed sequentially. The experiment module for BJT characterization consists of the following experiments:

• Static collector characteristics
• Gummel plot
• Emitter resistance
• Collector resistance
• Current gain vs. collector current

In this experiment the students are asked to do the following:

• Collect data on the output and transfer characteristics of a given device
• Graph the observed characteristics
• Download and analyze the data
• Extract device parameters from the data and model the bipolar device

The complete range of semiconductor dc parameters can be quickly and accurately evaluated with the 4156C as a stand-alone instrument. The measured static collector characteristics of an NPN transistor are shown in Figure 3; these results may be analyzed to obtain the Early voltage and the collector output resistance.

Figure 3. Output characteristics

One of the most important steps in evaluating bipolar transistor parameters is measuring the collector current and the base current as a function of the base-emitter voltage (the Gummel plot). These measurements can be graphically analyzed to obtain the saturation current, the current gain, and the current gain vs. collector current characteristics, along with the base resistance and recombination current characteristics. The collector current and base current characteristics are illustrated in Figure 4.

Figure 4. Gummel plot

The series resistance of the emitter of a bipolar transistor can be determined by stimulating the base with current and measuring the voltage between the collector and emitter; the emitter resistance is then obtained from the inverse of the gradient of the characteristic curve. The measured characteristics are shown in Figure 5. Figure 6 shows how the 4156C can be used to accurately measure the series collector resistance in low-voltage regions, making it particularly valuable in the parametric analysis of CAD models. The measured current gain vs. collector current characteristics of the transistor are shown in Figure 7.

Figure 5. Emitter resistance

Figure 6. Collector resistance

Figure 7. Current gain vs. collector current

In the integrated online TCAD laboratory described above, a simulation program, Tool for Electronic Model Automation (TEMA), is used for the extraction of SPICE parameters (TEMA, 2010). The software is an EDA tool for the automated SPICE modeling of advanced semiconductor devices for the development of an analog/RF circuit design kit. The tool is suitable for modeling the dc, ac, RF, 1/f noise and high-frequency noise behavior of on-wafer or packaged semiconductor devices. Parameter extraction for the industry-standard compact models, such as SGP, VBIC, HiCUM, BSIM, PSP and HiSIM, can be performed for the design of complex analog/RF circuits and to verify analog mixed-signal designs. In the present version used in our laboratory, only the dc simulation is implemented. The SPICE parameters extracted from the measured Gummel plot (see Figure 4) are shown in Figure 8.

Figure 8. SPICE parameter extraction
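
A simplified sketch of such a dc extraction step, recovering the saturation current and the forward current gain from mid-bias Gummel data, is shown below; this is a generic textbook procedure with synthetic data, not the TEMA algorithm itself.

    import numpy as np

    VT = 0.02585   # thermal voltage at 300 K, V

    def extract_gummel(vbe, ic, ib):
        """Recover IS from the ideal part of the collector Gummel slope,
        IC = IS * exp(VBE/VT), and the current gain beta_F = IC/IB."""
        fit = np.polyfit(vbe, np.log(ic), 1)
        i_s = np.exp(np.polyval(fit, 0.0))   # extrapolate ln(IC) to VBE = 0
        return i_s, ic / ib

    # Synthetic mid-bias Gummel data (real data come from the 4156C);
    # IS = 1e-15 A and beta_F = 150 are the values to be recovered.
    vbe = np.linspace(0.55, 0.70, 8)
    ic = 1e-15 * np.exp(vbe / VT)
    ib = ic / 150.0
    i_s, beta_f = extract_gummel(vbe, ic, ib)
    print(f"IS = {i_s:.2e} A, beta_F = {beta_f.mean():.0f}")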

STUDENTS' PERFORMANCE EVALUATION

Keeping track of each learner's activities and progress in the laboratory sessions is essential for the instructor. In the Microelectronics and VLSI Engineering Laboratory, the students' performance in each experiment is generally evaluated at several stages: (i) a pre-laboratory quiz (after which a student is allowed to book time and perform the experiment), (ii) submission of the laboratory report after the successful completion of the experiment, and (iii) the viva-voce examination for the experiment. Once an experiment is successfully performed, the student needs to save the data for analysis. After the completion of the data analyses, the student has to prepare a detailed laboratory report for submission. In the developed laboratory management system, an online submission facility is provided. The online report submission format contains the details of the experiment as well as the data obtained and the analyses. The uploaded laboratory report may be edited by the student for some time even after submission. Only the laboratory teacher has access to this report, for evaluation and comments; students are not allowed to access reports submitted by other students. Only after successful submission of the laboratory report can the student take part in the viva-voce examination, which consists of several online true/false, yes/no or short questions. Both the viva-voce performance and the laboratory report are checked for each experiment performed successfully by a student. Once the student completes the laboratory module (say, consisting of 10 or 15 experiments), the final grade is prepared and displayed; only the student and teacher can see this grade. Finally, for all the laboratory users, the final grade sheet is prepared and displayed.

Preliminary Quiz

The main motivation of the online hardware-based laboratory experiments is to provide the facility to as many students as possible, so the number of users per instrument is large. It is thus necessary that a user engage the equipment only while actually performing the experiment. This is why a preliminary quiz is introduced and conducted before the learner can gain access to the actual experiment webpage and the equipment and/or the experimental setup. This is done primarily to optimize the use of the instruments by making sure that the user is familiar with the instruments and has sufficient knowledge of the experiment. The quiz consists of several random true/false questions about the instrument, setup, experiment and fundamentals. The user goes through the learning materials, videos, and details of the setup available for each experiment; when ready to use the instrument, he or she appears for the quiz. Only after the learner obtains a certain mark (set by the instructor for each experiment) is the student allowed to book the experiment time slot and gain access to the experimental setup during that time.

Uploading Laboratory Report

After the experiment has been successfully performed by the learner, the experimental data are saved and analyzed by the student in preparation of the laboratory report. This report should contain the details of the experiment, the setup, the data, and the results for the parameter values, etc. The online laboratory report submission is basically the filling-in of a form. After the preliminary section of the laboratory report is submitted, the user needs to enter the final results (parameter values) obtained from the analyses. These parameter values are then compared with the pre-set (default) parameter values provided by the instructor. Depending on the off-set (in percentage), the student is given marks, and grading is done accordingly. The instructor needs to set some tolerance for evaluation purposes, and the system compares the student's uploaded value with the default value. For example, for a tolerance of 10%, 1 mark will be deducted from the full marks (FM) for the parameter. The instructor may also specify three tolerance levels and the corresponding marks to be deducted.
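
A minimal sketch of this tolerance-band marking might look as follows; the band boundaries and deductions here are illustrative choices, not the values configured in the actual system.

    # Band boundaries and mark deductions below are illustrative assumptions.
    def grade_parameter(submitted, default, full_marks=5,
                        bands=((5, 0), (10, 1), (20, 2))):
        """Compare a submitted parameter value with the instructor's default
        and deduct marks according to the percentage off-set."""
        offset_pct = abs(submitted - default) / abs(default) * 100.0
        for tolerance, deduction in bands:
            if offset_pct <= tolerance:
                return full_marks - deduction
        return 0   # outside every tolerance band

    # e.g., an extracted threshold voltage vs. the instructor's reference
    print(grade_parameter(submitted=0.74, default=0.70))   # ~5.7% off -> 4 marks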

Viva-Voce Examination

Finally, the learner has to take part in the viva-voce. The viva-voce consists of several true/false or multiple-choice questions of the kind asked in a conventional oral examination. The students need to answer them within a certain period of time. The answers are checked for correctness, and the number of correct answers is determined; the performance is graded accordingly. This value is then added to the laboratory report marks to get the final marks for the experiment.

Final Grading

The final grade for a student is prepared when all the experiments in the laboratory class have been successfully completed. Both the student and the teacher can see the grade, and the instructors and/or teachers can save this information for future reference. The marks for each experiment are added to produce the final grade report/sheet for the entire group of students in the class.

LABORATORY EVALUATION AND STUDENTS' FEEDBACK

In order to learn how well the remote laboratories provide useful learning environments, different models of assessment are employed. Our experience with internet learning systems has shown that the greatest benefits come from automating the laboratory assessment, as described in the previous section. To evaluate the utility of the online TCAD laboratory, we offered a class of 27 students the option to repeat the experiment in a scheduled laboratory class to improve their learning. Only 16 students operated the equipment for 20 minutes or more in the scheduled laboratory class, while nearly all the students managed to complete the assigned task using the remote laboratory. Our trial with the remote-access laboratory showed that students could perform experiments in less time: using the remote laboratory system, 80% of the students managed to complete the required tasks, remotely operating the equipment, collecting data and running the simulations. These results demonstrate that the remote laboratory has significantly extended the laboratory experience for most of the students in the class.

TCAD tools are critical components of microelectronics engineering education. Over the years, extensive development and incorporation of simulation tools in our course have taken place. Our impression is that the available TCAD tools have been used extensively by the students, and that the students have received first-hand experience in using them. In the course evaluation and assessment process, the instructors have validated the outputs against the given specifications to assess the technical aspects of the home assignments submitted by the students. The students were asked to answer an online questionnaire and to identify themselves by their roll number so that the instructor could relate their responses to the log-file records of the experiments performed. In a survey among the students on the use of the integrated Microelectronics and VLSI Engineering Laboratory along with the simulation-based Technology CAD laboratory tools, several generic and concept-specific questions were asked. Analyses of the responses indicate a good balance of the available tools in the minds of the students. The assessment results collected from the laboratory class (106 students over two semesters) are analyzed in Table 1.

Table 1. Statistics of laboratory assessment

Evaluation Questions                     | Very Useful (%) | Useful (%)  | OK (%) | Neutral (%) | Not Useful (%)
Adaptability                             | 67              | 21          | 11     | 01          | -
Effectiveness                            | 73              | 12          | 06     | -           | 09
Understanding of experiments             | 57              | 23          | 11     | 04          | 05
Improvement in average student scores    | 23              | 49          | 21     | -           | 07
Impact of the tool in terms of learning  | 59 (high)       | 31 (medium) | 09     | -           | 01
Scalability of the tool                  | 66 (high)       | 24          | 10     | -           | -
GUI (user friendliness)                  | 79 (high)       | 14          | 06     | -           | 01
Online lab intuitiveness                 | 43              | 37          | 11     | 09          | -
Simplicity and interactivity             | 89 (high)       | -           | -      | 11 (medium) | -

CONCLUSION

The Virtual Wafer Fabrication (VWF) approach has become an integral part of the semiconductor industry. The possibility of teaching semiconductor manufacturing in a university environment in a highly cost-effective manner, by taking advantage of the high-speed internet and the available TCAD tools, has been explored.

Technology CAD is shown to be an excellent resource for teaching microelectronics. The objective of the laboratory component of any semiconductor fabrication course is to teach the students the unit processes involved in microelectronic fabrication and to introduce the practice of process development. Technology computer-aided design (CAD) tools for process and device simulation are now widely used in semiconductor process design. Microelectronics courses can be taught efficiently and cost-effectively by using process and device simulation tools without any physical process laboratory setup. When used in conjunction with conventional modes of teaching the design and fabrication of semiconductor devices and circuits, process simulation combined with device simulation can be a very powerful tool for a greater understanding of electronic devices and their operation. Device modeling and parameter extraction can help students understand the principles of electronic device operation and their applications in electronic circuits. With the online TCAD laboratory, the students are able to perform simulation experiments and explore the impact of process-flow modifications at virtually no cost.

An online Microelectronics and VLSI Engineering Laboratory integrated with a simulation-based Technology CAD laboratory can offer a new degree of flexibility in microelectronics education. The best environment for teaching semiconductor processing courses would be to use the TCAD simulation tools together with a physical microelectronics laboratory, but this is costly. The developed online TCAD course with its associated laboratory gives the students, even at the undergraduate level, a chance to learn about real silicon processing via hands-on process/device simulation without the need for IC processing facilities, and the opportunity to explore device behavior at a much deeper level. We expect online TCAD laboratories to encourage creativity, as interested students can design, create, and test devices in a matter of minutes to hours without using expensive resources.

REFERENCES

Dutton, R., & Yu, Z. (1993). Technology CAD: Computer simulation of IC processes and devices. Boston, MA: Kluwer Academic Publishers.

Fasching, E., Halama, S., & Selberherr, S. (Eds.). (1993). Technology CAD systems. Vienna, Austria: Springer-Verlag.

Fjeldly, T. A., & Shur, M. S. (2003). LAB on the WEB: Running real electronics experiments via the Internet. New York, NY: John Wiley & Sons. doi:10.1002/0471727709

ICCAP. (2004). Agilent Technologies, user manual.

Kenrow, J. A. (2004). Integrating professional TCAD simulation tools in undergraduate semiconductor device courses. Proceedings of the 2004 ASEE Conference.

Maiti, A. (2010). NETLab: An online laboratory management system. iJOE, 6, 31-36.

Maiti, C. K., Chattopadhyay, S., & Bera, L. K. (2007). Strained-Si heterostructure field-effect devices. London, UK: Taylor and Francis.

Purdue College of Engineering. (n.d.). Simulation Hub. Retrieved from http://www.ecn.purdue.edu/labs/punch

TEMA. (2010). Meridian Software Technology (P) Ltd., user manual.

The International Technology Roadmap for Semiconductors (ITRS). (2007).

The TCAD laboratory. Retrieved from http://lod.iitkgp.ernet.in/netlab/

Travis, J. (2000). Internet applications in LabVIEW. Upper Saddle River, NJ: Prentice-Hall.

UTMOST. (2008). Silvaco International, user manual.

ADDITIONAL READING

Maiti, C. K., & Armstrong, G. A. (2008). TCAD for Si, SiGe and GaAs integrated circuits. London, UK: IET Press.

KEY TERMS AND DEFINITIONS

Architecture: The architecture defines the framework or structure of an online laboratory.

Client Requirements: The client requirements describe the different conditions (operating system, browser, run-time engines, etc.) that must be satisfied on a server or machine prior to running an application.

Device Simulation: Semiconductor device simulation is the modeling of semiconductor devices.

Experiment: Experiments are part of the different types of online laboratories. Experiments may be real (measurement-based) or virtual (simulation-based).

Hybrid Laboratory: A hybrid laboratory mixes remote software simulations and real experiments in one single environment that can be accessed over the Internet.

Laboratory Management System: A software application for administration, documentation, the scheduling of experiments, and the tracking of students' activities and performance.

Measurement Experiment: The structure of a measurement-based experiment is generally fixed; the measurement conditions and parameters may be changeable.

Microelectronics Laboratory: Microelectronics is a sub-field of electronics. A microelectronics laboratory is concerned with the study and manufacture of micrometer-scale electronic components, usually made from semiconductors.

Online Laboratory: An online laboratory is defined as the hardware and software platform that provides resources over the Internet. Online laboratories may be divided into sub-categories: remote laboratory, virtual laboratory and hybrid laboratory.

Process Simulation: Semiconductor process simulation is the modeling of the fabrication processes of semiconductor devices.

Remote Laboratory: A remote laboratory provides real experiments that can be accessed via the Internet. This definition implies the control of real hardware and the realization of real measurements.

SPICE Parameters: SPICE parameters are the semiconductor device parameters necessary for circuit simulation.

Technology CAD: Technology CAD is a branch of electronic design automation that models semiconductor processes and semiconductor devices.

Virtual Laboratory: A virtual laboratory provides remote software simulations. The simulation model runs on the server and can be accessed over the Internet.

This work was previously published in Internet Accessible Remote Laboratories: Scalable E-Learning Tools for Engineering and Science Disciplines, edited by Abul K.M. Azad, Michael E. Auer and V. Judson Harward, pp. 185-205, copyright 2012 by Engineering Science Reference (an imprint of IGI Global).


Chapter 58

Implementing Business Intelligence in the Dynamic Beverages Sales and Distribution Environment

Sami Akabawi
American University in Cairo, Egypt

Heba Hodeeb
American University in Cairo, Egypt

EXECUTIVE SUMMARY

To compete successfully in today's retail business arena, senior management often demand fast and responsive information systems that enable the company not only to manage its operations but also to provide on-the-fly performance measurement through a variety of tools. The use of ERP systems has been slow in responding to these needs, despite the wealth of internally generated business databases and reports that functional integration produces. The specific nature of the demands made by senior management staff requires the aggregation of many external data elements and the use of data mining techniques to provide fast discovery of performance slippages or changes in the business environment. Data Warehousing and Business Intelligence (BI) applications, which evolved during the past few decades, have been implemented to respond to these needs. In this case write-up, we present how the ERP system was utilized as the backbone for BI tools and systems that enable the Sales and Marketing units of a transnational company subsidiary in Egypt to respond actively to the demands for agile information services. The Egypt subsidiary is the headquarters of the African region's operations, covering several franchises and distributors of the company's products, in addition to operating a beverage concentrate manufacturing plant in Egypt, which serves the entire region's beverage product needs.

DOI: 10.4018/978-1-4666-1945-6.ch058

Copyright © 2013, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.

BACKGROUND

Company Overview

The case firm considered in this chapter is a transnational company subsidiary in Egypt, located in the Free Zone in the Nasr City district of the country's capital, Cairo. The company owns and operates a beverage concentrate plant within its facility, producing the concentrate syrup used in all of its beverage products. These concentrates are delivered to bottling operators in many African countries for the final product mixing, bottling, packaging and trade-fleet distribution in their respective territories. The concentrate plant is among the few plants of the transnational company worldwide where the technical know-how formula of the company is produced, and it caters for the supply of concentrate syrup for the entire African, Middle East and Asian bottling territories. The company is divided into a number of business units (BU); Egypt's subsidiary assumes the function of head office for the African business unit, which comprises more than 25 franchises and distributors in many African countries. In this case paper, we consider the analysis and evaluation of the categories of information systems used in the head office of the company's subsidiary in Egypt for Sales and Distribution management in the region. In particular, we detail how the backbone Enterprise Resource Planning (ERP) and business intelligence (BI) systems are integrated and used to support agile management of operations in the Sales and Distribution functions within the highly dynamic, competitive beverage market.

Brief Economic Outlook of the Country Where the Case Company Operates

Forecasts put Egypt's food and drink exports growing by 59.4 percent between 2007 and 2012, which reflects not only the free trade agreements ratified by the government of Egypt since the late 1990s, but also the country's improving food and drink processing industry. Regional trade agreements such as the Greater Arab Free Trade Area (GAFTA) have also given producers access to a far larger market. Having gone into effect in 2005, GAFTA has gradually lowered customs duties on locally produced food across a broad range of Middle Eastern countries. These agreements opened larger markets for Egyptian producers, given the similarity of diets and the lack of a language barrier. Meanwhile, Africa is also becoming another key export market, mainly because of its proximity and the lack of domestic production capacity in the African countries.

Sharply rising food prices have been the cause of growing unrest in Egypt over the past two years. Though the Egyptian government has taken a number of measures to deal with the mounting public discontent, inflation hit the 20 percent mark, food prices skyrocketed, and a dire situation has evolved in the country. Egypt is a rather unique market for the region, as it benefits from a very large population (over 80mn) and an unsaturated food and drink market. However, the food and drink trade balance is highly dependent on imports. In addition, inflation, in a country with a national poverty rate of 22.9 percent according to the World Bank Development Data Group (2002), and with the Earth Trends country profile estimating 20 percent of the population to be below the poverty line, has led to increased levels of political risk and unrest. Against a backdrop of worsening global financial market turmoil and rapidly accelerating inflation, particularly in emerging markets, Egypt has significantly more economic hardship clouds on the horizon. Nonetheless, Egypt receives a fairly high score in the region for its food and drink market due to its per capita food consumption growth. Egypt does not fare well on the country structure indicator, with low GDP per capita, although the size of its population and the lack of market maturity help pull up its score.

SETTING THE STAGE

Companies in the beverage industry often operate as producers and distributors of canned and bottled soft drinks, concentrates, juices and other liquid beverage products. They are experiencing rapid changes in the way they have to do business. For one thing, the increased commoditization and diversification of beverage products has made it more difficult than ever for beverage manufacturers to differentiate their products and their brands against rivals and competing brands. In addition, beverage consumers are demanding greater product variety, higher levels of service and more value for their money (Bingham, 1999). Increased global competition, escalating retailer demands and increasingly stringent government regulations have added to the pressure on these companies, making it harder for them to compete globally. With growing concerns over food safety issues, these companies are also finding it necessary to monitor and track every phase of operation, from raw ingredients through finished product to packaging, storage and distribution, to ensure compliance with the higher safety standards. Driven by these challenges, beverage producing and distribution companies are looking more and more to their manufacturing processes and supply chains in an effort to increase production efficiency, reduce operational costs and effect better overall management of the enterprise and its assets, so as to sustain the desired levels of profitability and growth.

Business Imperatives

Historically, there has been resistance to the introduction of technology in the beverage industry, except perhaps to automate certain production processes (e.g. mixing, canning, bottling and packaging). In many large companies, the use of information technology (IT) has been limited to off-the-shelf or custom-tailored systems implemented to assist employees with such discrete business processes as accounting, corporate finance, human resources and purchasing. However, in the past few decades, many firms have implemented Electronic Data Interchange (EDI) and a host of separate functional systems, often in response to customer and vendor demands or management directives for adopting business-to-customer (B2C) and business-to-business (B2B) e-business functionalities. Despite all of the information flowing through these disparate systems across the enterprise, the information services available to operational and top management more often than not lacked integration, timeliness and comprehensiveness. The lack of timely financial information, integrated with order and production flow, hampered the ability of firms to know where they stood from day to day, from both a financial and an operations point of view. In most companies, by the time financial information becomes available in a report, it is historical and too late to change course to correct problems. In today's fast-paced global economy and rapidly changing marketplace, the lack of integrated, real-time information to support business and market analysis, business intelligence and decision making negatively impacts profitability, growth and the ability to react quickly to external market forces. Moreover, efficient order entry, materials and inventory control and fast turnover dictate the need for agile and responsive information systems to minimize waste and deliver efficient customer service. In market-driven industries, once the product is made, time is not nearly so critical, but storage, transportation and distribution are.

On the beverage industry side, we see an astonishing level of diversification fragmenting the industry. Where once a finite number of carbonated and non-carbonated soft drinks occupied the competitive landscape, today there is an almost infinite explosion of choices, from pure juices and juice blends, to sodas of every kind, waters, and a host of other beverages. Sales in this area are market-driven, so producers tend to rely on special packaging, competitive pricing, event-driven promotions and other attention-getting techniques to differentiate themselves. For these firms, forecasting and the timing of product-to-market are crucial. Market-driven companies thus need to be able to put a product out on the shelves, see how it sells, and then adjust production to match demand.

The preceding characteristics illustrate that each segment of the beverage industry has its own challenges to overcome, whether it competes in local, national, regional or global markets. However, to retain customer loyalty and grow market share, all beverage manufacturers and distributors need greater flexibility to respond quickly, whether to changing customer expectations, government regulations, or fluctuations in the marketplace resulting from seasonal changes in drinking patterns. In short, beverage producer and distributor companies are finding they have to increase the speed, or decrease the lag time, between when their products are made and when they are out on the market. To coordinate their production capabilities, manufacturers also need to increase their ability to handle different materials and product lines simultaneously. If they are to improve procurement and inventory control, they must find a way to accurately track both raw materials and finished goods. And, to build their company and product brands among customers and consumers, they must be able to ensure consistency of quality across product lines.

On the business end, optimizing plant operations can be a tremendous source of savings. Companies attaining high visibility and close control of key production elements are able to simplify complexity and better manage cash flows. They need to be able to forecast more closely to customer demand, avoiding either overextending or under-producing, to improve their return on corporate assets. The use of enterprise resource planning (ERP) systems becomes the natural choice for fulfilling such needs (Davenport, 1998). ERP systems operate on an enterprise-wide domain to ensure integration among all business functions and result in one large, single view of the company's data resources. If those systems are well matched to the business and well implemented, many benefits can accrue, such as decreased reaction times, better logistics flow, increased responsiveness to market and customer changes, and improved supply chain management (SCM).

Information Systems Backbone Implementation in the Case Firm

By early 2000, the case company presented in this chapter planned to acquire an ERP system to replace its aging legacy systems. The company looked for a proven technology platform for its business applications, with strong functionality in areas such as financial applications, procurement, order entry and fulfillment, planning and scheduling, inventory management and optimization, product configuration, flexible product costing, manufacturing, EDI capabilities, and support for multiple plants and/or warehouses and a complex distribution system. To that end, a blanket, multilayer deal with a highly reputed ERP vendor was selected and configured to run several business application modules, to cut costs and share data through multiple distributed systems located in multiple regional datacenters, with secured global access. These applications are built around ERP, customer relationship management (CRM) and SCM modules, as well as a set of integration middleware tools. The system aimed at helping improve the logistics of store deliveries, sales orders and the back office operations of the networked company's manufacturers, bottlers and distributors around the globe. This distributed approach targeted improved market execution and better service to consumers, in addition to a more integrated system platform to serve information needs at the store and account levels, and more effective management of the business on the street. The implemented distributed ERP systems provided the following application and service tiers:

1. The back office tier, which included several modules. The African region implementation provided the following:
a. Marketing Expenses Management
b. Business Travel Management (BTM)
c. Human Resources (HR)
d. Financials and Sales
e. Logistics
f. Supply Chain and Manufacturing

2. The data warehouse and decision support tier, which included the following modules to serve the African business unit:
a. Value chain modeling: this module manages everything that adds value to a product, from the raw materials used to produce it (such as sugar, water, vitamins and caramel), through the time employees spend on various functions and their salaries, to the trucks used to distribute the product and the fuel they require. It uses all of those individual costs to determine how much the product should sell for.
b. Forecasting: the sales business intelligence module, responsible for everything related to sales. It manipulates three types of data: historical data, which includes the actual weekly sales; business plan data, which constitutes the sales forecast for the year; and rolling estimate data, which is a more accurate prediction of sales based on the discrepancy between actual sales and the business plan (see the sketch following this list).
c. Decision support system: this module helps sales function managers make sales-based decisions by providing the following services:
• A sales flash report issued on a specific day of the week. Weekly sales reports are sent to employees on company-provided BlackBerries, with the weekly sales updates given against business plan predictions or against prior years. In addition, sales are always directly related and translated into market share. The company used a push strategy to alert employees that they had received an email.
• Telecom reporting, used to control the escalating costs of telecommunications within the African business unit. Managers get periodic data from mobile service operators to determine how much their units pay for mobile services. This particular application used the EDI facility provided by the system through the middleware layer.
• A margin minder report, used to track sales in every outlet and to provide small outlets with data on how to make more profit. For example, if a small café is buying a lot of a certain Stock Keeping Unit (SKU) but the reports show that the café is not selling much of it, the system recommends that the outlet buy less of that SKU and more of another one that sells well at that particular café. In addition, the module handles 'Right Execution Daily' (RED), which manages product display and the 'picture of success' (POS), i.e., making sure that the company logo is placed at the entrance of sites, that the temperature of the product is set at the prescribed temperature, and that the company refrigerators given to sites stock the company's products only.
d. Business Intelligence (BI) tools, which perform activities such as collecting and transforming data from a variety of systems, then consolidating and aggregating these data in readiness for reporting to assist decision making. A visual reporting component for presenting this knowledge was missing from this battery of tools: at this stage of implementation the tier did not incorporate a dashboard to provide a visual summary of the operation's key performance indicators (KPIs). This particular shortfall is the subject of the additional development addressed in this chapter.
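To make the interplay of the forecasting module's three data series more concrete, the following minimal sketch shows one plausible way a rolling estimate could fold the observed actual-versus-plan discrepancy back into the balance-of-year forecast. The scaling rule, names and figures are illustrative assumptions, not the case system's actual logic.

```python
# Illustrative only: re-forecast the remaining weeks by scaling the business
# plan with the actual-to-plan performance observed so far. This is one
# simple rolling-estimate rule, not the case company's actual algorithm.

def rolling_estimate(business_plan, actuals):
    closed = len(actuals)                                  # weeks already closed
    performance = sum(actuals) / sum(business_plan[:closed])
    return list(actuals) + [round(week * performance)
                            for week in business_plan[closed:]]

bp_weekly = [1200, 1250, 1300, 1400]   # business plan, unit cases per week
actual_weekly = [1100, 1180]           # two weeks closed so far (~7% under plan)
print(rolling_estimate(bp_weekly, actual_weekly))
# -> [1100, 1180, 1210, 1303]: balance of year scaled to observed performance
```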

Additionally, there are some applications used in the African region's Egypt HQ that the IT manager referred to as 'ko', which literally means 'knock-out'; what it actually meant is that these are site-specific applications, internally developed by the company, for the network of sites in the African business unit.

A middleware tier was deployed by the company to achieve integration among these disparate enterprise applications, through a reporting and communications layer comprising:

• Mail: the mail system used by the company is the Lotus Notes platform, which provides a collaborative work environment as well as an email service.
• Microsoft Office: the end-user productivity tool for the client-side hardware (PCs, laptops, PDAs, etc.).
• SharePoint portal service: used as the gateway/portal through which authorized employees gain access to the various services provided by the host of applications, on site or globally.

The architecture of the technology service layers after the implementation of the ERP system in the datacenter accessed by the Egypt subsidiary HQ is shown in Figure 1. Some of the modules were implemented in a "vanilla" standard mode, while others were customized to adapt to beverage industry-specific operations. Through a blanket contract, the company receives upgraded versions of the modules on a periodic basis (for the standard application modules only). The network and communications infrastructure provided connectivity to the corporate datacenter for all sites in the African BU. Business transactions are entered on a real-time basis into the respective ERP module. Each site must thus secure the health of its internet/private connectivity and monitor the operational effectiveness and availability of its links.

Figure 1. Architecture of the IS layers used in the company before the new BI dashboard implementation

The Sales Reporting System Prior to Implementing the New Dashboard Facility

For each regional site, the specific reporting and operational needs (which included manufacturing supply chain, sales force management, marketing, and inbound and outbound logistics) are accessible remotely from the company's regional datacenter in Africa. Several functionalities were considered for the management of the African BU in Egypt's HQ site, which is concerned, among other business functions, with the consolidated sales within its domain. The process with which this objective was initially achieved is described in Figure 2. In this process, the sales data exchanged between the franchises/bottlers and the HQ constituted:

• Actual sales figures: the actual sales volumes reported by the bottlers. Actual sales were communicated on a daily basis through e-mails, via Excel templates, sent to every franchise analyst.
• Rolling estimate (RE) figures: these reflect the adjusted sales on a monthly basis, revising targets during the course of a year to reflect changes in the market, and are used to predict sales volume for the balance of the year. These figures are confirmed after communication between the franchise managers, the business planning manager and the HQ office. Once confirmed, the business planning manager sends the final confirmed version of the file, which contains the actual sales so far and the balance-of-year figures on a monthly basis for all sites in the domain countries.
• Business plan (BP) figures: set by the BU management at the beginning of the year. They do not change during the course of the year. The BP dictates target sales per brand, per pack, per month.

Figure 2. Sales reporting process before implementation of the IDB system

Sales Excel sheet templates, formatted liberally by the sales analyst of each franchise/bottler, are used by the company for sales reporting. These templates constitute the following reports:

1. Daily Reports: The bottlers of each country send daily sales reports, which include the daily actual sales volume in both physical and unit case volumes, to the franchise sales analyst. If the bottlers report in physical cases, the franchise analyst converts the figures into unit cases using the unit case conversion factors (a sketch of this conversion follows the list). The reports include sales by brand and pack for every day of the week. The drawback of this reporting scheme is that there is no fixed template format for the daily reports sent by the bottlers, who communicate the daily sales volume to the HQ.

2. Weekly Reports: The franchise sales analyst prepares and sends weekly reports after consolidating the daily sales sent by the bottlers to obtain total weekly figures. The reports also display the rolling estimates split by week, in addition to the weekly budget split and the prior year's weekly split. The business planning managers use the data communicated through those reports and consolidate the data of all franchises to send consolidated weekly reports to HQ.

3. Monthly Reports: Every franchise analyst is requested to submit monthly reports per the deadlines dictated by a quarterly reporting calendar communicated by the sales analyst. These monthly reports include: the best estimate for the current month, a draft of the upcoming month's RE and the upcoming month's weekly split (usually communicated before the current month is closed); and the RE volume by brand and pack, submitted after the month is closed, which communicates the 'Actuals' of the closed month in addition to the RE figures for the balance of the year. All 'Actuals' and RE figures are provided with a brand/pack split.
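The physical-to-unit-case conversion mentioned under the daily reports can be sketched as follows. The conversion factors shown are invented for illustration; the actual factors used by the franchise analysts are not given in the case.

```python
# Hypothetical sketch of the daily-report conversion: the factors map each
# pack to unit cases per physical case and are illustrative values only.

UNIT_CASE_FACTORS = {
    "24 x 330 ml can": 1.39,
    "12 x 1 L PET": 2.11,
}

def to_unit_cases(pack: str, physical_cases: float) -> float:
    """Convert a bottler's physical-case volume into unit cases."""
    return physical_cases * UNIT_CASE_FACTORS[pack]

print(to_unit_cases("24 x 330 ml can", 500))   # -> 695.0
```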

The HQ sales analyst then consolidates the monthly brand/pack data sent from all franchisees to prepare brand/pack monthly RE updates. As this process shows, many countries are involved in the reporting cycle of the sales results, and many reports are communicated back and forth, using email and non-standardized Excel sheet templates, between the country sales analysts and their corresponding managers at the HQ site. Many errors and inefficiencies persisted due to the use of those sales templates. The problems most often encountered included the following:

• Data inconsistencies, due to inconsistent product coding and Excel template variations, led to lateness in reporting;
• Miscommunication, due to network problems, variable semantics and other inconsistencies, resulted in many duplicate data items (the same data repeated in several reports);
• Consolidating data from non-uniform templates with variable semantics wasted time and effort;
• Lack of analysis, due to the meager analytical competence of sales analysts;
• Lack of standardization led to too much data but no information;
• Redundant processes resulted in data chaos (too many templates causing confusion).

To that end, the focus of this case write-up is to determine how the company utilized a new business intelligence dashboard tool to provide the Sales and Marketing units with the agile and fast decision-making platform necessary to respond to dynamic market and consumer changes over a wide geographic area, while at the same time overcoming the critical impediments illustrated in the process described earlier and schematically diagrammed in Figure 2.

Literature Review

During the past two decades, many organizations adopted ERP as their preferred backbone information system for data and information management; since then, the world ERP market has been poised to command a little under $65 billion by the end of 2009, according to the AMR Research Market Analytix Report (2005). The use of ERP systems by those organizations targets cost reduction, increased productivity, and improved customer satisfaction and supplier relations, in the drive to improve competitiveness in the global networked market space (Davenport, 1998). The costs expended on the acquisition and implementation of ERPs run to millions of dollars and affect firms' overall earnings and revenues (Davenport, 1998) as well as their market values (Chatterjee et al., 2002). With such huge capital investments, many firms are forced to assess carefully the return on investment (ROI) of such an infrastructure. The impact of adopting ERP normally goes beyond the immediate control of business resources; it sets the grounds for organizational change and for the way business processes are performed (Kallinikos, 2004). As noted by Brazel and Dang (2008), the implementation and use of ERP systems represent a radical change from the operation of legacy systems. Many researchers have therefore studied ERP use and adoption from various perspectives. As ERP is a category of information system, several authors approached research on it from the perspective of the information systems success model (DeLone and McLean, 1992, 2003; Gable et al., 2001, 2003; Seddon, 1997; Sedera et al., 2004; Ifinedo, 2006a; Ifinedo and Nahar, 2007; Robey et al., 1999).

ERP implementation is a challenging endeavor for many organizations, spanning many functional areas (Yen et al., 2002) and demanding a high level of coordination among all stakeholders, along with adjustment to changes and synergy in the procedural workflow of every business function. Use of such systems implies changes in the workflow of business processes (Kallinikos, 2004). To that end, the success of implementation projects and the critical factors influencing this success were studied by many researchers (Bancroft, 1996; O'Leary, 2002; Ptak and Schragenheim, 2000). Other studies aimed at identifying the critical factors impacting ERP implementation projects (Fryling, 2005). Factors influencing successful implementation from the operational (rather than technical) point of view, for example the business operations coverage of the package and the number of licensed users, were addressed by many researchers (Francalanci, 2001; Kumar et al., 2001; Markus et al., 1988; Parr and Schanks, 2000). The impact of system configuration and/or setup revisions and enhancements was studied by Fryling (2005), Nicolaou (2004), Nicolaou and Bhattacharya (2007), Light (2001), Mensching and Corbitt (2004) and Nah et al. (2001); the impacts of organizational and national cultures by Krumbholz et al. (2000) and Soh et al. (2000); and the configuration and setup of the packages' parameter tables, with their overall impact on the architecture and flexibility of ERP packages for rapid adaptation, by Fan et al. (2000) and Sprott (2000). Some articles also addressed the impact of ERP process standardization and the restructuring of organizational tasks, or business process reengineering (Kumar and Van Hillegersberg, 2000). The ability of the system to provide quality, trusted information for operational and executive management decision making has also been examined by many researchers (Xu, 2003; Madapusi et al., 2007).

ERP integrates all functional areas of the organization, acting as the backbone of its information management platform (Chou, 2005). With its integrated database, an ERP system ties in the data resources of other functional business components, such as customer relationship management (CRM) and SCM systems, to support the informational needs of decision making. Since an ERP system is internally looking, access to data across the organization's boundary is required for decision making at the strategic and organization-wide levels. It is not that ERP systems lack a wealth of information; they have it. The challenge lies in the ways of mining it, and ERP alone cannot facilitate a real-time decision support function, for several operational reasons. Since information is the foundation of every critical business decision, decision support systems (DSS) are vital for any organization (Drucker, 1998). Report writers can access data from multiple ERP modules and consolidate them with other data elements for decision support. Business Intelligence (BI) is about getting the right information, to the right decision makers, at the right time (Alter, 1980). It is an enterprise-wide platform that supports multi-dimensional reporting, analytics and decision modeling, leading to fact-based decision making and enabling a "single version of the truth" (Rasmussen et al., 2001). The common pain points that BI is used to solve are typical of what most organizations experience:

• Data everywhere, information nowhere;
• Different users have different needs;
• Excel versus PDF;
• Pull versus push;
• On demand versus on schedule;
• Your format versus my format;
• Takes too long: wasted resources and effort;
• Security;
• Technical "mumbo jumbo", and why I just can't get it to you when you want it.

By integrating business intelligence (BI) tools with ERP modules, data can flow directly from the ERP database on a real-time basis. However, some reliability, availability and scale-efficiency issues may arise as a consequence, particularly due to the excessive access load, which may hinder transactional operations. Separating the active ERP database from that of the BI led to embracing a second data storage tier, a data warehouse (McDonald et al., 2002). The ERP-BI, online analytical processing (OLAP) and DSS tools integration framework is based on congregating all needed data from the ERP system and other external data resources, loading them into a data warehouse or a data mart, and then linking to several BI tools, such as OLAP, data mining, analytics and reporting systems, to create more consistent and knowledge-centric data reporting. BI tools provide such functionalities. More and more organizations extend their ERP beyond the back-office level to improve sales, customer satisfaction and decision making (Stedman, 2002). Integration of the BI and ERP systems contributes additional value to businesses (Chou, 2005). According to Holsapple and Whinston's (1996) characterization of common DSS features, BI generates different views of the available data systems, with a scaled data mart or data warehouse providing rich, timely, well-structured and cleansed information to the BI. Bolt-on BI systems are also used to serve financial, marketing and sales queries using different tools (CRM2day.com, 2004). Customer experience management (CEM) in retailing has also been researched within the framework of BI by a score of researchers, such as Ding et al. (2006) and Kamaladevi (2010).
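A rough, self-contained sketch of this extract-and-load pattern is shown below, with SQLite standing in for both the ERP database and the warehouse tier; the schema, table names and sample rows are assumptions for illustration only, not any vendor's actual data model.

```python
import sqlite3

# Stand-ins for the ERP transactional store and the separate warehouse tier.
erp = sqlite3.connect(":memory:")
erp.execute("CREATE TABLE sales_orders "
            "(country TEXT, brand TEXT, pack TEXT, sold_on TEXT, unit_cases REAL)")
erp.executemany("INSERT INTO sales_orders VALUES (?,?,?,?,?)", [
    ("Egypt", "Cola",   "24x330ml", "2009-03-02", 120.0),
    ("Egypt", "Cola",   "24x330ml", "2009-03-03",  95.0),
    ("Kenya", "Orange", "12x1L",    "2009-03-02",  40.0),
])

dw = sqlite3.connect(":memory:")
dw.execute("CREATE TABLE weekly_sales "
           "(country TEXT, brand TEXT, pack TEXT, week TEXT, volume REAL)")

# Extract and aggregate from the ERP, then load into the warehouse so that
# BI/OLAP queries do not contend with live transactional processing.
rows = erp.execute(
    "SELECT country, brand, pack, strftime('%Y-%W', sold_on), SUM(unit_cases) "
    "FROM sales_orders GROUP BY 1, 2, 3, 4").fetchall()
dw.executemany("INSERT INTO weekly_sales VALUES (?,?,?,?,?)", rows)
dw.commit()
print(dw.execute("SELECT * FROM weekly_sales").fetchall())
```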



Issues of integrating SCM with customer experience management were dealt with by Chou (2005). Approaches to building and implementing BI systems, an important decision for many implementing organizations, were investigated by Olszak and Ziemba (2007).

CASE DESCRIPTION

Technology Concerns and Needs

Within the context of the studied case, a new sales reporting process was developed to address the inefficiencies and pitfalls of the template-based sales reporting system described earlier. Egypt's HQ IT department, together with the Sales unit, embarked on the development of a new BI tool, code-named the IDB (information dashboard). The new IDB tool was designed to provide an enhanced method for uploading sales data and to address both the inconsistencies of the country sales figures and the cumbersome method of obtaining consolidated reports at the HQ. Analysts were trained to upload data to the new IDB platform using a newly developed product coding system, taking the codes from a commonly accessible codes database. The coding system is based on a code consisting of 10 digits. The first four digits represent the brand type and flavor. The last four digits represent the package size and type. The two digits in the middle represent the type of syrup: liquid for normal packages or powder supplied to machines. This level of breakdown provided an easy and efficient facility for tracking any product at the Stock Keeping Unit level, since the code embeds all the necessary information regarding the product, whether in terms of brand type, flavor, package size, type or format (see the sketch below).

The implementation of the IDB tool provided a single unified interface for users to upload the sales data from their respective countries, yielding aggregate sales reporting for the HQ in Egypt. The tool allowed drill-down granularity for users. We will first analyze the structure of this implemented information dashboard tool (IDB) and then describe the new sales reporting process. The IDB acted as the single "database" view for all trade-related information used by all franchisees, exploited through several data mining techniques. It is a customized web-based application built to access the warehouse database, allowing access from anywhere through the company's network. Furthermore, the IDB is used as the reporting vehicle to management globally, retiring the once prevalent sales templates and the use of email for posting them. Access to the IDB is achieved via a web browser using a specific URL or via the company portal (based on the MS SharePoint platform). Users log in with their IDs and passwords. After the system is updated with the daily, weekly or quarterly sales figures, the respective users can log in and retrieve any report needed, at any granularity level, for any specific period.

The sales reporting process constituted several activities. Every franchisee analyst prepares the weekly sales data using the relevant codes and uploads the data to the IDB. The upload is performed automatically through an autoloader, scheduled for Mondays every week. The system decodes the uploaded codes and maintains the actual data on the system. Business plan figures and rolling estimate figures are uploaded in the same format and manner. Whenever any discrepancy in the data uploaded to the IDB by a single country affects the accuracy of the data for the total business unit, control measures are promptly carried out to assure the integrity and accuracy of the data on the system. After all franchisees upload their weekly figures to the IDB, the HQ sales analyst verifies the data on the system and adjusts or deletes any discrepancies found, detecting the countries responsible for the discrepancies and suggesting methods for resolving these conflicts with the analyst concerned. The sales analyst can also report any variance detected on the IDB due to an error in the application engine through the Business Applications Call Logging system, used by the IT department as the support tool for any technical problems encountered while using the IDB. The HQ sales analyst is also responsible for planning and coordinating all IDB activities with all franchisee analysts, in terms of training analysts, installing new IDB applications and communicating with IT to resolve encountered problems. The HQ sales analyst also issues new codes for newly launched packs that are not yet on the system, by filling in a request form, and monitors the status of expired codes and de-activates them.
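A minimal decoder for such a 10-digit code might look like the sketch below. The digit groupings follow the description above; every lookup value is invented for illustration, since the actual code tables are not given in the case.

```python
# Hypothetical lookups: digits 1-4 = brand/flavor, 5-6 = syrup type,
# 7-10 = package size and type. All table entries are invented examples.
BRAND_FLAVOR = {"0104": "Cola / Classic"}
SYRUP = {"01": "liquid (normal packages)", "02": "powder (machines)"}
PACKAGE = {"2433": "24 x 330 ml can"}

def decode_sku(code: str) -> dict:
    if len(code) != 10 or not code.isdigit():
        raise ValueError("SKU codes are exactly 10 digits")
    return {
        "brand_flavor": BRAND_FLAVOR.get(code[:4], "unknown"),
        "syrup": SYRUP.get(code[4:6], "unknown"),
        "package": PACKAGE.get(code[6:], "unknown"),
    }

print(decode_sku("0104012433"))
# {'brand_flavor': 'Cola / Classic', 'syrup': 'liquid (normal packages)',
#  'package': '24 x 330 ml can'}
```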

Developing the IDB Dashboard Layer

The new sales business process described is depicted in Figure 3. Upon upload of the sales data to the IDB, users can simply log into the system from anywhere, access the reports menu via the portal or the specially provided URL, and pick the type of report required using drop-down lists. These lists include all product parameters. The IDB system enabled users to get any type of information by selecting the specific criteria needed in the downloaded report, at any degree of granularity. In other words, users can customize the reports retrieved from the system by choosing the appropriate parameters from the displayed scroll-down lists (a sketch of this parameter-driven retrieval follows the list below). Drill-down lists may include, but are not limited to, parameters such as:

• Country
• Bottler
• Period (year/month/week)
• Brand
• Flavor
• Package type
• Pack size
• Historical data
• Business plan data
• Rolling estimate (RE) data
• Analysis of sales performance versus prior years, BP, RE, etc.
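The sketch below illustrates the parameter-driven retrieval just described: the caller picks any combination of drop-down criteria plus an aggregation level and gets back a filtered, aggregated view. The data, field names and API shape are illustrative assumptions, not the IDB's actual implementation.

```python
import pandas as pd

# Toy sales table; in the IDB these rows would come from the warehouse.
sales = pd.DataFrame({
    "country": ["Egypt", "Egypt", "Kenya"],
    "brand":   ["Cola",  "Cola",  "Orange"],
    "pack":    ["330ml", "1L",    "1L"],
    "week":    ["2009-W10", "2009-W10", "2009-W10"],
    "volume":  [1100.0, 400.0, 250.0],
})

def report(data, group_by, **filters):
    """Filter on any drop-down criteria, then aggregate at the chosen level."""
    for column, value in filters.items():
        data = data[data[column] == value]
    return data.groupby(group_by)["volume"].sum()

# e.g. weekly volume per brand, for Egypt only
print(report(sales, group_by=["week", "brand"], country="Egypt"))
```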

Access to the facilities offered in the IDB system is granted through a robust, deliberately designed authorization scheme. The IT unit provides authorized users with their respective personal user names and passwords for authentication; however, accessing and retrieving information at the various data-granularity levels is subject to authorization policies. This authorization scheme is set by the respective country management, and communicated to the IT department, to provide different access privileges for the various levels of the organizational hierarchy.
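One simple way to express such a hierarchy-based policy is sketched below: each role is granted a maximum reporting granularity, and any request for a finer drill-down level is refused. The role names and levels are assumptions for illustration, not the company's actual policy.

```python
# Granularity levels ordered coarse -> fine; a role may drill down only
# as far as its granted maximum level. All names are illustrative.
GRANULARITY = ["business_unit", "country", "bottler", "outlet"]
ROLE_MAX_LEVEL = {
    "bu_manager": "outlet",
    "country_analyst": "bottler",
    "viewer": "country",
}

def authorized(role: str, requested_level: str) -> bool:
    allowed = GRANULARITY.index(ROLE_MAX_LEVEL[role])
    return GRANULARITY.index(requested_level) <= allowed

print(authorized("viewer", "outlet"))    # False: finer than this role allows
print(authorized("viewer", "country"))   # True
```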

Figure 3. Sales reporting process after implementation of the IDB system

The Impact of the IDB System on the Effectiveness of the Sales Reporting Process

The use of the IDB facility as a component of the battery of BI tools in the company's IS platform enhanced the sales reporting cycle in many ways and increased the overall efficiency of the sales reporting process. The major areas of improvement the IDB offered are:

• First of all, it eliminated the redundant process of sending the former sales report templates, which were subject to various interpretations, wasted time and effort, and caused inconsistencies and mistrust of the data.
• Using the IDB as an outer shell of the data warehouse provided a very useful tool, as it integrates all the types of information required, whether historical data, brand information, package information, business plan figures, etc. Before the IDB, managers and analysts had to go through many Excel files and sheets to obtain information, which was time consuming, and some of the required data were lost or hidden. Moreover, the same information was sometimes obtained with different values from different sources, causing confusion.
• It provided multi-dimensional reporting, using "slice and dice" operations by the analysts.
• The IDB acted as a timely and trustworthy reporting tool with a single source of sales data, not subject to various interpretations. All analysts are bound by specific deadlines to upload their data, so managers are confident of finding the data on time and retrieving it in a standard format, facilitating data integration and the decision-making process.
• The IDB provided an analysis facility. The reports retrieved facilitated comparisons and analytical measures that can be used to help managers in the decision-making process.
• The IDB helped in determining profitable trading partners through the use of multi-dimensional data mining.
• It augmented and replaced the cumbersome previous spreadsheet-based system.
• It enabled driver-based planning to streamline and focus efforts around HR plans, manufacturing requirements, or sales resources.
• It enabled the use of rolling forecasts to increase forward visibility.
• It reduced consolidation, close and reporting cycles by days or weeks.
• It facilitated conducting what-if scenarios for different revenue projections or changes in business lines.
• It facilitated tracking key corporate performance indicators from the desktop.

Besides these impacts on the performance of the sales function, the IDB gave brand, marketing and sales managers the knowledge they need to strongly impact the top line, through brand, sales and promotion/marketing analyses, delivering a full range of sales management analyses. With the IDB tool, near-real-time measures, by account, channel/channel segment, promotion and campaign, can be used to improve the effectiveness of other business cycles, such as the full range of brand, portfolio and product analyses, along with the ability to ask random, ad hoc questions and to alert when actual performance varies substantially from plan (a sketch of such an alert follows this list). In a nutshell, the IDB tool enabled getting the right information, to the right decision makers, at the right time. It is an enterprise-wide platform that supported multi-dimensional reporting, analytics and decision modeling, leading to fact-based decision making and a "single version of the truth." The integration of the marketing data warehouse with the IDB tool also yielded many benefits, such as:

1. Financial process management of annual marketing budgets. Through the marketing business warehouse, brand managers gained autonomy in managing this process themselves by accessing this option on the IDB, where they would directly input their budgets online and the embedded workflow would proceed to request the necessary approvals. Budget managers were also able to perform internal budget shifts if need be, as well as to raise new purchase orders, where previously this was done manually.
2. Ensuring internal system adherence to corporate financial and procurement procedures. There was no need for manual issuance any more, as the ERP system provided the necessary constraints through budget coding and online approval requirements within a hierarchical workflow.
3. Monitoring marketing spend per brand and ensuring correct allocation across the marketing mix. The IDB reports were very malleable in their structure: they can be driven by specific periods, campaign types, seasonality, segmentation by below-the-line or above-the-line marketing activities, even by media type, which gave brand managers and marketing managers a better ability to assess the direct impact of a campaign's spend on sales at a particular moment in time.
4. Providing budget reporting across franchise functions. Again, this dissolved a manual role, as the new system allowed automatic retrieval, as explained in the point above.
5. Reporting on brand contribution and profitability per brand, pack, flavor and concentrate. The integration with the company-wide planning system allowed further report segmentation, for example by account, showing how each cost contributed to the overall profitability of a brand by integrating other operating expense (OPEX) and capital expense (CAPEX) reports. These reports carry high-level, top-line data accessed by top managerial staff, and they also depend on the level of data-breakdown detail fed in by accountants.
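The variance alert mentioned above can be sketched as a simple threshold check; the tolerance, account names and figures below are illustrative assumptions, not the IDB's actual rules.

```python
# Flag any account whose actual-to-plan variance exceeds a tolerance,
# mirroring the "alert when actual varies substantially from plan" idea.

def plan_variance_alerts(plan, actual, tolerance=0.10):
    alerts = {}
    for account, planned in plan.items():
        variance = (actual.get(account, 0.0) - planned) / planned
        if abs(variance) > tolerance:
            alerts[account] = round(variance, 3)
    return alerts

plan   = {"modern_trade": 10000, "wholesale": 8000, "horeca": 3000}
actual = {"modern_trade": 10350, "wholesale": 6900, "horeca": 3050}
print(plan_variance_alerts(plan, actual))   # {'wholesale': -0.138}
```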

The final Information Systems infrastructure architecture of the company after the IDB implementation is depicted in Figure 4.

Figure 4. Architecture of the IS layers used in the company after IDB implementation

CURRENT CHALLENGES FACING THE CASE FIRM

The ERP system has been in place in the company for many years. Employees did not have an option; they had to use it. Though the company managed to streamline and align some of its business processes and operations to the "best practice" processes embedded in the ERP, with minimum changes, customized bolt-on tools were inevitably used to adapt to the specific nature of the beverage industry. This was quite evident in the sales and marketing functions. As a result of the deployment of the new IDB system, several problems showed up. The following obstacles and challenges were observed after installing the IDB:

• User reluctance to abandon the old Excel templates increased resistance to the newly developed data entry facility.
• Training analysts to use the IDB was a challenging feat. The dispersed, remote locations of the analysts made it difficult and costly to provide them with the necessary skills for optimum use of the new tool. Though conference calls and video-conferencing were used to tackle this obstacle, the level of operational excellence in using the tool fell short of what management had postulated. Incompetent use of the system's facilities was also observed, due to a lack of analytical proficiency.
• Unrealistic expectations were held of what could be achieved in a relatively short period.
• There was a lack of in-house technical expertise (the IDB support team is remotely located).
• Real-time data reporting was accessible only once accounting inputs had been uploaded into the system in due course; errors in data entry may thus lead to wrong information and decisions. As one employee contends:

“The initial step when we switched from the legacy system to the new setup involved data migration. This was a very tedious task and involved many months of intricate, detailed work where existing data had to be assigned codes similar to the coding on the ERP to allow a smooth transition. The challenge was to transition with almost zero error, and it entailed, at a certain moment in time, halting transaction entry to ensure all real-time data was captured. There was a mechanism to double-check the validity of the data after transition.”

• Employees were not receptive to change and resisted the system implementation, initially rejecting the idea for months. Advanced technology-based training methods were adopted to allow for virtual help, where global ERP power users were able to access local employees' screens to guide them through a step-by-step process. Adaptation took about one whole year, as users gained more confidence in maneuvering the system.
• Although the idea of the ERP is to eliminate a lot of paperwork, this was not the case in some areas in Egypt, as the embedded ERP workflow involved external as well as internal stakeholders' input into the system. For example, marketing assistants raising new purchase orders through the transaction modules were required to attach three different quotations from three different suppliers. The idea was that suppliers would send those quotations through an interface mechanism that allowed them to be attached automatically to the purchase order. Given that the business climate in Egypt is still underdeveloped in terms of technology maturity, many suppliers continued to send their quotations manually, by courier messenger or fax; few at the time were used to the idea of sending them by email. This created a bottleneck, since the marketing assistant then had to scan those documents and upload them manually to process the purchase order: raising a purchase order that should have taken about an hour ended up consuming a whole day, and it had still not been sent to the brand manager for approval. In some situations this produced further time bottlenecks, as some suppliers required down payments prior to activation of the required job on the system.
• Brand managers viewed their newly added task of managing the input-output process of budgets and purchase order requisitions themselves as a waste of time, as they had not previously done so. They viewed it as a purely transactional administrative task that should be performed by the accounting department, or rather assigned to a department secretary for coordination. In the beginning they did try to perform the processing themselves, but later decided that these administrative tasks impeded their focus on managing the brand with consumers from a marketing perspective.
• An operational issue of the ERP is currency variation. Although the system allowed multiple-currency input, the final marketing budget to be approved by division heads and headquarters had to be submitted and received in dollars. This created issues where brand managers initially prepared their budgets in Egyptian pounds, per the various local quotations and pricing they received; this total, however, did not always equate to the amount assigned by the global HQ on the ERP, which was in dollars. The challenge for finance was to constantly adjust rates according to fluctuations in exchange rates and other market factors, such as inflation, which also resulted in chunks of the Egyptian pound budget being eaten away by these adjustments. It continues to be an ordeal, as concentrate pricing is also in US dollars while other transactions are in multiple currencies.
• The deployment of technology into everyday business is still not very mature in people's mindset in Egypt. For example, a lack of trust persisted, and the deployment of online signatures through the ERP protocol was still viewed with a suspicious eye. Middle managers, especially in the accounting unit, still requested a pen-written signature on paper; it made them feel secure. However, this required that paper copies of ERP documents be printed for managers to sign, which did not help in achieving the leaner, more streamlined operation the system was supposed to create.

End-User Support and Maintenance

protocol was still viewed with a suspicious eye. Middle managers especially in the accounting unit still requested a pen-written on paper signature-it made them feel secure. However this required that paper formats of ERP documents were printed and managers were requested put their pen signatures. This did not help in achieving a more lean and streamlined operation which the system was supposed to create. End-User Support and Maintenance

A few years ago, for an employee to report an IT problem they had to call the local help desk and technical staff will attend to the call to solve the problem for him or her. However, the company HQ has developed a new process that employees experiencing problems will have to make a call to call center, give their employee ID and describe the technical problem about the software or the hardware. The call center, remotely accessed, will give the employee a ticket number with the problem code and send them an email summarizing the problem. The problem could be simple enough to be solved over the phone (i.e., the center tells the employee on the phone what he or she could do to solve the problem). If it’s not then the center directs the problem to the local help desk to solve it. Then the center sends the employee an email asking if the problem has been solved or not to close the ticket. Some employees found this process difficult and time consuming

Future Measures to Enhance the Reporting Cycle and Recommendations The preceding analysis revealed that key success measures that may help the company to overcome the obstacles cited and leads to turning the IDB system into a success story include the following: •

1080

Executive management involvement and support

• • • •

• • •

Clear project ownership by including sales analysts in the project management team Proper planning Hard working and focus by all member countries’ analysts Clear communication between business and IT by setting meetings, discussing problems and analyzing issues and providing continuous feedback Clear role definition for both analysts and IT professionals Instituting efficient and effective change management process Proper and continuous staff and analysts training

To that end, the company is planning to install a new version of the IDB system in the upcoming years. The new system will allow users to create their own queries, with specific designs and more advanced sales breakdowns. The company is also planning to use more advanced analytical tools, such as the "Instant Visual Analysis" tool, a simple, versatile tool that will make volume analysis and graphic representation faster, easier and more flexible. The Instant Visual Analysis tool uses pivot table reports to help analyze and graph numerical data, answering questions, exhibiting trends, etc. With a few mouse clicks, users can see who sold the most where, which brands were the most successful, and which pack sold best.
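The kind of pivot-table exploration the planned tool promises can be approximated in a few lines; the sketch below only illustrates the style of question answered ("who sold the most where"), with invented data, and is not the Instant Visual Analysis tool itself.

```python
import pandas as pd

sales = pd.DataFrame({
    "bottler": ["Cairo", "Cairo", "Alexandria", "Alexandria"],
    "brand":   ["Cola",  "Orange", "Cola",      "Orange"],
    "volume":  [5200, 1800, 3100, 2400],
})

# Volume by bottler x brand, then the "who sold the most" style of answer.
pivot = sales.pivot_table(values="volume", index="bottler",
                          columns="brand", aggfunc="sum")
print(pivot)
print(pivot.sum(axis=1).idxmax())   # bottler with the highest total volume
print(pivot.sum(axis=0).idxmax())   # most successful brand overall
```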

REFERENCES

Alter, S. (1980). Decision support systems: Current practice and continuing challenges. Reading, MA: Addison-Wesley.

AMR Research (currently part of Gartner, Inc.). (2005). Market Analytix report: ERP 2004-2009.

Bancroft, N. (1996). Implementing SAP R/3. Greenwich, CT: Manning Publications Co.

Bingham, D. (1999). Food and beverage companies need to integrate information enterprise-wide. Beverage Online.

Brazel, J. F., & Dang, L. (2008). The effect of ERP system implementations on the management of earnings and earnings release dates. Journal of Information Systems, 22(2), 1–21. doi:10.2308/jis.2008.22.2.1

Chatterjee, D., Grewal, R., & Sambamurthy, V. (2002). Shaping up for e-commerce: Institutional enablers of the organizational assimilation of Web technologies. Management Information Systems Quarterly, 26(2), 65. doi:10.2307/4132321

Chou, D., Tripuramallu, H., & Chou, A. (2005). BI and ERP integration. Information Management & Computer Security, 13(5), 340–349. doi:10.1108/09685220510627241

CRM2day. (2004). Business intelligence. Retrieved from www.crm2day.com/bi

Datamonitor. (2001). Business intelligence: From data to profit. Retrieved from www.researchandmarkets.com

Davenport, T. (1998). Putting the enterprise into the enterprise system. Harvard Business Review, 76(4), 121–131.

DeLone, W., & McLean, E. (1992). Information systems success: The quest for the dependent variable. Information Systems Research, 3(1), 60–95. doi:10.1287/isre.3.1.60

DeLone, W., & McLean, E. (2003). The DeLone and McLean model of information systems success: A ten-year update. Journal of Management Information Systems, 19(4), 9–30.

Ding, D., & Chen, J. (2007). Supply chain coordination with contracts game between complementary suppliers. International Journal of Information Technology & Decision Making, 6(1), 163–175. doi:10.1142/S0219622007002332

Drucker, P. (1998). The next information revolution. Forbes. Retrieved from www.forbes.com

Fan, M., Stallaert, J., & Whinston, A. (2000). The adoption and design methodologies of component-based enterprise systems. European Journal of Information Systems, 9, 25–35. doi:10.1057/palgrave.ejis.3000343

Francalanci, C. (2001). Predicting the implementation effort of ERP projects: Empirical evidence on SAP R/3. Journal of Information Technology, 16(1), 33–48. doi:10.1080/02683960010035943

Fryling, M. (2005). ERP implementation dynamics. Information Science and Policy, University at Albany, State University of New York, October 2005.

Gable, G., Sedera, D., & Chan, T. (2003). Enterprise systems success: A measurement model. Proceedings of the 24th ICIS (pp. 576-591). Seattle, Washington.

Gable, G., van Den Heever, R., Erlank, S., & Scott, J. (2001). Large packaged application software maintenance: A research framework. Journal of Software Maintenance and Evolution: Research and Practice, 13(6), 351–371. doi:10.1002/smr.237

Holsapple, C., & Whinston, A. B. (1996). Decision support systems: A knowledge based approach. Minneapolis, MN: West Publishing.

Ifinedo, P. (2006a). Extending the Gable et al. enterprise systems success measurement model: A preliminary study. Journal of Information Technology Management, 17(1), 14-33.

Ifinedo, P., & Nahar, N. (2007). ERP system success: An empirical analysis of how two organizational stakeholder groups prioritize and evaluate relevant measures. Enterprise Information Systems, 1, 25–48. doi:10.1080/17517570601088539

Kallinikos, J. (2004). Deconstructing information packages: Organizational and behavioral implications of ERP systems. Information Technology & People, 17, 8–30. doi:10.1108/09593840410522152

Kamaladevi, B. (2010). Customer experience management in retailing. Business Intelligence Journal, 3(1), 37–54.

Krumbholz, M. (2000). Implementing enterprise resource planning packages in different corporate and national cultures. Journal of Information Technology, 15(4), 267–280. doi:10.1080/02683960010008962

Kumar, K., & Van Hillegersberg, J. (2000). ERP: Experience and evolution. Communications of the ACM, 43, 23–26.

Kumar, V., Maheshwari, B., & Kumar, U. (2001). An investigation of critical management issues in ERP implementation: Empirical evidence from Canadian organizations. Technovation, 23(10).

Light, B. (2001). The maintenance implications of the customization of ERP software. Journal of Software Maintenance and Evolution: Research and Practice, 13(6), 415–429. doi:10.1002/smr.240

Madapusi, A., & Kuo, C. (2007). Assessing data and information quality in ERP systems. Proceedings of the Decision Sciences Institute Annual Meeting, Arizona.

Madapusi, A., Kuo, C., & White, R. (2007). A critical factors approach to ERP information quality and decision quality. Proceedings of the Decision Sciences Institute Annual Meeting, Arizona.

Markus, M., & Robey, D. (1988). Information technology and organizational change: Causal structure in theory and research. Management Science, 34, 583–598. doi:10.1287/mnsc.34.5.583

McDonald, K., et al. (2002). Mastering SAP business information warehouse. Canada: Wiley Publishing.

Mensching, J., & Corbitt, G. (2004). ERP data archiving: A critical analysis. Journal of Enterprise Information Management, 17(2), 131–141. doi:10.1108/17410390410518772

Nah, F. H., Faja, S., & Cata, T. (2001). Characteristics of ERP software maintenance: A multiple case study. Journal of Software Maintenance and Evolution: Research and Practice, 13(6), 339–414. doi:10.1002/smr.239

Nicolaou, A. (2004). ERP system implementation drivers of post-implementation success. In Decision Support in an Uncertain and Complex World: The IFIP TC8/WG8.3 International Conference 2004 (pp. 589-597).

Nicolaou, A., & Bhattacharya, S. (2007). Organizational performance effects of ERP systems usage: The impact of post-implementation changes. International Journal of Accounting Information Systems, 7(1), 18–35. doi:10.1016/j.accinf.2005.12.002

O'Leary, D. (2002). Enterprise resource planning systems: Systems, life cycle, electronic commerce, and risk. Cambridge, UK: Cambridge University Press.

Olszak, C., & Ziemba, E. (2006). Business intelligence systems in the holistic infrastructure development supporting decision-making in organizations. Interdisciplinary Journal of Information, Knowledge, and Management, 1.

Olszak, C., & Ziemba, E. (2007). Approach to building and implementing business intelligence systems. Interdisciplinary Journal of Information, Knowledge, and Management, 2, 135–148.

Parr, A., & Schanks, G. (2000). A model of ERP project implementation. Journal of Information Technology, 15(4), 289–304. doi:10.1080/02683960010009051

Ptak, C., & Schragenheim, E. (2000). ERP: Tools, techniques and applications for integrating the supply chain. London, UK: St. Lucie Press/APICS Series on Resources Management.

Rasmussen, N., Goldy, P., & Solli, P. (2002). Financial business intelligence: Trends, technology, software selection and implementation. New York, NY: Wiley.

Robey, D., & Boudreau, M. (1999). Accounting for the contradictory organizational consequences of information technology. Information Systems Research, 10, 167–185. doi:10.1287/isre.10.2.167

Seddon, P. B. (1997). A respecification and extension of the DeLone and McLean model of IS success. Information Systems Research, 8(3), 240–253. doi:10.1287/isre.8.3.240

Sedera, D., Gable, G., & Chan, T. (2004). Measuring enterprise systems success: The importance of a multiple stakeholder perspective. Proceedings of the 12th European Conference on Information Systems (pp. 1-13). Turku, Finland.

Soh, C., & Tay-Yap, J. (2000). Cultural fits and misfits: Is ERP a universal solution? Communications of the ACM, 43(4), 47–51. doi:10.1145/332051.332070

Sprott, D. (2000). Componentizing the enterprise application packages. Communications of the ACM, 43(4), 63–90. doi:10.1145/332051.332074

Stedman, C. (1999, November 1). Failed ERP gamble haunts Hershey. Computerworld. Retrieved April 16, 2006, from www.computerworld.com

Stedman, C. (2002). Maximizing the ERP investment. Competitive Financial Operations: The CFO Project, 1, 1–6.

Xu, H., Nord, J., Brown, N., & Nord, D. (2002). Data quality issues in implementing an ERP. Industrial Management & Data Systems, 102(1), 47–58. doi:10.1108/02635570210414668

Yen, D. C., Chou, D. C., & Chang, J. (2002). A synergic analysis for Web-based enterprise resource planning systems. Computer Standards & Interfaces, 24(4), 337–346. doi:10.1016/S0920-5489(01)00105-2

KEY TERMS AND DEFINITIONS

Business Intelligence (BI): Refers to computer-based techniques used in analyzing business data, such as sales by products and/or departments, or associated costs and incomes. In addition, BI technologies provide historical, current, and predictive views of business operations. Common functions of business intelligence technologies are reporting, online analytical processing, analytics, data mining, business performance management, benchmarking, text mining, and predictive analytics.

Dashboard: The application of visual, iconic tools that indicate the status of a particular measurable quantity, event or value. Within the context of business performance measurement, the use of a dashboard may aid decision makers in quickly spotting the status of KPIs of interest to them.

Data Mining: The process of extracting patterns from data. Data mining is becoming an increasingly important tool for transforming data into information. It is commonly used in a wide range of profiling practices, such as marketing, surveillance, fraud detection and scientific discovery, and is often used to uncover patterns in data pertaining to functional or departmental data sets.

Data Warehouse: A repository (a collection of resources that can be accessed to retrieve information) of an organization's electronically stored data, designed to facilitate reporting and analysis. Data warehousing arises when an organization needs reliable, consolidated, unique and integrated analysis and reporting of its data resources, at different levels of aggregation.

Decision Support Systems (DSS): A class of computer-based information systems, including knowledge-based systems, that support decision-making activities. DSSs serve the management, operations, and planning levels of an organization and help to make decisions, which may be rapidly changing and not easily specified in advance.

Enterprise Resources Planning (ERP): An integrated computer-based software system used to manage an organization's internal and external resources. It is an architecture which facilitates the flow of information between all business functions within the boundaries of the organization and manages the interfaces with outside stakeholders.

Key Performance Indicators (KPI): Measures commonly used to help organizations define and evaluate how successful their businesses are, typically in terms of making progress towards long-term organizational goals. KPIs can be specified by answering the question, "What is really important to different organizational stakeholders?"

This work was previously published in Cases on Business and Management in the MENA Region: New Trends and Opportunities, edited by El-Khazindar Business Research and Case Center, pp. 156-176, copyright 2011 by Business Science Reference (an imprint of IGI Global).


Chapter 59

Sharing Scientific and Social Knowledge in a Performance Oriented Industry: An Evaluation Model

Haris Papoutsakis
Technological Education Institute of Crete, Greece

ABSTRACT

The chapter evaluates the contribution of shared knowledge and information technology to manufacturing performance. For this purpose, a theoretical model was built and tested in practice through a research study among manufacturing, quality and R&D groups. The social character of science is perceived as a matter of the aggregation of individuals, not their interactions, and social knowledge as simply the additive outcome of mostly scientists, members of the three groups, making sound scientific judgments. The study results verify the significant contribution of shared knowledge to manufacturing group performance. They also demonstrate that information technology notably influences manufacturing group performance and, in a less significant way, the sharing of knowledge. The study results are useful to researchers and the business community alike, as they may serve as a springboard for further empirical studies and can help put together strategies involving knowledge management and information technology.

INTRODUCTION

At the turn of the twenty-first century, many companies (BP, Canon, GlaxoSmithKline, Honda, Siemens and Xerox among them) tried, with varied achievement rates, to leverage knowledge assets by centralizing Knowledge Management (KM) functions or by investing heavily in Information Technology (IT) (Davenport and Prusak, 2000; Hansen and von Oetinger, 2001). In parallel, the number of new knowledge management articles, according to Despres and Chauvel (2000, p. 55), "... has more than doubled each year over the past decade".



Among them, quite a few have proposed and tested models for the management of knowledge, with or without the support of information technologies (Knight, 1999; Larsen et al, 1999; Liebowitz et al, 2000; Kingsley, 2002). A considerably smaller number of such studies have investigated how companies can leverage knowledge in order to improve business performance (Nelson and Cooprider, 1996; Chong et al, 2000; Firestone, 2001). Only one of the articles reviewed for this study (Lee and Choi, 2003) combines all three variables: KM, IT and performance. This is exactly the gap this chapter fills. Based on careful analysis of the above mentioned previous empirical studies, it builds and empirically tests a model that simultaneously explores the relationships among these three variables and their antecedents. The chapter is organized in six sections. In the following section, the theoretical framework is defined and a brief presentation is given of relevant previous empirical studies focused on the links between knowledge management, information technology and business performance. In section three, we situate our own model within the above framework; the variables and the investigation hypotheses are defined. In section four, the research methodology is presented and details are given on the questionnaires –the principal research instruments– and the indicators used for construct measurement. In section five, the investigation hypotheses are tested using regression analysis, and statistical data are given on questions not analyzed elsewhere. Finally, in section six, conclusions are summarized and recommendations are given for managers of collaborating groups in order to increase shared knowledge and positively affect manufacturing performance.

THEORETICAL BACKGROUND

In the relevant literature, most attempts to investigate the links between KM and IT that lead to improved business performance are made within the environment of the knowledge-creating company (Nonaka, 1991; Nonaka and Takeuchi, 1995). Building upon this pioneering work, Grant, in a series of articles (1995 with Baden-Fuller; 1996a; 1996b; 1997), and Sveiby (1997, 2001) presented in a very clear way the fundamentals of a knowledge-based theory of the firm. According to Grant (1997) –recapitulating his previous work– the knowledge-based view is founded on a set of basic assumptions. First, knowledge is a vital source of value added to business products and services and a key to gaining strategic competitive advantage. Second, explicit and tacit knowledge vary in their transferability, which also depends upon the capacity of the recipient to accumulate knowledge. Third, tacit knowledge rests inside individuals, who have a certain learning capacity. The depth of knowledge required for knowledge creation sometimes needs to be sacrificed to the breadth of knowledge that production applications require. Fourth, most knowledge, and especially explicit knowledge, when developed for a certain application, ought to be made available to additional applications, for reasons of economy of scale. Theoretically, our research stands upon the 'knowledge-based theory of the firm' (Grant, 1997; Sveiby, 2001). The fundamental problem in traditional management theory is how to align the objectives of workers with those of managers and stakeholders. In accordance with the knowledge-based view, "… if knowledge is the preeminent productive resource, and most knowledge is created by and stored within individuals, then employees are the primary stakeholders" (Grant 1997, p. 452). Under this perspective, management's principal challenge is to establish the mechanisms for collaborating individuals and groups to coordinate their activities in order to best integrate their knowledge into productive activity. Sveiby (2001) believes that people can use their competence to create value in two directions: by transferring and converting knowledge externally or internally for the organization they belong to.


When the managers of a firm direct the efforts of their employees internally, they create tangible goods and intangible structures such as better processes and new designs for products. When they direct their attention outwards, in addition to delivering goods and money they also create intangible structures, such as customer relationships, brand awareness, reputation and new experiences for the customers. For Fukuyama (1999), the existence of 'social capital' that serves as a glue to hold diverse constituencies together is a primary cause of success or failure of any organization. The World Bank defines social capital as "norms and social relations imbedded in social structures that enable people to coordinate actions and achieve desired goals", a definition that applies to countries, societies or organizations. It is here that social knowledge has an important role to play. Individuals develop social knowledge through their interactions with the social environment. Stable systems of social knowledge are organized around certain domains; in our study, the collaborating groups. According to Turiel (1983), the acquisition of social knowledge can be interpreted in two different ways: (i) it can be knowledge transmitted to the individual by other persons, in which case the knowledge acquired is dependent on what is transmitted; or (ii) it can be knowledge constructed by individuals specifically about certain social phenomena (p. 1). In an effort to capture the dialectic and dynamic relationship between the individual and social knowledge, Jovchelovitch (2007) develops a social-psychological approach in order to investigate knowledge in everyday life. In her framework, problems of social knowledge are discussed in relation to individual, social and collective representations. Knowledge represents at the same time subjective, inter-subjective and objective worlds (p. 168). It is under the above theoretical perspective that we review the literature relevant to our investigation in the following section.

Previous Empirical Studies

Linking knowledge management and information technologies with business performance has never been an easy task. Comparing KM projects to their two prevailing predecessors (total quality management and business process re-engineering), Armistead (1999) notices that authors on KM "… do not use the same hard measures of success consistently" (p. 143). He believes that for a knowledge-based view to be useful, it must help improve some key performance indicators like quality, flexibility and cost. Referring to manufacturing companies, he notes that operational processes which depend more on knowledge are expected to perform well against measurements of quality in consistency, while at the same time improving productivity. Our research focused on two basically diverse areas: the measurement –in terms of both qualitative and quantitative results– of a KM project's impact and, at the same time, the identification of the cause-effect relationship that exists between KM, IT, and overall business performance. Some previous studies captured KM contribution by focusing on intellectual capital measures (Larsen et al, 1999) or accounts and audits (Liebowitz et al, 2000), but both groups of authors question the generalizability of their studies. Other studies, criticizing conventional performance measures –such as Return On Investment (used by Anderson, 2002) and Economic Value Added, used by multinationals like The Coca-Cola Company– propose measures based on the Balanced Scorecard (Knight, 1999) or others more abstract and tailored to the company, like the Comprehensive Benefit Estimation (Firestone, 2001) and the Cost of Information (Kingsley, 2002). In a recent work, the relevant literature summarized above has been extensively reviewed (Papoutsakis and Salvador Valles, 2006). In most of the above empirical studies the role of shared knowledge among company departments is not consistent, despite the fact that the knowledge transfer process has been studied extensively.


Trust and influence have only been recognized as antecedents of shared knowledge by Nelson and Cooprider (1996), while Lee and Choi (2003) consider trust and information technology as knowledge creation enablers, among seven others. What is really missing is an integrative model combining shared knowledge and information technology with performance. Although several studies investigate the relationship between KM and performance (Nelson and Cooprider, 1996; Chong et al, 2000; Firestone, 2001) or IT and KM (Lee and Choi, 2003), they fail to explore the relationships among KM, IT and performance simultaneously. It is believed that if managers become conscious of the fact that these relationships have interactive features, they stand a much better chance of improving the performance of their department or company. Measuring the impact of shared knowledge and IT upon manufacturing performance is not an easy task, as it strongly affects the behaviour of managers and employees not only of the manufacturing group but also of the collaborating groups (in our case the quality and R&D groups). Regarding social knowledge, we have to consider that it exists in the relationships, not in the individuals themselves, and thus requires mutual commitment: if one party withdraws, it disappears. It is under this perspective that we have built and empirically tested the evaluation model proposed in the following section.

PROPOSED MODEL

Aiming to gain insight into the essential factors influencing manufacturing performance, we developed and tested a conceptual model containing a minimal set of selected theoretical constructs. We had three major concerns in building our research model. First, we did not want to propose a model that delineates every possible variable or process affecting manufacturing performance.


Second, we wanted to focus on shared knowledge as the leading expression of knowledge management among the manufacturing, quality and R&D groups of a firm. Third, information technology, in our model, has been perceived to affect both manufacturing performance and shared knowledge.

Assessing the type of knowledge to be shared was also an interesting question. Von Krogh, Ichijo & Nonaka (2000) define knowledge as a justified true belief: when somebody creates knowledge, he or she makes sense out of a new situation by holding justified beliefs and committing to them. The emphasis in this definition is on the conscious act of creating meaning. In our study, we focused on collective knowledge, which entails notions of collective belief, truth and justification (Corlett, 1996). Our analysis insisted on the particular conditions of inter-group, justified true acceptance that are necessary for collective knowledge. According to Corlett, "… what makes belief, acceptance, justification and knowledge collective is that they are the results of human decision-makers related to one another in groups…" (2007, p. 245). Obviously, each one represents his or her group's interests. The road to sharing knowledge lies through individuals, mostly scientists in our study, and is based upon building social relationships and trust, deep dialogue and creative abrasion. There is a need for diversity of ideas and an environment where failures and reflection are valued as learning enablers.

Science is the process used every day to logically complete thoughts through inference of facts determined by calculated experiments. As science itself has developed, the scientific knowledge so produced has gained broader usage among scientists. The development of scientific methods has made a significant contribution to our understanding of scientific knowledge. To be termed scientific, a method of inquiry must be based on the collection of data through observation and experimentation, and the formulation and testing of hypotheses. The social dimension of scientific knowledge is of significant importance as well. We perceive the social character of science as a matter of the aggregation of individuals, not their interactions, and social knowledge as simply the additive outcome of mostly scientists, members of the three groups, making sound scientific judgments. Philosophers concerned to defend the social character of knowledge and to explore the social dimension of scientific practice (Laudan, 1984; Brown, 1989; Goldman, 1995) have approaches that differ in their details, but they agree in stating that scientists are persuaded by what they regard as the best evidence or argument –the evidence most indicative of the truth by their lights– and in maintaining that arguments and evidence are the appropriate focus of attention for understanding the production of scientific knowledge. Opposing them, Jovchelovitch (2007) criticizes the narrow association of knowledge with rationalism in the sense of scientific knowledge, as a result of which scientific knowledge is viewed as more valid than everyday knowledge.

We have therefore opted for our model to highlight a few key factors that can explain a large proportion of the variation noted in manufacturing performance. We have modified the shared-knowledge model validated and used by Nelson & Cooprider (1996) and enhanced it with links allowing us to draw conclusions on the role and contribution of information technology as an enabler and facilitator of both manufacturing performance and shared knowledge. Thus, the proposed evaluation model is built to investigate cause-and-effect links between sharing knowledge, its components, information technology and manufacturing performance. Both general and multiplicative methods are used to measure the indicators, at least two for every construct, and path analysis has been chosen as the analytic technique in this study because it assesses causal relationships (Pedhazur, 1982; Wright, 1971). Pedhazur, building upon Wright, states that "…path analysis is not a method for discovering causes, but a method applied to causal models formulated by the researcher on the basis of knowledge and theoretical considerations" (p. 580). Path diagrams, although not essential for numerical analysis, are useful tools for displaying graphically the pattern of causal relations among the set of variables under consideration. In this respect we consider the model more appropriate than the intellectual capital or the tangible-and-intangible approaches used in other studies.

Despite the fact that, in recent years, social and behavioural scientists have shown a steadily growing interest in studying patterns of causation among variables, the concept of causation has generated a great deal of controversy among both philosophers and scientists. Nonetheless, causal thinking plays a very important role in scientific research. Even in the works of those scientists who strongly deny the use of the term causation, it is very common to encounter terms that indicate or imply causal thinking. Thus, we can conclude that scientists, in general, seem to have a need to resort to causal frameworks, even though on philosophical grounds they may have reservations about the concept of causation. Schematically, our empirical evaluation model illustrates the relationships among the five variables as shown in Figure 1. Our seven hypotheses correspond to the causal links of Figure 1 and derive from theoretical statements found in the literature on knowledge management and information systems and technology. In the following section, we shall elaborate upon the variables incorporated in our model and, at the same time, present our investigation hypotheses. Finally, it is important to bear in mind that path analysis is a method, and as such its valid application is subject to the competency of the researcher using it and the soundness of the theory being tested; it is the explanatory scheme of the researcher that determines the type of analysis to be applied to data, and not the other way around.


Figure 1. The shared knowledge and information technology evaluation model

VARIABLES AND HYPOTHESES

Shared Knowledge

Sharing of knowledge is a process distinct from managerial communication, which also deserves consideration. Nelson & Cooprider (1996, p. 411) define Shared Knowledge as "an understanding and appreciation among groups and their managers, for the technologies and processes that affect their mutual performance". Appreciation and understanding are the two core elements of shared knowledge. Appreciation among diverse groups must be characterized by sensitivity to the point of view and interpretation of the other group, in order to overcome the barriers caused by the different environments and languages used. A deeper level of knowledge must be shared in order to achieve mutual understanding, and this is often characterized as organizational knowledge (Badaracco, 1991). Lack of this organizational and cross-functionally shared knowledge may result in losses of manufacturing group performance, while its presence may lead to better performance. As we do not have a priori reasons to expect a different relationship, it is here that we ground our first hypothesis.

Hypothesis 1: Shared knowledge among Manufacturing, R&D and Quality groups, as perceived by the manufacturing organization, leads to improved manufacturing group performance.


In an effort to make the relationship between shared knowledge and manufacturing group performance more comprehensible, we shall now define the two components, or antecedents, of shared knowledge: Trust and Influence.

Trust

The significance of trust has been given considerable attention, and trust has even been described as a 'business imperative' (Davidow and Malone, 1992; Drucker, 1993, among others). In rather similar ways, trust has been defined as "a set of expectations shared by all those in an exchange" (Zucker, 1986), as "the expectation shared by the [involved] groups that they will meet their commitments to each other" (Nelson and Cooprider, 1996, p. 413), or as "… maintaining reciprocal faith in each other in terms of intention and behaviors" (Lee and Choi, 2003, p. 190). Szulanski (1996) empirically found that the lack of trust among employees is one of the key barriers to knowledge sharing and that the increase in knowledge sharing brought on by mutual trust results in knowledge creation. In the model proposed for this study, it is assumed that the Manufacturing, R&D and Quality groups work better in an atmosphere of mutual trust based on mutual commitment and a stable long-term relationship, which is the foundation for our conceptualization of trust.


We thus hypothesize that mutual trust is a determinant of shared knowledge, and it is here that we advance our second hypothesis.

Hypothesis 2: The perception of increased levels of mutual trust among Manufacturing, R&D and Quality groups leads to increased levels of shared knowledge among these groups.

Influence

As organizational groups engaged in joint work are often dependent upon each other, influence relationships are created. One way influence develops is through the law of reciprocity (Cohen and Bradford, 1989): people expect payback for contribution to an exchange. The perception of reciprocal benefits leads to mutual influence and success in future exchanges among the groups. Nelson and Cooprider (1996, p. 414) define mutual influence as "the ability of groups to affect the key policies and decisions of each other." Consequently, we expect the following relationship to hold true, and it is here that we base our third hypothesis.

Hypothesis 3: Increased levels of mutual influence among Manufacturing, R&D and Quality groups lead to increased levels of shared knowledge among these groups.

Two important aspects with regard to shared knowledge are demonstrated in the evaluation model used for this research (Figure 1). First, mutual trust and influence are presented as antecedents of shared knowledge; second, shared knowledge is presented as a mediating variable between mutual trust and influence on the one hand and manufacturing group performance on the other. Therefore, we can hypothesize:

Hypothesis 4: Shared knowledge acts as a mediating variable between mutual trust and influence and manufacturing performance.

As we have no a priori reasons to exclude that mutual trust and influence could also affect manufacturing performance directly, we introduce our fifth hypothesis.

Hypothesis 5: There is a positive relationship between mutual trust and manufacturing performance, as well as between mutual influence and manufacturing performance.

Information Technology

Davenport & Short (1990, p. 11) define Information Technology (IT) as "…the capabilities offered by computers, software applications, and telecommunications" and further explain that "IT should be viewed as more than an automating or mechanizing force; it can fundamentally reshape the way business is done" (p. 12) and that "IT can make it possible for employees scattered around the world to work as a team" (p. 19). Applegate, McFarlan & McKenney (1999, p. vii) identify IT as "…computing, communications, business solutions and services…" and further on (note on p. 3) explain that "…IT refers to technologies of computers and telecommunications (including data, voice, graphics, and full motion video)." In the new economy era, information technology has a very significant role to play in supporting both communication and, in particular, knowledge sharing. IT affects knowledge sharing in a variety of ways. IT facilitates rapid collection, storage, and exchange of knowledge on a scale not possible until recent times, thus fully supporting the knowledge sharing process (Roberts, 2000). Specially developed IT integrates fragmented flows of knowledge, eliminating, in this way, barriers to communication among departments (Gold et al, 2001). Advanced IT (like electronic whiteboarding and videoconferencing) encourages all forms of knowledge sharing and is not limited to the transfer of explicit knowledge only (Riggins and Rhee, 1999). Thus, we can hypothesize:


Hypothesis 6: There is a positive relationship between IT support and the knowledge sharing process.

The use of certain IT infrastructure, such as intranets, extranets, groupware and the Internet, has been evaluated in relation to knowledge sharing by means of an ad hoc question. IT, in our model, is perceived to affect manufacturing performance as well.

Manufacturing Performance

Under an industrial business management approach, manufacturing performance has three main activities: (i) the selection of goals; (ii) the consolidation of measurement information relevant to an organization's progress against these goals; and (iii) the interventions made by managers in light of this information with a view to improving future performance against these goals. Although presented here sequentially, typically all three activities run concurrently, with the interventions made by managers affecting the choice of goals, the measurement information monitored, and the activities being undertaken within the organization. For the purpose of our study, organizational stakeholders in every participating company have been questioned in order to assess the manufacturing group performance and, in addition, to compare the manufacturing unit under investigation with other units they have managed. Madnick (1991) points out the major ways in which IT support affects manufacturing group performance. First, IT provides opportunities for increased inter- and intra-organizational connectivity and thus increases both efficiency and effectiveness. Second, new IT architectures offer significant cost/performance and capacity advances. And finally, with IT support, adaptable organizational structures that lead to significant cost reductions are made possible.


As there are other variables (such as employees' competences and qualifications, raw material quality, the technology level of the machinery in use, etc.) which affect manufacturing group performance and are not included in our model, we can only hypothesize:

Hypothesis 7: There is a positive relationship between IT support and the manufacturing group performance.

The use of four IT functions (coordination of business tasks, support of decision making, facilitation of teamwork, and access to information in databases) has been evaluated in relation to manufacturing performance by means of an ad hoc question. The five variables incorporated in our model are structured upon a socio-technical perspective that adopts a holistic approach (Pan and Scarbrough, 1998). Based on this view, mutual trust and influence, related to the organizational structure and culture as well as to the employees, are considered social variables while, on the other hand, IT is considered a technical variable. For purposes of clarity, most studies consider the impact of social and technical variables independently, a precaution we also adopt in this study. In the next section, we present the methodology of our research.

RESEARCH DESIGN

In an ideal situation, investigation samples are selected randomly. This is done, among other reasons, so that the external validity criteria are a priori fulfilled. The maxim applies to the selection of companies and manufacturing units and, to a certain extent, to the selection of the individuals who answer the questionnaires. As the sample of our study included every company that accepted to participate, we cannot disregard a possible selection bias. In the end, 51 medium- to large-size industrial companies, representing five sectors (alimentation, automotive, chemical and pharmaceutical, electro-mechanical, and textile), participated in the research. The unit of analysis is the manufacturing group, since the intent of this study is to explain the relationship of organizational subunits (the three collaborating groups) rather than that of individuals. The size of the company has been used as a criterion, and it was convenient that several of the selected companies had multiple manufacturing groups (or departments/lines, as they were named) that cooperated with a central R&D and/or quality group. This allowed the research to be addressed to a large number of manufacturing groups, out of which 112 participated by responding to the relevant questionnaires. Table 1 shows the industrial sectors represented, the number of companies contacted and participating, as well as the identified and participating manufacturing units for each one of them. The final sample size of 112 manufacturing units is considered sufficient in order to perform path analysis (Pedhazur, 1982), and the participation rates achieved in our study (62% at company level and 68% at the unit-of-analysis level) are considered satisfactory (Cook and Campbell, 1979). The research respondents were chosen based on the key-informant methodology developed by Phillips and Bagozzi (1986) and included –for each company– manufacturing, R&D and quality group managers or their deputies, as well as senior managers. As the measurement of organizational characteristics requires research methods different from those used for measuring the characteristics of individuals, the key-informant methodology is a frequently adopted approach.

Two symmetrical relationship questionnaires, worded in reverse form, were addressed to Production and Quality or R&D managers –and their assistants– and aimed at portraying the opinion and attitude of the two collaborating groups towards each other in terms of sharing knowledge. In addition, the role and level of contribution of Information Technology, both as a tool and/or enabler in supporting knowledge sharing among the collaborating groups, was investigated, and a last, ad hoc question evaluated the use of commonly used IT infrastructure for inter-firm knowledge sharing. A third, performance questionnaire –attempting to measure manufacturing group performance– was addressed to senior managers or their assistants. They were asked to compare the manufacturing group in question to other comparable manufacturing groups they have managed. In addition, the level of contribution of Information Technology to manufacturing group performance was investigated and, again, a last ad hoc question evaluated the use of specific IT functions on four knowledge sharing issues closely related to the group performance.

Table 1. Study participants by sector, company and unit of analysis

Sector | Companies contacted | Companies participated | Manufacturing units identified | Manufacturing units participated
Alimentation | 26 | 14 | 47 | 31
Automotive | 8 | 6 | 25 | 15
Chemical & Pharmaceutical | 7 | 5 | 22 | 19
Electro-Mechanical | 25 | 18 | 54 | 35
Textile | 16 | 8 | 17 | 12
Total | 82 | 51 (62%) | 165 | 112 (68%)


The questions used, with their indicative numbers, are listed in Appendix A, where we analyze the indicators used for each construct measurement. The two relationship questionnaires were pilot tested using Production and Quality or R&D managers, and the performance questionnaire was tested using senior executives from a small group of companies not participating in the final phase of our research. Following the completion of each pilot questionnaire, the pilot test informant was debriefed to determine if any questions were confusing for any reason. They were also asked whether, in their opinion, any significant indicators had been left out of the questionnaire. Based on the results of the pilot test, a number of initially used questions were determined to be poor and were deleted or rephrased. The most important lessons learned through the design and pilot testing of the questionnaires are:

1. In designing the questions, it is essential to word them in as simple terms as possible and to anchor each question to one specific relationship;
2. Each question must be customized to include the exact name of the department, as it is used in the company in question.

Despite the above precautions, we found that the key informant does not always share the same understanding as the researcher regarding the terminology in use. Two types of measures have been used to assess the organizational characteristics of shared knowledge, mutual trust, mutual influence, information technology and manufacturing performance: general measures, where each informant is asked to assess the overall level of interaction for a specific characteristic of a particular relationship, and multiplicative (or interaction) measures, where each informant is asked, for example, to assess the role of the manufacturing and either the R&D or the quality group for each characteristic separately. Using the conceptualization of fit as interaction proposed by Venkatraman (1989), the measurements have been operationalized as "manufacturing role X R&D or quality role", by multiplying the two responses together.


There are a number of advantages to this measurement scheme, as indicated by Churchill (1979) and Campbell and Fiske (1959): (a) the two types of measures (general and multiplicative) can be thought of as different methods; (b) it provides a stronger test of the validity of the measurement scheme; and (c) it balances possible threats to validity inherent in either type alone. Manufacturing group performance has been conceptualized in two parts: operational and service manufacturing performance. Operational or 'inward' performance is operationalized as: (a) the quality of the manufacturing group's work product; (b) the ability of the manufacturing group to meet its organizational commitments; and (c) the ability of the manufacturing organization to meet its goals (first three questions of the performance questionnaire). Service or 'outward' performance is operationalized as: (a) the ability of the manufacturing group to react quickly to R&D and/or quality needs; (b) its responsiveness to the R&D and/or quality group; and (c) the contribution that the manufacturing group has made to the R&D and/or quality group's success in meeting its strategic goals (questions four to six of the performance questionnaire).
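To make the operationalization concrete, the following is a minimal sketch of the measurement scheme (our illustration, not the authors' instrument; the function names and sample responses are hypothetical) showing a general measure, a multiplicative 'fit as interaction' measure, and a construct mean computed from 7-point questionnaire responses:

```python
# Illustrative sketch of the measurement scheme described above.
# Names and sample values are hypothetical; responses use the 7-point
# scale of the relationship questionnaires (1 = extremely weak,
# 7 = extremely strong).

def multiplicative_indicator(mfg_response: float, partner_response: float) -> float:
    """'Fit as interaction' (Venkatraman, 1989): the manufacturing
    response multiplied by the R&D-or-quality response."""
    return mfg_response * partner_response

def construct(indicators: list[float]) -> float:
    """A construct is the mean of its (at least two) indicators."""
    return sum(indicators) / len(indicators)

# One unit's shared-knowledge indicators (illustrative numbers only)
sk1 = 5.0                              # Indicator 1: general assessment
sk2 = multiplicative_indicator(5, 5)   # Indicator 2: understanding, A2 x B2
sk3 = multiplicative_indicator(5, 6)   # Indicator 3: appreciation, A3 x B3
skc = construct([sk1, sk2, sk3])       # Shared Knowledge Construct (SKC)
print(f"SKC = {skc:.3f}")              # -> SKC = 20.000
```

Note that multiplicative indicators range from 1 to 49 while general ones range from 1 to 7, so the construct mixes the two scales; this mirrors the construct statistics reported in Appendix A.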

ANALYSIS OF THE RESULTS

In order to assess the validity of our evaluation model (Figure 1), we empirically tested it using path analysis as the method for studying patterns of causation within the set of independent, mediating and dependent variables used in our evaluation model. For the causal model under consideration, the following preconditions, given by Pedhazur (1982), are essential:

1. The relations among the variables in the model are linear, additive and causal.
2. Each residual is not correlated with the variables that precede it in the model. This implies that:
   a. The residuals are not correlated among themselves;
   b. All relevant variables are included in the model;
   c. Each endogenous variable is perceived as a linear combination of exogenous and/or endogenous variables in the model plus a residual;
   d. Exogenous variables are treated as 'given'; when they are correlated among themselves, these correlations are also treated as 'given' and remain unanalyzed.
3. There is a one-way causal flow in the system.
4. The variables are measured on an interval scale.
5. The variables are measured without error.

And Pedhazur concludes that "…given the above assumptions, the method of path analysis reduces to the solution of one or more multiple linear regression analyses" (p. 580). It is under these assumptions that we have settled on Figure 2 as the research model for our investigation, with one exception: not all variables affecting manufacturing performance are included in the model. Essential variables like the skills and qualifications of workers, the technological level of the machinery in use, and the quality of the raw material –just to mention some very basic ones– have not been taken into consideration, simply because they do not relate to the focus of our investigation, which is the contribution of shared knowledge and information technology to manufacturing performance. This means that the result of the regression of manufacturing performance on shared knowledge can only be considered a partial causal effect. Two multiple regressions were run, one for each of the two dependent variables: manufacturing performance and shared knowledge. Testing the hypotheses requires testing the significance of paths I, II, III, Va, Vb, VI and VII, as presented in Figure 1.

Figure 2. Regressions in the evaluation model


The results of this analysis are schematically shown in Figure 2 and in the generic regression equations below.

For manufacturing performance:

MPC = α + β1 SKC + β2 MTC + β3 MIC + β4 ITmpC + e (8.1)

For shared knowledge:

SKC = α + β1 MTC + β2 MIC + β3 ITskC + e (8.2)

Two points need to be clarified in the above equations:

1. The β's, the normalized path coefficients, indicate the direct impact of a variable hypothesized as a cause on a variable taken as an effect. Wright (1934) defines a path coefficient as "the fraction of the standard deviation of the dependent variable (with the appropriate sign) for which the designated factor [here, the independent or mediating variable] is directly responsible…" (p. 162). Under the previously analyzed preconditions, path coefficients take the form of ordinary least squares solutions for the β's (Pedhazur, 1982, pp. 582–584).
2. The third letter C, added to the two-letter acronym used for each one of the variables, indicates that we are referring to its construct. As at least two indicators have been used to assess every variable in the research model, the construct is the mean of these indicators. In the acronym of information technology, the subscripts mp and sk are used to distinguish (a) ITskC, the IT construct measured through the two relationship questionnaires, in reference to shared knowledge, from (b) ITmpC, the IT construct measured through the performance questionnaire, in reference to manufacturing performance. As these two types of questionnaires have been filled in by different key informants, we could not use a possible overall IT construct (ITC) produced as the mean of ITskC and ITmpC.

The regressions in the evaluation model have been conducted in hierarchical order. First, we examined the relationship between manufacturing performance and each one of the variables affecting it –shared knowledge, mutual trust and influence, and information technology– as described in the first regression equation. The resulting equation is:

MPC = 6.98 + 0.354 MTC – 0.0364 MIC + 0.225 SKC + 0.259 ITmpC + e1

At this point, and for a better understanding of the analysis that follows, some more statistical terms need to be clarified:

1. R², in the case of multiple independent variables, indicates the squared multiple correlation, i.e. the proportion of variance of the dependent variable accounted for by the independent variables.
2. The t-value (−∞ < t < +∞) determines the level of significance of the β's.
3. F, the ratio of the mean square regression to the mean square residual, provides a statistic for testing the null hypothesis. When the calculated F exceeds the tabled value of F, with the associated degrees of freedom and at a preselected level of significance p (i.e. p = 0.000), the null hypothesis is rejected. In our study the calculated F exceeds the tabled value for both models: F >> F(0.01; 4, 107) = 3.50 for the first regression equation, and F = 50.55 >> F(0.01; 3, 108) = 3.96 for the second regression equation.
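Since, under the preconditions above, path analysis reduces to ordinary least squares, the two equations can be estimated with standard tooling. The sketch below is ours, run on synthetic data (the study's dataset is not reproduced here), and it assumes the statsmodels library; the effect sizes are placeholders, not the study's estimates:

```python
# Illustrative estimation of equations (8.1) and (8.2) on synthetic data.
# Constructs and effect sizes are stand-ins, not the study's data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 112  # number of participating manufacturing units

# Synthetic constructs standing in for the questionnaire-derived ones
MTC, MIC, ITskC, ITmpC = (rng.normal(size=n) for _ in range(4))
SKC = 0.4 * MTC + 0.3 * MIC + 0.2 * ITskC + rng.normal(scale=0.5, size=n)
MPC = 0.35 * MTC + 0.22 * SKC + 0.26 * ITmpC + rng.normal(scale=0.5, size=n)

# Equation 8.2: shared knowledge on its hypothesized antecedents
fit_sk = sm.OLS(SKC, sm.add_constant(np.column_stack([MTC, MIC, ITskC]))).fit()

# Equation 8.1: manufacturing performance on shared knowledge and the rest
fit_mp = sm.OLS(MPC, sm.add_constant(np.column_stack([SKC, MTC, MIC, ITmpC]))).fit()

# The quantities discussed above: path coefficients (betas), R-squared,
# per-coefficient t-values, and the overall F statistic
print(fit_mp.params)    # alpha and the betas
print(fit_mp.rsquared)  # squared multiple correlation R^2
print(fit_mp.tvalues)   # level of significance of each beta
print(fit_mp.fvalue)    # compare against the tabled F(0.01; 4, 107) = 3.50
```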


The statistical results of the two regression equations and those of the confirmatory tests are presented in Appendix B.

LIMITATIONS AND FUTURE RESEARCH

We acknowledge two limitations of our study, the first of which is theoretical:

1. The development of mutual trust and influence leading to shared knowledge, and the influence of information technology, are all ongoing phenomena. In our study, these constructs were measured at a static point in time rather than as they develop. Future research could investigate the relationship of ongoing changes to manufacturing group performance, maintaining the same company sample. It would also be interesting to relate the changes noted over time with actual changes in both the social (mutual trust and influence) and the technical (information technology) subsystems within the organization.
2. The study was conducted in Spain. A new multinational study in three more European Union countries, namely Finland, Greece and Hungary, is currently under development, and we hope that it will further support our findings.

CONCLUSION

The results of this study demonstrate the positive contribution of shared knowledge and information technology to manufacturing performance. Based primarily on the above results and, to a certain extent, on the literature reviewed, we come to the following socio-technical conclusion: sharing knowledge in a meaningful manner requires a well-balanced merger of technology with the company's culture, in a way that creates an environment supporting collaboration. Trust has been identified, through our study, as one of the company's core values. Management has to create a climate of trust in the organization for knowledge sharing to become reality. In such an environment, scientists from different groups (Manufacturing, Quality and R&D) feel comfortable looking for others with the 'missing piece' of individual and social knowledge to share. As shown by this study, influence is the second necessary condition for, and can lead to, cooperative behavior among individuals and groups, especially where tacit knowledge has to be shared. It is only in such an environment that the IT made available may lead to innovative products. The findings of this study indicate that Manufacturing, Quality and R&D groups have the opportunity to develop mutual trust and influence through repeated periods of positive face-to-face or IT-based communication, social interaction and common goal accomplishment. Such behavioral features result in increased shared knowledge regarding the groups' common problems, procedures and know-how. It is clearly illustrated that it is in the hands of management to increase manufacturing performance by improving the channels through which individual and social knowledge are shared among the three groups and by selecting the information technologies that best fit the innovative efforts and competitive strategy of their organization.

REFERENCES

Anderson, M. (2002). Measuring Intangible Value: The ROI of Knowledge Management. Retrieved July 12, 2004, from http://www1.astd.org/news_letter/november/Links/anderson.html
Applegate, L. M., McFarlan, F. W., & McKenney, J. L. (1999). Corporate Information Systems Management: Text and Cases (5th ed.). USA: Irwin/McGraw-Hill.


Armistead, C. (1999). Knowledge Management and Process Performance. Journal of Knowledge Management, 3(2), 143–154. doi:10.1108/13673279910275602
Badaracco, J. (1991). The Knowledge Link: How Firms Compete through Strategic Alliances. Boston, MA: Harvard Business School Press.
Brown, J. (1989). The Rational and the Social. London: Routledge.
Byrne, B. M. (1988). Measuring Adolescent Self-Concept: Factorial Validity and Equivalency of the SDQ III Across Gender. Multivariate Behavioral Research, 23(7), 361–375. doi:10.1207/s15327906mbr2303_5
Campbell, D. T., & Fiske, D. W. (1959). Convergent and Discriminant Validation by the Multitrait-Multimethod Matrix. Psychological Bulletin, 56(2), 81–105. doi:10.1037/h0046016
Chong, C. W., Holden, T., Wilhelmij, P., & Schmidt, R. A. (2000). Where Does Knowledge Management Add Value? Journal of Intellectual Capital, 1(4), 366–380. doi:10.1108/14691930010359261
Churchill, G. A. (1979). A Paradigm for Developing Better Measures of Marketing Constructs. Journal of Marketing Research, 16(1), 64–73. doi:10.2307/3150876
Cohen, A. R., & Bradford, D. L. (1989). Influence without Authority: The Use of Alliances, Reciprocity and Exchange to Accomplish Work. Organizational Dynamics, 17(3), 4–18. doi:10.1016/0090-2616(89)90033-8
Cook, T. D., & Campbell, D. T. (1979). Quasi-Experimentation: Design & Analysis Issues for Field Settings. Boston, MA: Houghton Mifflin Company.
Corlett, J. A. (1996). Analyzing Social Knowledge. Totowa: Rowman & Littlefield Publishers.
Corlett, J. A. (2007). Analyzing Social Knowledge. Social Epistemology, 21(3), 231–247. doi:10.1080/02691720701674049
Davenport, T. H., & Prusak, L. (2000). Working Knowledge: How Organizations Manage What They Know. Cambridge, MA: Harvard Business School Press.
Davenport, T. H., & Short, J. E. (1990). The New Industrial Engineering: Information Technology and Business Process Redesign. Sloan Management Review, 31(Summer), 11–27.
Davidow, W. H., & Malone, M. (1992). The Virtual Corporation. London: Harper Collins.
Despres, C., & Chauvel, D. (2000). A Thematic Analysis of the Thinking in Knowledge Management. In Despres, C., & Chauvel, D. (Eds.), Knowledge Horizons: The Present and the Promise of Knowledge Management. Boston, MA: Butterworth-Heinemann.
Draper, N. R., & Smith, H. (1980). Applied Regression Analysis (2nd ed.). New York: John Wiley & Sons.
Firestone, J. M. (2001). Estimating Benefits of Knowledge Management Initiatives: Concepts, Methodology and Tools. Journal of the KMCI, 1(3), 110–129.
Fukuyama, F. (1999). The Great Disruption. Simon & Schuster.
Gold, A. H., Malhotra, A., & Segars, A. H. (2001). Knowledge Management: An Organizational Capabilities Perspective. Journal of Management Information Systems, 18(1), 185–214.
Goldman, A. (1995). Psychological, Social and Epistemic Factors in the Theory of Science. In M. Forbes & R. Burian (Eds.), PSA 1994: Proceedings of the 1994 Biennial Meeting of the Philosophy of Science Association (pp. 277–286). East Lansing, MI: Philosophy of Science Association.
Grant, R. M. (1996a). Prospering in Dynamically-Competitive Environments: Organizational Capability as Knowledge Integration. Organization Science, 7(4), 375–387. doi:10.1287/orsc.7.4.375
Grant, R. M. (1996b). Towards a Knowledge-Based Theory of the Firm. [Special Issue: Knowledge and the Firm]. Strategic Management Journal, 17(4), 109–122.
Grant, R. M. (1997). Knowledge-Based View of the Firm: Implications for Management Practice. Long Range Planning, 30(3), 450–454. doi:10.1016/S0024-6301(97)00025-3
Grant, R. M., & Baden-Fuller, C. (1995). A Knowledge-Based Theory of Inter-firm Collaboration. In Academy of Management Best Papers Proceedings.
Hansen, M. T., & von Oetinger, B. (2001). Introducing T-Shaped Managers: Knowledge Management's Next Generation. Harvard Business Review, 79(3), 107–116.
Jovchelovitch, S. (2007). Knowledge in Context: Representations, Community and Culture. London: Routledge.
Kingsley, M. (2002). Measuring the Return on Knowledge Management. Retrieved July 14, 2004, from http://www.llrx.com/features/kmroi.html
Knight, D. J. (1999). Performance Measures for Increasing Intellectual Capital. Strategy and Leadership, 27(2), 22–27. doi:10.1108/eb054632
Larsen, H. T., Bukh, P. N. D., & Mouritsen, J. (1999). Intellectual Capital Statements and Knowledge Management: 'Measuring', 'Reporting', 'Acting'. Australian Accounting Review, 9(3), 15–26. doi:10.1111/j.1835-2561.1999.tb00113.x
Laudan, L. (1984). The Pseudo-Science of Science? In Brown, J. (Ed.), Scientific Rationality: The Sociological Turn (pp. 41–74). Dordrecht, Holland: Reidel.
Lee, H., & Choi, B. (2003). Knowledge Management Enablers, Processes, and Organizational Performance: An Integrative View and Empirical Study. Journal of Management Information Systems, 20(1), 179–228.
Liebowitz, J., Rubenstein-Montano, B., McCaw, D., Buchwalter, J., Browning, C., Newman, B., & Rebeck, K., Knowledge Management Methodology Team. (2000). The Knowledge Audit. Knowledge and Process Management, 7(1), 3–10.
Madnick, S. E. (1991). The Information Technology Platform. In Morton, S. (Ed.), The Corporation of the 1990s (pp. 27–60). New York: Oxford University Press.
Nelson, K. M., & Cooprider, J. G. (1996). The Contribution of Shared Knowledge to IS Group Performance. Management Information Systems Quarterly, 20(4), 409–429. doi:10.2307/249562
Neter, J., Kutner, M. H., Nachtsheim, C. J., & Wasserman, W. (1996). Applied Linear Statistical Models. USA: Irwin.
Nonaka, I. (1991). The Knowledge-Creating Company. Harvard Business Review, 69(6), 96–104.
Nonaka, I., & Takeuchi, H. (1995). The Knowledge-Creating Company. New York: Oxford University Press.
Nunnally, J. C. (1978). Psychometric Theory (2nd ed.). New York: McGraw-Hill.
Pan, S., & Scarbrough, H. (1998). A Socio-technical View of Knowledge-sharing at Buckman Laboratories. Journal of Knowledge Management, 2(1), 55–66. doi:10.1108/EUM0000000004607
Papoutsakis, H., & Salvador Valles, R. (2006). Linking Knowledge Management and Information Technology to Business Performance: A Literature Review and a Proposed Model. Journal of Knowledge Management Practice, 7(1).
Pedhazur, E. J. (1982). Multiple Regression in Behavioral Research. New York: CBS College Publishing.
Phillips, L. W., & Bagozzi, R. P. (1986). On Measuring Organizational Properties of Distributional Channels: Methodology Issues in the Use of Key Informants. Research in Marketing, 8, 313–369.
Popper, K. R. (1959). The Logic of Scientific Discovery. New York: Basic Books.
Riggins, F. G., & Rhee, H. (1999). Developing the Learning Network Using Extranets. International Journal of Electronic Commerce, 4(1), 65–83.
Roberts, J. (2000). From Know-how to Show-how? Questioning the Role of Information and Communication Technologies in Knowledge Transfer. Technology Analysis and Strategic Management, 12(4), 429–443. doi:10.1080/713698499
Sveiby, K. E. (1997). The New Organizational Wealth: Managing and Measuring Knowledge-Based Assets. Berrett-Koehler Publishers Inc.
Sveiby, K. E. (2001). A Knowledge-Based Theory of the Firm to Guide Strategy Formulation. Journal of Intellectual Capital, 2(4), 344–358. doi:10.1108/14691930110409651
Szulanski, G. (1996). Exploring Internal Stickiness: Impediments to the Transfer of Best Practice within the Firm. Strategic Management Journal, 17(10), 27–43.
Turiel, E. (1983). The Development of Social Knowledge: Morality & Convention. Cambridge, UK: Cambridge University Press.
Venkatraman, N. (1989). The Concept of Fit in Strategy Research: Toward Verbal and Statistical Correspondence. Academy of Management Review, 14(3), 423–444. doi:10.2307/258177
von Krogh, G., Ichijo, K., & Nonaka, I. (2000). Enabling Knowledge Creation: How to Unlock the Mystery of Tacit Knowledge and Release the Power of Innovation. New York: Oxford University Press.
Wright, S. (1934). The Method of Path Coefficients. Annals of Mathematical Statistics, 5, 161–215. doi:10.1214/aoms/1177732676
Wright, S. (1971). Path Coefficients and Path Regressions: Alternative or Complementary Concepts? In Blalock, H. M. (Ed.), Causal Models in the Social Sciences (pp. 101–114). Chicago: Aldine Publishing Co.
Zucker, L. (1986). Production of Trust: Institutional Sources of Economic Structure, 1840–1920. In B. M. Staw & L. L. Cummings (Eds.), Research in Organizational Behavior, 8, 53–111.

KEY TERMS AND DEFINITIONS

Influence: Nelson and Cooprider (1996, p. 414) define mutual influence as "the ability of groups to affect the key policies and decisions of each other." As organizational groups engaged in joint work are often dependent upon each other, influence relationships are created. One way influence develops is through the law of reciprocity: people expect payback for contribution to an exchange. The perception of reciprocal benefits leads to mutual influence and success in future exchanges among the groups. In our study, trust and influence have been recognized as antecedents of shared knowledge.

Information Technology: Davenport & Short (1990, p. 11) define Information Technology (IT) as "…the capabilities offered by computers, software applications, and telecommunications" and further explain that "IT should be viewed as more than an automating or mechanizing force; it can fundamentally reshape the way business is done" (p. 12) and that "IT can make it possible for employees scattered around the world to work as a team" (p. 19). Applegate, McFarlan & McKenney (1999, p. vii) identify IT as "…computing, communications, business solutions and services…" and further on (note on p. 3) explain that "…IT refers to technologies of computers and telecommunications (including data, voice, graphics, and full motion video)."


Performance: Under an industrial business management approach, manufacturing performance has three main activities: (i) the selection of goals; (ii) the consolidation of measurement information relevant to an organization's progress against these goals; and (iii) the interventions made by managers in light of this information with a view to improving future performance against these goals. Although presented here sequentially, typically all three activities run concurrently, with the interventions made by managers affecting the choice of goals, the measurement information monitored, and the activities being undertaken within the organization.

Scientific Knowledge: Science is the process used every day to logically complete thoughts through inference of facts determined by calculated experiments. As science itself has developed, the scientific knowledge so produced has gained broader usage among scientists. The development of scientific methods has made a significant contribution to our understanding of scientific knowledge. To be termed scientific, a method of inquiry must be based on the collection of data through observation and experimentation, and the formulation and testing of hypotheses.

Shared Knowledge: "… an understanding and appreciation among [collaborating] groups and their managers, for the technologies and processes that affect their mutual performance" (Nelson & Cooprider, 1996, p. 411). Appreciation and understanding are the two core elements of shared knowledge. Appreciation among diverse groups must be characterized by sensitivity to the point of view and interpretation of the other group, in order to overcome the barriers caused by the different environments and languages used. A deeper level of knowledge must be shared in order to achieve mutual understanding, and this is often characterized as organizational knowledge (Badaracco, 1991).

Social Knowledge: Individuals develop social knowledge through their interactions with the social environment. Stable systems of social knowledge are organized around certain domains; in our study, the collaborating groups. According to Turiel (1983), the acquisition of social knowledge can be interpreted in two different ways: (i) it can be knowledge transmitted to the individual by other persons, in which case the knowledge acquired is dependent on what is transmitted; or (ii) it can be knowledge constructed by individuals specifically about certain social phenomena. The social dimension of scientific knowledge is of significant importance as well. We perceive the social character of science as a matter of the aggregation of individuals, not their interactions, and social knowledge as simply the additive outcome of mostly scientists, members of the three groups, making sound scientific judgments.

Trust: Trust has been defined as "a set of expectations shared by all those in an exchange" (Zucker, 1986), as "the expectation shared by the [involved] groups that they will meet their commitments to each other" (Nelson and Cooprider, 1996, p. 413), or as "… maintaining reciprocal faith in each other in terms of intention and behaviors" (Lee and Choi, 2003, p. 190). The significance of trust has been given considerable attention, and trust has even been described as a 'business imperative' (Davidow and Malone, 1992; Drucker, 1993, among others).


APPENDIX A: QUESTIONNAIRES AND CONSTRUCT MEASUREMENT

For reasons of economy of space, the three questionnaires (Relationship Questionnaires Type A and B and Performance Questionnaire Type C) are not presented separately. All the questions are listed, with their indicative numbers, in the analysis of the indicators used for each construct measurement. In every question below, titles in brackets were customized to reflect the exact names of the participating organizations and functional groups, as they are used in every firm.

The Relationship Questionnaires (Type A and B) included twelve questions aiming to measure:
◦ The dependent or mediating variable Sharing Knowledge (3 questions)
◦ The independent variable Mutual Trust (2 questions)
◦ The independent variable Mutual Influence (4 questions)
◦ The role and level of contribution of Information Technology (ITsk), both as a tool and/or enabler, in supporting sharing knowledge among the Manufacturing, Quality and/or R&D groups (2 questions)
◦ The use of IT infrastructure under the above described concept (1 question with multiple sub-questions; results are given in pie-chart form and are not presented here)

1. Please characterize the general working relationship that currently exists between the [Manufacturing] group and the [Quality or R&D] group → (Questionnaire Type A), or between the [Quality or R&D] group and the [Manufacturing] group → (Questionnaire Type B). Use Table 2 to measure the constructs.

Table 2.
1 Extremely Weak | 2 Weak | 3 Moderately Weak | 4 About Average | 5 Moderately Strong | 6 Strong | 7 Extremely Strong

Shared Knowledge* The three indicators of shared knowledge have been designed to assess the level of understanding or appreciation which the members of the three groups have of each other's work environments. Indicators 1 and 3 assess the level of appreciation that each participant has for what their partners (in the other group) have accomplished, using general and multiplicative assessments respectively. The second indicator measures the level of understanding that the members of the three groups have of each other's work environments.


Shared Knowledge Indicator 1: (General Assessment, Mean 5.2991; SD 0.6957; Range 4) A1/B1: The level of appreciation that the [Manufacturing] group and the [Quality or R&D] group have for each other’s accomplishments is: A1: (Mean 5.35714; SD 0.79250; Range 4) B1: (Mean 5.24107; SD 0.84091; Range 4) Shared Knowledge Indicator 2: (Multiplicative Assessment, Mean 25.152; SD 8.604; Range 44) The product of the responses to the following: A2: The level of understanding of the [Quality or R&D] group for the work environment (problems, tasks, roles, etc) of the [Manufacturing] group is: (Mean 4.84821; SD 1.10045; Range 6) B2: The level of understanding of the [Manufacturing] group for the work environment (problems, tasks, roles, etc) of the [Quality or R&D] group is: (Mean 5.17857; SD 0.91252; Range 5) Shared Knowledge Indicator 3: (Multiplicative Assessment, Mean 26.652; SD 8.157; Range 40) The product of the responses to the following: A3: The level of appreciation that the [Quality or R&D] group has for the accomplishments of the [Manufacturing] group is: (Mean 5.07143; SD 0.97458; Range 4) B3: The level of appreciation that the [Manufacturing] group has for the accomplishments of the [Quality or R&D] group is: (Mean 5.17857; SD 0.91252; Range 5) Shared Knowledge Construct: The mean of the above indicators (Mean 19.034; SD 5.180; Range 23.667).
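The arithmetic behind these indicators is compact but easy to misread in prose. The following minimal Python sketch, using hypothetical response values (the raw questionnaire data are not reproduced in this appendix), shows how a general assessment, a multiplicative assessment, and a construct mean would be computed from paired Type A and Type B responses.

```python
import numpy as np

# Hypothetical 7-point responses, one (Type A, Type B) pair per manufacturing
# unit; the study's actual data are not published in the appendix.
a1 = np.array([5, 6, 5, 4])  # A1: appreciation, rated on Questionnaire A
b1 = np.array([5, 5, 6, 5])  # B1: appreciation, rated on Questionnaire B
a2 = np.array([5, 4, 5, 5])  # A2: understanding, rated on Questionnaire A
b2 = np.array([5, 5, 6, 4])  # B2: understanding, rated on Questionnaire B

# General assessment: the plain average of the paired items (Indicator 1).
sk1 = (a1 + b1) / 2

# Multiplicative assessment: the product of the paired items (Indicator 2).
# A product rewards balanced relationships: 5 x 5 = 25 exceeds 7 x 3 = 21.
sk2 = a2 * b2

# The construct is the mean of its indicators, unit by unit; a third
# indicator would simply join this mean.
skc = (sk1 + sk2) / 2

print(skc.mean(), skc.std(ddof=1))  # compare against the reported Mean/SD
```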

Mutual Trust* The two indicators of mutual trust measure the extent to which the two partner groups trust each other. The first indicator directly assesses the level of trust between the groups, through a general assessment. The second indicator is a multiplicative assessment that evaluates the reputation of each group for meeting its commitments. Mutual Trust Indicator 1: (General Assessment, Mean 5.4509; SD 0.8620; Range 4) A4/B4. The level of trust that exists between the [Manufacturing] group and the [Quality or R&D] group is: A4: Mean 5.54464; SD 1.10599; Range 5 B4: Mean 5.35714; SD 0.92860; Range 4 Mutual Trust Indicator 2: (Multiplicative Assessment, Mean 28.304; SD 8.374; Range 43)


The product of the responses to the following: A5: The reputation of the [Quality or R&D] group for meeting its commitments to the [Manufacturing] group is: Mean 5.44643; SD 0.96646; Range 4 B5: The reputation of the [Manufacturing] group for meeting its commitments to the [Quality or R&D] group is: Mean 5.13393; SD 0.97256; Range 6 Mutual Trust Construct: The mean of the above indicators, Mean 16.877; SD 4.452; Range 21.5.

Mutual Influence* The three indicators of mutual influence assess the level of influence and the ability to affect that members of the groups have on each other's key decisions and policies. The first indicator directly assesses the level of influence and the ability to affect between the groups, through a general assessment. The second indicator is a multiplicative assessment that evaluates the level of influence that the members of the groups have on each other's key decisions and policies. The third indicator is a multiplicative assessment that evaluates the ability to affect that the members of the groups have on each other's key decisions and policies.

Mutual Influence Indicator 1: (General Assessment, Mean 4.8973; SD 0.7478; Range 3.75) The average of the responses to the following:
A6/B6: In general, the level of influence that members of the [Manufacturing] group and the [Quality or R&D] group have on each other's key decisions and policies is: A6: Mean 5.01786; SD 0.97705; Range 5; B6: Mean 4.85714; SD 0.98509; Range 5
A7/B7: In general, the ability of members of the [Manufacturing] group and the [Quality or R&D] group to affect each other's key decisions and policies is: A7: Mean 5.00000; SD 1.04838; Range 5; B7: Mean 4.71429; SD 1.06904; Range 5

Mutual Influence Indicator 2: (Multiplicative Assessment, Mean 22.089; SD 7.986; Range 33) The product of the responses to the following:
A8: In general, the level of influence that members of the [Quality or R&D] group have on key decisions and policies of the [Manufacturing] group is: Mean 4.81250; SD 0.92543; Range 4
B8: In general, the level of influence that members of the [Manufacturing] group have on key decisions and policies of the [Quality or R&D] group is: Mean 4.50893; SD 1.17017; Range 6

Mutual Influence Indicator 3: (Multiplicative Assessment, Mean 22.911; SD 7.905; Range 33)


The product of the responses to the following:
A9: In general, the ability of members of the [Quality or R&D] group to affect key policies and decisions of the [Manufacturing] group is: Mean 4.93750; SD 0.84129; Range 3
B9: In general, the ability of members of the [Manufacturing] group to affect key policies and decisions of the [Quality or R&D] group is: Mean 5.57143; SD 1.19845; Range 5

Mutual Influence Construct: The mean of the above indicators, Mean 16.632; SD 5.099; Range 22.750.

(*) The questionnaire items for shared knowledge, mutual trust and mutual influence used in our study had been validated and used by Nelson and Cooprider (1996) in their exploration of the concept of shared knowledge between Information Systems (IS) groups and their line customers as a contributor to IS performance.

Information Technology and Sharing Knowledge (ITsk)

By means of the relationship questionnaires (Type A and B), we measure the role and level of contribution of IT in supporting shared knowledge. We thus use the marker (sk) to distinguish these from the IT indicators used in the performance questionnaire.

ITsk Indicator 1: (Multiplicative Assessment, Mean 27.732; SD 8.514; Range 40) The product of the responses to the following:
A.10: In general, the role and the level of contribution of Information Technology (IT), as a tool and/or enabler, in supporting shared knowledge between the [Manufacturing] group and the [Quality or R&D] group is: (Mean 5.25893; SD 0.8776; Range 4)
B.10: In general, the role and the level of contribution of Information Technology (IT), as a tool and/or enabler, in supporting shared knowledge between the [Quality or R&D] group and the [Manufacturing] group is: (Mean 5.19820; SD 1.10223; Range 5)

ITsk Indicator 2: (Multiplicative Assessment, Mean 29.223; SD 8.379; Range 33) The product of the responses to the following:
A.11: In general, the use of the Information Technology (IT) infrastructure in the [Manufacturing] group is: (Mean 5.21429; SD 0.90473; Range 5)
B.11: In general, the use of the Information Technology (IT) infrastructure in the [Quality or R&D] group is: (Mean 5.54128; SD 0.95774; Range 4)

Information Technology and Sharing Knowledge Construct (ITskC): The mean of the above indicators, Mean 28.478; SD 7.601; Range 34.

2. The Performance Questionnaire (Type C) included nine questions aiming to measure:
▪ Operational manufacturing performance (3 questions)
▪ Service manufacturing performance (3 questions)


▪ The level of contribution of Information Technology (ITmp) to Manufacturing group performance (2 questions)
▪ The use of IT functions under the above described concept (1 question with multiple sub-questions; results are given in pie-chart form and are not presented here)

The following questions ask you to compare the [Manufacturing] group to other such Manufacturing groups. In relation to other comparable groups you have observed, how does the [Manufacturing] group rate on the following? Use Table 3 to measure the constructs.

Table 3.
1 Non-Existent | 2 Very Weak | 3 Weak | 4 About Average | 5 Strong | 6 Very Strong | 7 Extremely Strong

Manufacturing Performance

The indicators used to measure the two constructs of manufacturing performance in our study are given in detail below. For reasons related to our initial study, we treated the answers separately (A for Manufacturing and B for Quality or R&D stakeholders), although this does not affect the results here. Because in approximately 95 per cent of the manufacturing units under investigation the two stakeholders who completed the performance questionnaire were affiliated one with Production and the other with Quality or R&D (in most cases Production or Quality Directors), we used multiplicative assessments of interaction for the questions relating manufacturing performance to collaboration among the groups.

Operational Manufacturing Performance Operational MP Indicator 1: (Multiplicative Assessment) The product of the two stakeholders’ responses (from Manufacturing and Quality or R&D) to the following: C1: In general, the quality of the work produced by the [Manufacturing] group for the [Quality or R&D] group is: CA1: Mean 5.29464; SD 0.77852; Range 4 CB1: Mean 5.50000; SD 0.69749; Range 3 Operational MP Indicator 2: (General Assessment) The average of the responses to the following: C2: In general, the ability of the [Manufacturing] group to meet its organizational commitments (such as project schedules and budget) is:


CA2: Mean 5.33929; SD 0.87563; Range 5 CB2: Mean 5.33929; SD 0.72972; Range 3 Operational MP Indicator 3: (General Assessment) The average of the responses to the following: C3: In general, the ability of the [Manufacturing] group to meet its goals is: CA3: Mean 5.41964; SD 0.74300, Range 3 CB3: Mean 5.37500; SD 0.77256; Range 3 Operational MP Construct: The mean of the above indicators, Mean 13.385; SD 2.641; Range 14.333.

Service Manufacturing Performance Service MP Indicator 1: (Multiplicative Assessment) The product of the two stakeholders’ responses (from Manufacturing and Quality or R&D) to the following: C4: In general, the ability of the [Manufacturing] group to react quickly to the [Quality or R&D] group’s changing business needs is: CA4: Mean 5.29464; SD 0.92647; Range 4 CB4: Mean 5.41964; SD 0.71834; Range 4 Service MP Indicator 2: (Multiplicative Assessment) The product of the two stakeholders’ responses (from Manufacturing and Quality or R&D) to the following: C5: In general, the responsiveness of the [Manufacturing] group to the [Quality or R&D] group is: CA5: Mean 5.18750; SD 0.92543; Range 4 CB5: Mean 5.27027; SD 0.79711; Range 4 Service MP Indicator 3: (Multiplicative Assessment) The product of the two stakeholders’ responses (from Manufacturing and Quality or R&D) to the following: C6: In general, the contribution that the [Manufacturing] group has made to the accomplishment of the [Quality or R&D] group’s strategic goals is: CA6: Mean 5.41071; SD 0.95441; Range 5 CB6: Mean 5.25893; SD 0.86728; Range 4 Service MP Construct: The mean of the above indicators, Mean 28.591; SD 7.294; Range 37.667. Manufacturing Performance Construct: The mean of Operational MP and Service MP constructs, Mean 20.988; SD 4.658; Range 21.25.


Information Technology and Manufacturing Performance (ITmp)

By means of the performance questionnaire (Type C), we measure the role and level of contribution of IT in supporting the performance of the Manufacturing group. We therefore use the marker (mp) to distinguish these from the IT indicators used in the relationship questionnaires (Type A and B).

ITmp Indicator 1: (Multiplicative Assessment, Mean 28.348; SD 7.673; Range 41)
C.A7: In general, the level of the Information Technology (IT) contribution to the [Manufacturing] group performance is: (Mean 5.17857; SD 0.91252; Range 5)
C.B7: In general, the level of the Information Technology (IT) contribution to the [Manufacturing] group performance is: (Mean 5.38393; SD 0.72591; Range 4)

ITmp Indicator 2: (General Assessment, Mean 5.3170; SD 0.8383; Range 3.5)
CA/B8: In general, the use of the Information Technology (IT) infrastructure among the three groups is: (Mean 5.22321; SD 0.94640; Range 4)

Information Technology and Manufacturing Performance Construct (ITmpC): The mean of the above indicators, Mean 16.833; SD 4.069; Range 21.75.

It is noticeable that no significant difference is observed between respondents to questionnaires A and B regarding questions C.1 to C.7. Questions CA/B.8, due to their nature, have been analyzed as one.

APPENDIX B: REGRESSIONS AND CONFIRMATORY TESTS

General Note: The symbols used in our study and in the MINITAB extracts included in the appendices correspond as follows: β = Coef, t = T, p = P, r = R-Sq, R² = R-Sq(adj), and F = F. ANOVA table symbols: DF = Degrees of Freedom, SS = Sums of Squares, MS = Mean Squares (SSR = SS Residual, SSTO = SS Total).

First Regression: MPC vs (MTC, MIC, SKC, ITmpC)

The regression equation is:
MPC = 6.98 + 0.354 MTC − 0.0364 MIC + 0.225 SKC + 0.259 ITmpC
(with MPC = media(OMPC, SMPC), MTC = media(MT1, MT2), MIC = media(MI1, MI2, MI3), SKC = media(SK1, SK2, SK3), and ITmpC = media(ITmp1, ITmp2))

Predictor | Coef | SE Coef | T | P | VIF
Constant | 6.981 | 1.873 | 3.73 | 0.000 |
MTC | 0.3535 | 0.1136 | 3.11 | 0.002 | 2.1
MIC | −0.03643 | 0.08470 | −0.43 | 0.668 | 1.5
SKC | 0.2248 | 0.1034 | 2.17 | 0.032 | 2.3
ITmpC | 0.25948 | 0.09151 | 2.84 | 0.005 | 1.1

S = 3.72201; R-Sq = 38.5%; R-Sq(adj) = 36.2%

Analysis of Variance
Source | DF | SS | MS | F | P
Regression | 4 | 926.38 | 231.60 | 16.72 | 0.000
Residual Error | 107 | 1482.31 | 13.85 | |
Total | 111 | 2408.69 | | |

Source | DF | Seq SS
MTC | 1 | 730.61
MIC | 1 | 12.91
SKC | 1 | 71.49
ITmpC | 1 | 111.38

Unusual Observations
Obs | MTC | MPC | Fit | SE Fit | Residual | St Resid
38 | 15.3 | 28.083 | 19.417 | 0.619 | 8.666 | 2.36R
58 | 8.0 | 9.250 | 17.651 | 1.010 | −8.401 | −2.35R
59 | 18.0 | 13.583 | 22.143 | 0.849 | −8.559 | −2.36R
107 | 20.8 | 18.917 | 23.830 | 1.523 | −4.913 | −1.45X

R denotes an observation with a large standardized residual. X denotes an observation whose X value gives it large influence.
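MINITAB output of this form can be reproduced with any OLS routine. The sketch below (Python with statsmodels, a placeholder data frame, and the construct names used above) mirrors the structure of the first regression; it is illustrative only, since the 112 observations behind these results are not published.

```python
import pandas as pd
import statsmodels.api as sm

# Placeholder frame standing in for the study's 112 manufacturing units.
df = pd.DataFrame({
    "MPC":   [19.4, 17.7, 22.1, 23.8, 15.2, 20.5, 18.3, 24.6],
    "MTC":   [15.3,  8.0, 18.0, 20.8, 12.1, 16.4, 14.7, 21.5],
    "MIC":   [14.2,  9.1, 17.5, 19.0, 11.3, 15.8, 13.9, 20.2],
    "SKC":   [18.0, 11.8, 20.0, 18.5, 13.6, 19.2, 16.4, 22.8],
    "ITmpC": [16.0, 12.5, 18.2, 21.0, 13.1, 17.3, 15.5, 20.4],
})

# First regression: MPC vs (MTC, MIC, SKC, ITmpC), with an intercept.
X = sm.add_constant(df[["MTC", "MIC", "SKC", "ITmpC"]])
fit = sm.OLS(df["MPC"], X).fit()

# MINITAB's Coef / SE Coef / T / P columns correspond to fit.params, fit.bse,
# fit.tvalues and fit.pvalues; R-Sq and R-Sq(adj) to fit.rsquared(_adj).
print(fit.summary())
```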

Second Regression: SKC vs (MTC, MIC, ITskC)

The regression equation is:
SKC = 1.08 + 0.639 MTC + 0.258 MIC + 0.101 ITskC

Predictor | Coef | SE Coef | T | P | VIF
Constant | 1.078 | 1.594 | 0.68 | 0.500 |
MTC | 0.63865 | 0.08285 | 7.71 | 0.000 | 1.3
MIC | 0.25800 | 0.07177 | 3.59 | 0.000 | 1.3
ITskC | 0.10137 | 0.04486 | 2.26 | 0.026 | 1.1

S = 3.38672; R-Sq = 58.4%; R-Sq(adj) = 57.3%

Analysis of Variance
Source | DF | SS | MS | F | P
Regression | 3 | 1739.43 | 579.81 | 50.55 | 0.000
Residual Error | 108 | 1238.74 | 11.47 | |
Total | 111 | 2978.17 | | |

Source | DF | Seq SS
MTC | 1 | 1496.66
MIC | 1 | 184.22
ITskC | 1 | 58.56

Unusual Observations
Obs | MTC | SKC | Fit | SE Fit | Residual | St Resid
3 | 17.5 | 9.333 | 16.556 | 0.763 | −7.222 | −2.19R
10 | 21.5 | 30.333 | 23.139 | 1.005 | 7.195 | 2.22R
13 | 23.8 | 17.833 | 25.144 | 0.603 | −7.311 | −2.19R
42 | 15.0 | 23.833 | 16.661 | 0.578 | 7.172 | 2.15R
48 | 15.5 | 16.167 | 16.116 | 1.227 | 0.051 | 0.02X
58 | 8.0 | 14.167 | 11.786 | 1.121 | 2.380 | 0.74X
64 | 17.3 | 10.000 | 20.044 | 0.461 | −10.044 | −2.99R
68 | 10.5 | 6.667 | 13.650 | 0.561 | −6.984 | −2.09R
74 | 21.5 | 15.333 | 24.367 | 0.585 | −9.034 | −2.71R
107 | 20.8 | 25.833 | 18.535 | 1.201 | 7.299 | 2.30RX

R denotes an observation with a large standardized residual. X denotes an observation whose X value gives it large influence.


Confirmatory Tests

1. Cronbach's Alphas

Cronbach's alphas have been calculated for all the variables involved, according to the formula:

$$\alpha \equiv \frac{n}{n-1}\left(1 - \frac{\sum_{i}\sigma^2_{\chi_i}}{\sigma^2_x}\right)$$

where, for the variables $\chi_1, \dots, \chi_i, \dots, \chi_n$, $\sigma^2_{\chi_i}$ is the variance of $\chi_i$ and $\sigma^2_x$ is the variance of $x = \sum_i \chi_i$. The resulting values are:

Shared Knowledge (SKC) = 0.9980971
Mutual Trust (MTC) = 0.99893219
Mutual Influence (MIC) = 0.99789307
Information Technology (ITskC) = 0.78191053
Information Technology (ITmpC) = 0.99919877
Manufacturing Performance (MPC) = 0.99870396
Operational Manufacturing Performance (OMPC) = 0.99935936
Service Manufacturing Performance (SMPC) = 0.81379442
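As a hedged illustration of the formula above, the following self-contained Python function computes Cronbach's alpha from an (observations × items) score matrix; the sample scores are hypothetical.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """alpha = n/(n-1) * (1 - sum(var(chi_i)) / var(sum_i chi_i))."""
    n = items.shape[1]                          # number of items chi_1..chi_n
    item_vars = items.var(axis=0, ddof=1)       # sigma^2 of each chi_i
    total_var = items.sum(axis=1).var(ddof=1)   # sigma^2 of x = sum of items
    return n / (n - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical scores for a two-item construct (e.g., MT1 and MT2).
scores = np.array([[5.0, 24.0], [6.0, 30.0], [4.5, 20.0], [5.5, 28.0]])
print(cronbach_alpha(scores))
```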

2. MTMM Correlation Matrix

Correlations: MT1, MT2, MI1, MI2, MI3, SK1, SK2, SK3, OMPC, SMPC, ITskC, ITmpC. (The MINITAB column labels, such as "MT2 = A5*B5" and "SK1 = media(A1, B1)", are abbreviated here to the indicator names.)

      | MT1   | MT2   | MI1   | MI2   | MI3   | SK1   | SK2   | SK3   | OMPC  | SMPC  | ITskC
MT2   | 0.682 |
MI1   | 0.574 | 0.478 |
MI2   | 0.260 | 0.327 | 0.691 |
MI3   | 0.371 | 0.493 | 0.737 | 0.714 |
SK1   | 0.581 | 0.612 | 0.583 | 0.400 | 0.464 |
SK2   | 0.608 | 0.569 | 0.485 | 0.375 | 0.449 | 0.597 |
SK3   | 0.612 | 0.650 | 0.603 | 0.373 | 0.574 | 0.767 | 0.603 |
OMPC  | 0.524 | 0.486 | 0.515 | 0.301 | 0.390 | 0.448 | 0.448 | 0.532 |
SMPC  | 0.457 | 0.506 | 0.477 | 0.163 | 0.303 | 0.395 | 0.351 | 0.490 | 0.691 |
ITskC | 0.279 | 0.287 | 0.338 | 0.156 | 0.335 | 0.348 | 0.273 | 0.281 | 0.390 | 0.460 |
ITmpC | 0.057 | 0.247 | 0.262 | 0.319 | 0.217 | 0.208 | 0.197 | 0.284 | 0.471 | 0.407 | 0.233
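For completeness, a correlation matrix of this kind can be recomputed from the indicator scores with a one-liner; the frame below is a hypothetical stand-in, as the indicator-level data are not published.

```python
import pandas as pd

# Hypothetical indicator scores; Pearson correlations, as in MINITAB's
# Correlations command, produce a matrix of the form shown above.
df = pd.DataFrame({
    "MT1": [5.0, 6.0, 4.5, 5.5, 4.0],
    "MT2": [24.0, 30.0, 20.0, 28.0, 18.0],
    "SK1": [5.2, 6.1, 4.8, 5.6, 4.3],
})
print(df.corr().round(3))
```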

This work was previously published in Social Knowledge: Using Social Media to Know What You Know, edited by John P. Girard and JoAnn L. Girard, pp. 207-235, copyright 2011 by Information Science Reference (an imprint of IGI Global).


Chapter 60

Group Decision Making for Advanced Manufacturing Technology Selection Using the Choquet Integral

Cengiz Kahraman, Istanbul Technical University, Turkey
Selçuk Çebi, Karadeniz Technical University, Turkey
İhsan Kaya, Selçuk University, Turkey

DOI: 10.4018/978-1-4666-1945-6.ch060

ABSTRACT Advanced manufacturing technology (AMT) is defined as a modern method of production incorporating highly automated and sophisticated computerized design and operational systems. Hence, an investment decision to adopt AMT is a strategic decision. A group decision making process is stressful when group members have different views under multiple and conflicting criteria. Satisfying group members’ opinions has a critical impact on a decision. In this chapter, a multiple criteria group decision making problem under a fuzzy environment is used for the selection among AMTs. Choquet integral methodology is used for this selection. A strategic investment problem of a company for a suitable Automated Storage/Retrieval System (AS/RS) is considered and discussed.

INTRODUCTION

The developments of science and technology have led to many new concepts and products, which are replacing the old ones. Flexibility, improvements in productivity and quality, faster response to market shifts, shorter throughput and lead times, and savings in inventory and labor costs enable customer demands to be met in a shorter time. Changing customer preferences and tastes oblige the manufacturer to change his products frequently. Increased consumer awareness has led to the manufacture of high-quality goods.



The manufacturing process has to be faster to meet market demands at the appropriate time and to overcome competition. All these factors have led to changes in manufacturing processes, which have prompted many manufacturers to adopt computer-integrated manufacturing, namely advanced manufacturing technology (Aravindan & Punniyamoorthy, 2002). The term "advanced manufacturing technology" (AMT) is broadly defined to include any automated (usually computer-oriented) technology used in design, manufacturing/service, and decision support. Components of AMT include computer-aided engineering, factory management and control systems, computer-integrated manufacturing processes, and information integration. Different researchers have defined AMT in various ways. Typical items constituting AMT include robotics, automated guided vehicles (AGVs), computer numerically controlled (CNC) machines, flexible manufacturing systems (FMS), computer-aided design (CAD), computer-aided manufacturing (CAM), information technology, cellular manufacturing, and the use of the just-in-time (JIT)/kanban system in the plant (Das and Jayaram, 2003). AMTs provide many important benefits such as greater manufacturing flexibility, reduced inventory, reduced floor space, faster response to shifts in market demand, lower lead times, and a longer useful life of equipment over successive generations of products. Like many real-world problems, the decision to invest in advanced manufacturing technology frequently involves multiple and conflicting objectives, such as minimizing costs, maximizing flexibility, minimizing machine down times and maximizing efficiency. The application of traditional capital budgeting methods does not fully account for the benefits arising from the intangible factors of AMTs (Kahraman et al., 2000). The rapid growth of the AMT industry is creating problems in new directions. Prospective firms face the situation of having to make a decision among several AMTs, all of which are


capable of performing a specific task. Since the development and use of appropriate assessment approaches are crucial to ensuring that the analysis of each AMT project considers all benefits and costs, the selection of a suitable AMT is becoming a more and more complex problem. The evaluation of AMT is therefore a multiple criteria problem, and group decision making is generally preferred for solving these kinds of problems (Chuu, 2009). Group decision making requires considering multiple perspectives obtained from a group consisting of multiple members (Lu et al., 2007). A group decision is required in two situations: (1) when a problem becomes so complex that the knowledge of a single decision maker is inadequate, as in product design, investment decisions and supplier selection, and (2) when there are conflicting ideas that influence the decision makers, as in presidential elections. While the first is called cooperative group decision making, the second is called non-cooperative decision making. The common feature of both decision making problems is the need to satisfy multiple decision makers' preferences. Therefore, there are several kinds of group decision making methods in the literature, such as the authority rule, majority rule, negative minority rule, ranking rule, and consensus rule. In this chapter, a multiple criteria group decision making problem under a fuzzy environment is discussed to select an appropriate AMT by using the Choquet integral, since the problem is related to both subjective and objective criteria. For a long time, it has been recognized that an exact description of many real-life physical situations may be virtually impossible. This is due to the high degree of imprecision involved in real-world situations. Zadeh (1965, 1968) in his seminal papers proposed fuzzy set theory as the means for quantifying the inherent fuzziness that is present in ill-posed problems. Fuzziness is a type of imprecision which may be associated with sets in which there is no sharp transition from membership to nonmembership. Many problems


in the real world deal with uncertain and imprecise data, so conventional approaches cannot effectively find the best solution. To cope with this uncertainty, fuzzy set theory has been developed as an effective mathematical algebra for vague environments. Although humans are comparatively efficient in qualitative forecasting, they are not good at making quantitative predictions. Since fuzzy linguistic models permit the translation of verbal expressions into numerical ones, thereby dealing quantitatively with imprecision in the expression of the importance of each criterion, some methods based on fuzzy relations are used. When a system involves human subjectivity, fuzzy algebra provides a mathematical framework for integrating imprecision and vagueness into the models (Zimmermann, 1991; Kaya & Çınar, 2008). A major contribution of fuzzy set theory is its capability of representing vague knowledge. In multiple criteria decision making problems, data very often are imprecise and fuzzy. Hence, a decision maker may encounter difficulty in quantifying such linguistic statements so that they can be used in deterministic decision making. The rest of this chapter is organized as follows: In Section 2, the basics of group decision making and a literature review on AMTs are presented. The selection criteria are discussed in Section 3. The fundamentals of the Choquet integral are given in Section 4. A case study is presented in Section 5. Finally, concluding remarks are given in Section 6.

LITERATURE REVIEW: GROUP DECISION MAKING AND AMTS

Group decision making consists of multiple individuals interacting to reach a decision. Each decision maker (expert) may have unique motivations or goals and may approach the decision process from a different angle, but all have a common interest in reaching eventual agreement on selecting the "best" option(s). To do this, experts have to express

their preferences by means of a set of evaluations over a set of alternatives (Herrera-Viedma et al., 2007). Group decision-making studies reveal that differences in the process by which groups reach decisions may help explain the different findings reported within the group decision making literature. In some cases, group members privately consider information about a problem and form individual judgments before meeting as a group. In other cases, groups are exposed to problems for the first time as a group (Moon et al., 2003). The main characteristics of group decisions are described as follows (Lu et al., 2007): (1) A group performs a decision-making task. (2) A group decision covers the whole process of transfer from generating ideas for solving a problem to implementing solutions. (3) Group members may be located in the same or different places. (4) Group members may work at the same or different times. (5) Group members may work for the same or different departments or organizations. (6) The group can be at any managerial level. (7) There can be conflicting opinions among group members in a group decision process. (8) The decision task might have to be accomplished in a short time. (9) Group members might not have complete information for decision tasks. (10) Some required data, information or knowledge for a decision may be located in many sources, and some may be external to the organization. In the following, some studies on AMTs from the literature are briefly summarized; the results of the literature review will then be compared in Table 1. Demmel and Askin (1992) introduced a multiple objective decision model for the evaluation of AMT, whereby the AMT selection attributes were classified into three categories: (1) pecuniary objectives, (2) strategic objectives, and (3) tactical objectives. The pecuniary objective is based upon discounted cash flow techniques. Shang and Sueyoshi (1995) proposed a selection procedure for an FMS employing AHP, simulation, and data envelopment analysis (DEA). Sambasivarao


and Deshmukh (1997) devised a decision support system for the selection of AMT. The system provided decision-makers with a base of knowledge and approaches to building a justification model, and emphasized that AMT attributes could be classified into two groups, namely, tangible (objective) and intangible (subjective). They suggested AHP, TOPSIS, and a linear additive utility model as alternative multi-attribute analysis methods. Small and Chen (1997) discussed the results of a survey conducted in the US that investigated the use of justification approaches for AMS. According to their findings, manufacturing firms using hybrid strategies, which employ both economic and strategic justification techniques, attain significantly higher levels of success from advanced technology projects. Kotha and Swamidass (1998) hypothesized that the nationality of the firm was an important factor in AMT use. To test the effect of the nationality variable on AMT use, they compared the use of 18 AMTs in the US and Japan in an exploratory study using data from 160 US firms and 125 Japanese firms. There was clear evidence that the nationality of the firm was a factor in AMT use; that is, AMT use was significantly different in the two countries. O'Kane et al. (2000) reported that simulation modeling could be used for the justification phase of AMT. Efstathiades et al. (2000) investigated the transfer and implementation processes of AMT in developing countries using the Cypriot manufacturing industry as a case study. They also addressed two critical steps: (1) the management processes followed during the transfer of technology into the manufacturing environment, and (2) the steps followed both before and after implementation and productive operation of the technologies. They indicated that despite the distance between the manufacturer and the technology suppliers, no difficulties were experienced in acquiring information about the available technologies and the suitability of these technologies for the specific manufacturing environment.


Group Decision Making for Advanced Manufacturing Technology Selection Using the Choquet Integral

the AMS investment alternatives. Abdel-Kader and Dugdale (2001) proposed an FMADM model for assessing AMT investments. This model applied the mathematics of the analytic hierarchy process and fuzzy set theory to aggregate the two major dimensions of financial and non-financial attributes. Gupta and Whitehouse (2001) examined the impact of the size of an organization on its manufacturing strategy. Aravindan and Punniyamoorthy (2002) developed a new model to justify investment in AMT using the extended Brown–Gibson model, which was developed for evaluating alternative plant locations using certain objective and subjective factors. They showed that investment in AMT was attractive if the benefits accruing from the subjective factors were considered. Yurdakul (2004) utilized a combination of AHP and goal programming for selection among computer-integrated manufacturing technologies. The model used AHP to determine attribute weights in goal programming. Sener and Karsak (2007) presented a new decision making approach based on fuzzy regression with non-symmetric coefficients and fuzzy optimization for FMS selection. Sener and Karsak (2008) proposed a decision model based on fuzzy linear regression and fuzzy multiple objective programming for AMT selection. Thakur and Jain (2008) explored the issues of measurement and comparison of the current state of AMT adoption in India, including important information technology (IT) factors. Comparisons were made between Indian firms, firms in a developed country (Canada), and firms in a developing country (China) for a worldwide perspective. Chang and Wang (2009) investigated the potential use of AHP by applying consistent fuzzy preference relations to predict AMT implementation. They pointed out that the AHP method was not efficient enough because it performed complex computation procedures in paired comparisons and in obtaining consistency indicators. To reduce the number of judgments and

avoid checking inconsistency, they employed consistent fuzzy preference relations, which inherit some advantages of AHP (distinct hierarchy, effective numerical assessment), to overcome certain drawbacks resulting from the conventional pairwise comparison approach. They also reviewed several methods that are utilized to fulfill the requirements of AMT selection, evaluation, assessment, justification and prediction, such as simple estimation methods, break-even approaches, systems value analysis, scoring models, the analytic hierarchy process, simulation, and strategic methods. Chuu (2009) developed a fuzzy multiple attribute decision-making method for group decision-making to improve the advanced manufacturing technology selection process. In the proposed approach, a new fusion method of fuzzy information was developed for managing information assessed in different linguistic (multi-granularity linguistic term sets) and numerical scales, and an application to the Taiwanese bicycle industry was presented. Costa and Lima (2009) presented the rationale for organizational design development related to AMT adoption. For the development of the organizational design process, a specified sequence of activities was suggested, in order to consider all the dimensions that define the content of the organizational design. A result of the literature review is that, although various MCDM methods have been applied to the AMT selection problem, the Choquet integral has not been used to date. In this chapter, the Choquet integral is used for the first time as a group decision making technique for the solution of the AMT selection problem. In Table 1, the results of the literature review are summarized in a comparative manner with respect to methods, purpose, and criteria.


Table 1. Categorization and comparison of related works for AMT decision-making

The method or technique | Author(s) | Purpose | Criteria
Analytic hierarchy process (AHP) | Yurdakul (2004) | Selection | Intangible/tangible
Analytic hierarchy process (AHP) | Punniyamoorthy and Ragavan (2003) | Selection | Intangible/tangible
Analytic hierarchy process (AHP) | Yusuff et al. (2001) | Prediction | Intangible
Analytic hierarchy process (AHP) | Kengpol and O'Brien (2001) | Selection | Intangible/tangible
Analytic hierarchy process (AHP) | Mohanty and Deshmukh (1998) | Selection | Intangible/tangible
Analytic hierarchy process (AHP) | Chan et al. (1999) | Selection | Intangible/tangible
Analytic hierarchy process (AHP) | Albayrakoglu (1996) | Justification |
Analytic hierarchy process (AHP) | Oeltjenbruns et al. (1995) | Selection | Intangible/tangible
Analytic hierarchy process (AHP) | Weber (1993) | Selection | Intangible/tangible
Analytic hierarchy process (AHP) | Datta et al. (1992) | Justification | Intangible/tangible
Fuzzy AHP | Chan et al. (2006) | Selection | Intangible/tangible
Fuzzy multiple attribute decision-making | Karsak and Tolga (2001) | Evaluation | Intangible/tangible
Fuzzy multiple attribute decision-making | Abdel-Kader and Dugdale (2001) | Selection | Intangible/tangible
Fuzzy group decision making | Chuu (2009) | Selection | Intangible/tangible
Fuzzy group decision making | Chuu (2009a) | Selection | Intangible/tangible
Traditional financial justification method | Orr (2002) | Evaluation | Intangible/tangible
Decision support systems | Sambasivarao and Deshmukh (1997) | Selection | Intangible/tangible
Simulation modeling | O'Kane et al. (2000) | Justification |
Discounted cash flow (DCF) | Aravindan and Punniyamoorthy (2002) | Justification |
Discounted cash flow (DCF) | Demmel and Askin (1992) | Evaluation | Tangible
Data envelopment analysis (DEA) | Talluri et al. (2000) | Selection | Tangible
Data envelopment analysis (DEA) | Talluri and Yoon (2002) | Selection | Tangible
Data envelopment analysis (DEA) | Shang and Sueyoshi (1995) | Selection | Tangible
Mixed nonlinear programming model | Verter and Dasci (2002) | Planning | Tangible
Integrated planning model | Bokhorst et al. (2002) | Selection | Tangible
System-wide benefits value analysis | Ordoobadi and Mulvaney (2001) | Selection | Tangible
Cluster analysis | Díaz et al. (2003) | Examining | Intangible/tangible
Cluster analysis | Chen and Small (1994) | Planning | Intangible
Cluster analysis | Efstathiades et al. (2002) | Planning | Intangible/tangible
Real option valuation technique | MacDougall and Pike (2003) | | Intangible/tangible
Fuzzy regression | Sener and Karsak (2007) | Selection | Intangible/tangible
Fuzzy regression | Sener and Karsak (2008) | Selection | Intangible/tangible


SELECTION CRITERIA FOR ADVANCED MANUFACTURING TECHNOLOGY

Many companies are currently strengthening their competitive positions by updating the technologies used in their manufacturing processes. The selection of the appropriate technology, one that achieves or matches the organization's objectives, must be made on the basis of a sound decision-making process. The decision to implement AMT is a major decision for many organizations. The success or failure of the organization could be due to this decision, and it is therefore very important that proper consideration be given to all aspects of implementation before a final commitment is made. This is necessary to ensure that all the expected benefits of the decision are realized (Yusuff et al., 2001). Chan et al. (1999) used the criteria in Table 2 to evaluate the tangible and intangible benefits for the selection of AMT.

Table 2. Criteria to evaluate AMT investments

1. Cost: Product cost; Maintenance cost; High rate of return; Labor cost; Material cost
2. Performance: Compatibility with existing machine; Work morale; Productivity; Utilization; Machine breakdown; Human integration
3. Quality: Scraped value; Rework; Conformance; Consistency
4. Delivery: Transportation; Customer services; Time scheduling; Delivery time; Inventory/work in progress; Lead time
5. Flexibility: Design change accommodation; Change in product mix; Market responsiveness; Capacity growth; Routing and scheduling flexibility
6. Innovativeness: Research and development; Introduce product variation

THE CHOQUET INTEGRAL

In this chapter, we are interested in using the Choquet integral, which is a generalization of the Lebesgue integral, defined with respect to a non-classical measure, often called a fuzzy measure, a non-additive measure, or a capacity. When the underlying universe is finite, the Lebesgue integral reduces to a (convex) linear combination and hence can be assimilated to a particular class of regression models in which the coefficients are all positive and sum up to one. Hence, the Choquet integral offers a more general model; more precisely, it offers a set of (convex) linear models, each of them defined on a simplex (Grabisch et al., 2007). The Choquet integral has been applied to multi-criteria evaluation by many researchers (Ishii & Sugeno, 1985; Onisawa et al., 1986; Labreuche & Grabisch, 2003; Kojadinovic, 2004; Karsak, 2005; Tsai & Lu, 2006; Grabisch et al., 2007; Berrah et al., 2008a, 2008b; Schmitt et al., 2008; Saad et al., 2008; Narukawa & Murofushi, 2008; Shieh et al., 2009). The Choquet integral is a method which measures the expected utility of an uncertain event. In imprecise probability theory, the Choquet integral is used to calculate the lower expectation induced by a 2-monotone lower probability, or the upper expectation induced by a 2-alternating upper probability. A fuzzy integral is a sort of general averaging operator

that can represent the notions of importance of a criterion and interactions among criteria. The most important feature of a fuzzy integral is its ability to represent a certain kind of interaction among criteria, ranging from redundancy (negative interaction) to synergy (positive interaction). According to Grabisch (1998), there is almost no well-established method to deal with interacting criteria, and usually people tend to avoid the problem by constructing independent criteria (or criteria that are supposed to be so). This ability of the fuzzy integral is the reason for its success in various fields of multi-criteria decision-making. The disadvantage of the fuzzy integral is the complexity of the model, since the number of coefficients involved in a fuzzy integral model grows exponentially with the number of criteria to be aggregated. The main difficulty is to identify all these coefficients, either from some learning data, or by a questionnaire, or both. To define fuzzy integrals, a set of values of importance is needed. This set is composed of the values of a fuzzy measure, so a value of importance is needed for each subset of attributes. In the following, some definitions are given to explain the basics of the Choquet integral (Modave & Grabisch, 1998):

Definition 1. Let I be the set of attributes (or any set in a general setting). A set function µ: P(I) → [0, 1] is called a fuzzy measure if it satisfies the three following axioms:

• µ(Ø) = 0: an empty set has no importance;
• µ(I) = 1: the maximal set has maximal importance;
• µ(B) ≤ µ(C) if B, C ⊆ I and B ⊆ C: a newly added criterion cannot make the importance of a coalition (a set of criteria) diminish.

Therefore, in a problem where card(I) = n, a value for every element of P(I), i.e., 2^n values, is needed. Assuming that the values of the empty set and of the maximal set are fixed, (2^n − 2) values or coefficients are needed to define a fuzzy measure. So, there is clearly a trade-off between complexity and accuracy. However, the complexity can be significantly reduced in order to guarantee that fuzzy measures are usable in practical applications. A fuzzy integral is a sort of weighted mean taking into account the importance of every coalition of criteria.

Definition 2. Let µ be a fuzzy measure on (I, P(I)) and f: I → ℝ⁺ an application. The Choquet integral of f with respect to µ is defined by:

$$(C)\int_I f\,d\mu = \sum_{i=1}^{n}\left(f(\sigma(i)) - f(\sigma(i-1))\right)\mu\left(A_{(i)}\right) \qquad (1)$$

where σ is a permutation of the indices such that $f(\sigma(1)) \le \dots \le f(\sigma(n))$, $A_{(i)} = \{\sigma(i), \dots, \sigma(n)\}$, and $f(\sigma(0)) = 0$ by convention.
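Definition 2 translates directly into code. The following Python sketch is a minimal implementation of Equation (1) over a finite attribute set; the two-criterion measure at the bottom is illustrative only and is not taken from the chapter.

```python
def choquet_integral(f: dict, mu: dict) -> float:
    """Discrete Choquet integral of f with respect to the fuzzy measure mu
    (Equation (1)). `f` maps each attribute to its value; `mu` maps each
    frozenset of attributes to its measure, with mu({}) = 0 and mu(I) = 1."""
    order = sorted(f, key=f.get)   # sigma: f(sigma(1)) <= ... <= f(sigma(n))
    total, prev = 0.0, 0.0         # f(sigma(0)) = 0 by convention
    for i, attr in enumerate(order):
        a_i = frozenset(order[i:])             # A_(i) = {sigma(i),...,sigma(n)}
        total += (f[attr] - prev) * mu[a_i]
        prev = f[attr]
    return total

# Toy example with two criteria; the measure values are illustrative only.
mu = {frozenset(): 0.0, frozenset({"c1"}): 0.3,
      frozenset({"c2"}): 0.5, frozenset({"c1", "c2"}): 1.0}
print(choquet_integral({"c1": 0.6, "c2": 0.4}, mu))   # -> 0.46
```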

It is easy to see that the Choquet integral is a Lebesgue integral up to a reordering of the indices. Actually, if the fuzzy measure µ is additive, then the Choquet integral reduces to a Lebesgue integral. Now we will illustrate how fuzzy measures can be used in lieu of the weighted sum and other more traditional aggregation operators in a multi-criteria decision making framework. It is shown in Modave and Grabisch (1998) that, under rather general assumptions over the set of alternatives X and over the weak orders ≻_i, there exists a unique fuzzy measure µ over I such that:

$$\forall x, y \in X, \quad x \succeq y \Leftrightarrow u(x) \ge u(y) \qquad (2)$$

where

$$u(x) = \sum_{i=1}^{n}\left(u_{(i)}(x_{(i)}) - u_{(i-1)}(x_{(i-1)})\right)\mu\left(A_{(i)}\right) \qquad (3)$$

which is simply the aggregation of the monodimensional utility functions using the Choquet integral with respect to µ. The global importance of a criterion is given by evaluating what this criterion brings to every coalition it does not belong to, and averaging this input. This is given by the Shapley value, or index of importance.

Definition 3. Let µ be a fuzzy measure over I. The Shapley value of index j is defined by:

$$v(j) = \sum_{B \subseteq I \setminus \{j\}} \gamma_I(B)\left[\mu(B \cup \{j\}) - \mu(B)\right] \qquad (4)$$

with $\gamma_I(B) = \dfrac{(|I| - |B| - 1)!\;|B|!}{|I|!}$, where $|B|$ denotes the cardinal of B.

The Shapley value can be extended to degree two, in order to define the indices of interactions between attributes.

Definition 4. Let µ be a fuzzy measure over I. The interaction index between i and j is defined by:

$$I(i,j) = \sum_{B \subseteq I \setminus \{i,j\}} \xi_I(B)\left[\mu(B \cup \{i,j\}) - \mu(B \cup \{i\}) - \mu(B \cup \{j\}) + \mu(B)\right] \qquad (5)$$

with $\xi_I(B) = \dfrac{(|I| - |B| - 2)!\;|B|!}{(|I| - 1)!}$.

The interaction indices belong to the interval [−1, +1] and:
• I(i, j) > 0 if the attributes i and j are complementary;
• I(i, j) < 0 if the attributes i and j are redundant;
• I(i, j) = 0 if the attributes i and j are independent.
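A hedged Python sketch of Definitions 3 and 4 follows; the toy measure reuses the illustrative values introduced after Equation (1), for which the Shapley values sum to one and the single interaction index is positive (complementary criteria).

```python
from itertools import chain, combinations
from math import factorial

def subsets(s):
    """All subsets of the iterable s, as tuples."""
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

def shapley(mu, attrs, j):
    """Shapley importance of attribute j (Equation (4))."""
    n, rest = len(attrs), [a for a in attrs if a != j]
    value = 0.0
    for b in map(frozenset, subsets(rest)):
        gamma = factorial(n - len(b) - 1) * factorial(len(b)) / factorial(n)
        value += gamma * (mu[b | {j}] - mu[b])
    return value

def interaction(mu, attrs, i, j):
    """Interaction index between attributes i and j (Equation (5))."""
    n, rest = len(attrs), [a for a in attrs if a not in (i, j)]
    value = 0.0
    for b in map(frozenset, subsets(rest)):
        xi = factorial(n - len(b) - 2) * factorial(len(b)) / factorial(n - 1)
        value += xi * (mu[b | {i, j}] - mu[b | {i}] - mu[b | {j}] + mu[b])
    return value

mu = {frozenset(): 0.0, frozenset({"c1"}): 0.3,
      frozenset({"c2"}): 0.5, frozenset({"c1", "c2"}): 1.0}
print(shapley(mu, ["c1", "c2"], "c1"))          # -> 0.4 (and 0.6 for c2)
print(interaction(mu, ["c1", "c2"], "c1", "c2"))  # -> 0.2, complementary
```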

Interactions of higher orders can also be defined; however, we will restrict ourselves to second-order interactions, which offer a good trade-off between accuracy and complexity.

Definition 5. A fuzzy measure µ is called 2-additive if all its interaction indices of order equal to or larger than 3 are null and at least one interaction index of degree two is not null. In this particular case of 2-additive measures, the following theorem can be given:

Theorem. Let µ be a 2-additive measure. Then the Choquet integral can be computed by:

$$(C)\int_I f\,d\mu = \sum_{I_{ij} > 0}\left(f(i) \wedge f(j)\right)I_{ij} + \sum_{I_{ij} < 0}\left(f(i) \vee f(j)\right)\left|I_{ij}\right| + \sum_{i=1}^{n} f(i)\left(I_i - \frac{1}{2}\sum_{j \neq i}\left|I_{ij}\right|\right) \qquad (6)$$

Note that this expression justifies the above interpretation of the interaction indices, as a positive interaction index corresponds to a conjunction (complementarity) and a negative interaction index corresponds to a disjunction (redundancy). The success of a Choquet integral depends on an appropriate representation of the fuzzy measures, one that captures the importance of individual criteria or of their combinations. In this chapter, the generalized Choquet integral proposed by Auephanwiriyakul et al. (2002) will be used, in which measurable evidence is represented in terms of intervals whereas fuzzy measures are real numbers; it is an extension of the standard Choquet integral. In contrast to Auephanwiriyakul et al. (2002), Tsai and Lu (2006) propose another generalization that involves linguistic expressions as well as information fusion between criteria to overcome the vagueness and imprecision of linguistic terms in questionnaires.
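The theorem above reduces the Choquet integral to the Shapley indices I_i and the pairwise interactions I_ij. The following Python sketch encodes Equation (6) directly; with the illustrative values shown (which match the toy measure used after Equation (1)), it reproduces the same aggregate, 0.46, as the generic implementation.

```python
def choquet_2additive(f, shapley_idx, interactions):
    """Choquet integral for a 2-additive measure (Equation (6)).
    `shapley_idx[i]` is I_i; `interactions[(i, j)]` is I_ij with i < j."""
    total = 0.0
    for (i, j), I_ij in interactions.items():
        if I_ij > 0:                      # complementary pair: conjunction
            total += min(f[i], f[j]) * I_ij
        elif I_ij < 0:                    # redundant pair: disjunction
            total += max(f[i], f[j]) * abs(I_ij)
    for i, I_i in shapley_idx.items():
        penalty = sum(abs(I) for pair, I in interactions.items() if i in pair)
        total += f[i] * (I_i - 0.5 * penalty)
    return total

# Illustrative values matching the earlier toy measure.
f = {"c1": 0.6, "c2": 0.4}
print(choquet_2additive(f, {"c1": 0.4, "c2": 0.6}, {("c1", "c2"): 0.2}))
```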

Steps of the Methodology

The methodology is composed of eight steps (Tsai and Lu, 2006):

Step 1: Given criterion i, respondents' linguistic preferences for the degree of importance, the perceived performance levels of the alternative locations, and the tolerance zone are surveyed.

Step 2: In view of the compatibility between perceived performance levels and the tolerance zone, trapezoidal fuzzy numbers are used to quantify all linguistic terms in this study. Given respondent t and criterion i, the linguistic terms for the degree of importance (the weight of a criterion) are parameterized by $\tilde{A}_i^t = (a_{i1}^t, a_{i2}^t, a_{i3}^t, a_{i4}^t)$, the perceived performance levels (the score of an alternative) by $\tilde{p}_i^t = (p_{i1}^t, p_{i2}^t, p_{i3}^t, p_{i4}^t)$, and the tolerance zone (the specifications of a criterion) by $\tilde{e}_i^t = (e_{i1}^{tL}, e_{i2}^{tL}, e_{i3}^{tU}, e_{i4}^{tU})$. In this case study, t = 1, 2, 3, 4, 5, i = 1, 2, …, n_j, j = 1, 2, 3, 4, n_1 = 3, n_2 = 2, n_3 = 4, n_4 = 3, where n_j represents the number of criteria in dimension j.

Step 3: Average $\tilde{A}_i^t$, $\tilde{p}_i^t$ and $\tilde{e}_i^t$ into $\tilde{A}_i$, $\tilde{p}_i$, and $\tilde{e}_i$, respectively, using Equation (7):

$$\tilde{A}_i = \frac{\sum_{t=1}^{k} \tilde{A}_i^t}{k} = \left(\frac{\sum_{t=1}^{k} a_{i1}^t}{k},\; \frac{\sum_{t=1}^{k} a_{i2}^t}{k},\; \frac{\sum_{t=1}^{k} a_{i3}^t}{k},\; \frac{\sum_{t=1}^{k} a_{i4}^t}{k}\right) \qquad (7)$$

Step 4: Normalize the location value of each criterion using Equation (8):

$$\tilde{f}_i = \bigcup_{\alpha \in [0,1]} f_{i\alpha} = \bigcup_{\alpha \in [0,1]} \left[f^-_{i,\alpha},\, f^+_{i,\alpha}\right] \qquad (8)$$

where $\tilde{f}_i \in F(S)$ is a fuzzy-valued function, $F(S)$ is the set of all fuzzy-valued functions, $f_{i\alpha} = \left[f^-_{i,\alpha}, f^+_{i,\alpha}\right] = \dfrac{p_{i\alpha} - e_{i\alpha} + [1,1]}{2}$, and $p_{i\alpha}$ and $e_{i\alpha}$ are the α-level cuts of $\tilde{p}_i$ and $\tilde{e}_i$, for all $\alpha \in [0,1]$.

Step 5: Find the location value of dimension j using Equation (9):

$$(C)\int \tilde{f}\,dg = \bigcup_{\alpha \in [0,1]} \left[(C)\int f^-_{\alpha}\,dg^-_{\alpha},\; (C)\int f^+_{\alpha}\,dg^+_{\alpha}\right] \qquad (9)$$

where $g_i: P(S) \to I(\mathbb{R}^+)$, $g_{i\alpha} = [g^-_{i,\alpha}, g^+_{i,\alpha}]$, $g_i = [g^-_i, g^+_i]$, $f_i: S \to I(\mathbb{R}^+)$, and $f_i = [f^-_i, f^+_i]$ for $i = 1, 2, 3, \dots, n_j$. To be able to calculate this location value, a λ value and the fuzzy measures $g(A_{(i)})$, $i = 1, 2, \dots, n$, are needed. These are obtained from Equations (10)–(12) (Sugeno, 1974; Ishii and Sugeno, 1985):

$$g(A_{(n)}) = g(\{s_{(n)}\}) = g_n \qquad (10)$$

$$g(A_{(i)}) = g_i + g(A_{(i+1)}) + \lambda\, g_i\, g(A_{(i+1)}), \quad \text{where } 1 \le i < n \qquad (11)$$

$$1 = g(S) = \begin{cases} \dfrac{1}{\lambda}\left(\displaystyle\prod_{i=1}^{n}\left(1 + \lambda\, g(A_i)\right) - 1\right), & \text{if } \lambda \neq 0 \\ \displaystyle\sum_{i=1}^{n} g(A_i), & \text{if } \lambda = 0 \end{cases} \qquad (12)$$

where $A_i \cap A_j = \emptyset$ for all $i, j = 1, 2, 3, \dots, n$ with $i \neq j$, and $\lambda \in (-1, \infty]$.

Step 6: Aggregate all dimensional performance levels of the location alternatives into overall performance levels, using a hierarchical process that applies the two-stage aggregation of the generalized Choquet integral.
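The computations in Steps 3–5 are easy to prototype. The following minimal Python sketch works under stated assumptions: interval subtraction [a, b] − [c, d] = [a − d, b − c] for Equation (8), scipy's brentq root finder for Equation (12), illustrative density values, and the assumption that Table 4's "I" label corresponds to the (0.6, 0.7, 0.8, 0.9) row of Table 5.

```python
import numpy as np
from scipy.optimize import brentq

# Equation (7): component-wise average of trapezoidal fuzzy numbers. The
# ratings below are the three importance judgments for C11 in Table 4
# (I, I, VI), quantified through Table 5.
ratings = np.array([[0.6, 0.7, 0.8, 0.9],
                    [0.6, 0.7, 0.8, 0.9],
                    [0.8, 1.0, 1.0, 1.0]])
print(ratings.mean(axis=0))  # -> [0.667 0.8 0.867 0.933], Table 6 row C11

def normalize_alpha_cut(p_cut, e_cut):
    """Equation (8) at one alpha level: f = (p - e + [1,1]) / 2."""
    return ((p_cut[0] - e_cut[1] + 1.0) / 2.0,
            (p_cut[1] - e_cut[0] + 1.0) / 2.0)

# alpha = 0 cut of the deep-lane AS/RS rating on C51 against its tolerance
# zone (Table 6 values); reproduces the [0.283, 0.783] worked out later.
print(normalize_alpha_cut((0.467, 0.767), (0.2, 0.9)))

def sugeno_lambda(densities):
    """Solve prod(1 + lam*g_i) = 1 + lam, i.e. Equation (12), for lam > -1."""
    def eq(lam):
        prod = 1.0
        for g in densities:
            prod *= 1.0 + lam * g
        return prod - (1.0 + lam)
    s = sum(densities)
    if abs(s - 1.0) < 1e-9:          # additive case: lam = 0
        return 0.0
    # the nonzero root is positive when sum(g_i) < 1, in (-1, 0) otherwise
    return brentq(eq, 1e-9, 1e6) if s < 1 else brentq(eq, -1 + 1e-9, -1e-9)

def nested_measures(densities):
    """g(A_(i)) over the nested sets of Equations (10) and (11)."""
    lam = sugeno_lambda(densities)
    g = [densities[-1]]              # Equation (10): g(A_(n)) = g_n
    for gi in reversed(densities[:-1]):
        g.append(gi + g[-1] + lam * gi * g[-1])   # Equation (11)
    return lam, list(reversed(g))

lam, tails = nested_measures([0.3, 0.4, 0.2])  # illustrative densities
print(lam, tails)                              # tails[0] is ~1.0 = g(S)
```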


This two-stage aggregation is represented in Equation (13):

$$\left.\begin{aligned} \text{main criterion}(1) &= (C)\int \tilde{f}\,dg \\ &\;\;\vdots \\ \text{main criterion}(m) &= (C)\int \tilde{f}\,dg \end{aligned}\right\} \quad \tilde{V} = (C)\int \text{main criterion}\; dg \qquad (13)$$

The overall performance levels yield a fuzzy number $\tilde{V}$.

Step 7: Assume that the membership function of $\tilde{V}$ is $\mu_{\tilde{V}}(x)$; defuzzify the fuzzy number $\tilde{V}$ into a crisp value v using Equation (14) and compare the overall performance levels of the alternative locations:

$$F(\tilde{A}) = \frac{a_1 + a_2 + a_3 + a_4}{4} \qquad (14)$$

Step 8: Compare weak and advantageous criteria among the location alternatives using Equation (8).

A CASE STUDY

Flexible material flow, delivery to the point of use, small lots, short production lead times, and high inventory turnover characterize modern manufacturing and distribution operations. Therefore, an automated storage and retrieval system (AS/RS) is used to satisfy the necessities of modern manufacturing systems. An AS/RS consists of a variety of computer-controlled methods for automatically placing and retrieving loads from specific storage locations. The high throughput and short processing times of automated storage/retrieval systems promote improved inventory management practices and rapid material handling, and play an essential role in integrated manufacturing and distribution systems. Additional AS/RS benefits generally include high space utilization, labour savings, protection from pilferage and damage, better safety, and real-time material control (Park, 2004).

Table 3. The criteria of AS/RS selection and their symbols

C1. Cost: C11. Product cost; C12. Maintenance cost; C13. Labor cost; C14. Material cost; C15. Operating cost; C16. Breakdown cost
C2. Performance: C21. Compatibility with existing storehouse; C22. Productivity; C23. Utilization; C24. Rate of breakdown; C25. Human integration
C3. Quality: C31. Service Time; C32. Consistency
C4. Flexibility: C41. Design change accommodation; C42. Change in product mix; C43. Market responsiveness; C44. Capacity
C5. Innovativeness: C51. Research and development; C52. Introduce product variation


Table 4. Individual importances of criteria, the tolerance intervals, and linguistic evaluations for alternatives

Criteria | Importance (E1, E2, E3) | Tolerance Interval | Deep Line AS/RS (E1, E2, E3) | Unit Load AS/RS (E1, E2, E3)
C1. Cost | VI, I, VI | | |
C11 Product cost | I, I, VI | [M, H] | H, VH, VH | M, H, M
C12 Maintenance cost | I, M, I | [L, M] | H, VH, VH | M, M, H
C14 Labor cost | M, I, M | [VL, H] | VL, VL, L | VL, L, L
C15 Material cost | I, I, I | [M, H] | H, VH, H | M, H, M
C16 Operating cost | I, VI, I | [L, H] | H, VH, VH | H, VH, H
C17 Breakdown cost | VI, VI, VI | [L, H] | H, M, H | VH, H, VH
C2. Performance | M, I, M | | |
C21. Compatibility with existing storehouse | I, I, VI | [M, VH] | H, VH, H | M, M, H
C22. Productivity | I, VI, I | [M, VH] | VH, H, VH | H, M, H
C23. Utilization | M, U, M | [L, H] | H, VH, VH | H, H, M
C24. Rate of breakdown | M, M, M | [VL, L] | L, L, VL | VL, L, L
C25. Human integration | M, I, U | [M, M] | M, M, H | VH, VH, H
C3. Quality | M, I, M | | |
C31. Service Time | VI, I, I | [L, H] | VH, H, VH | VH, H, H
C32. Consistency | I, I, I | [M, VH] | H, VH, VH | H, H, H
C4. Flexibility | M, M, I | | |
C41. Design change accommodation | U, U, M | [M, H] | H, H, M | L, M, L
C42. Change in product mix | VU, U, U | [M, H] | L, L, M | M, L, L
C43. Market responsiveness | I, I, M | [M, H] | H, VH, H | M, M, H
C44. Capacity | M, I, M | [L, H] | VH, H, VH | M, H, H
C5. Innovativeness | M, M, I | | |
C51. Research and development | M, I, I | [L, H] | M, H, M | H, VH, H
C52. Introduce product variation | M, M, M | [L, H] | L, L, M | M, M, L

Table 5. The relationship between trapezoidal fuzzy numbers and degrees of linguistic importance in a five-linguistic-term scale

Degrees of importance | Low/High levels | Trapezoidal fuzzy numbers
VU (Very Unimportant) | VL (Very Low) | (0, 0, 0, 0.3)
U (Unimportant) | L (Low) | (0.2, 0.3, 0.4, 0.5)
M (Middle) | M (Middle) | (0.4, 0.5, 0.6, 0.7)
HI (High Important) | H (High) | (0.6, 0.7, 0.8, 0.9)
VI (Very Important) | VH (Very High) | (0.8, 1, 1, 1)


There are five main categories of AS/RS. These are: (i) unit load AS/RS; (ii) mini load AS/RS; (iii) deep-lane AS/RS; (iv) man-on-board AS/RS; and (v) automated item retrieval systems. The unit load AS/RS is used to store and retrieve loads that are palletized or stored in standard-sized containers. The mini load AS/RS is designed to handle small loads such as individual parts, tools, and supplies that are contained in bins or drawers in the storage system. A mini load AS/RS is generally smaller than a unit load AS/RS and allows more material to be stored in less space. The deep-lane AS/RS is a high-density unit load storage system that is appropriate for storing large quantities of stock. The man-on-board AS/RS allows storage of items in less than unit load quantities; the system permits individual items to be picked directly at their storage locations. The automated item retrieval system is designed for the retrieval of individual items or small product cartons.

In our case study, one of the biggest logistics firms in Turkey plans to install an AS/RS in its warehouse. The firm will make a selection between a deep-lane AS/RS and a unit load AS/RS by using the Choquet integral methodology.

Step 1: The criteria given in Table 3 are selected from the literature to evaluate the alternatives (Chan et al., 1999). Three experts assign their linguistic preferences as seen in Table 4.

Table 6. Aggregated decision matrix belonging to the group which consists of three experts Criteria

Individual Importance

Tolerance Zone

Deep Line AS/RS

Unit Load AS/RS

C1

(0.733,0.9,0.933,0.967)

C11

(0.667,0.8,0.867,0.933)

(0.4,0.5,0.8,0.9)

(0.733,0.9,0.933,0.967)

(0.467,0.567,0.667,0.767)

C12

(0.533,0.667,0.733,0.833)

(0.2,0.3,0.6,0.7)

(0.733,0.9,0.933,0.967)

(0.467,0.567,0.667,0.767)

C13

(0.467,0.633,0.667,0.767)

(0,0,0.8,0.9)

(0.067,0.1,0.133,0.367)

(0.133,0.2,0.267,0.433)

C14

(0.6,0.7,0.8,0.9)

(0.4,0.5,0.8,0.9)

(0.667,0.8,0.867,0.933)

(0.467,0.567,0.667,0.767)

C15

(0.667,0.8,0.867,0.933)

(0.2,0.3,0.6,0.7)

(0.733,0.9,0.933,0.967)

(0.667,0.8,0.867,0.933)

C16

(0.8,1,1,1)

(0.2,0.3,0.6,0.7)

(0.533,0.633,0.733,0.833)

(0.733,0.9,0.933,0.967)

C2

(0.467,0.567,0.667,0.767)

C21

(0.667,0.8,0.867,0.933)

(0.4,0.5,1,1)

(0.667,0.8,0.867,0.933)

(0.467,0.633,0.667,0.767)

C22

(0.667,0.8,0.867,0.933)

(0.4,0.5,1,1)

(0.733,0.9,0.933,0.967)

(0.533,0.667,0.733,0.833)

C23

(0.267,0.4,0.4,0.567)

(0.2,0.3,0.8,0.9)

(0.733,0.9,0.933,0.967)

(0.533,0.667,0.733,0.833)

C24

(0.4,0.6,0.6,0.7)

(0,0,0.4,0.5)

(0.133,0.267,0.267,0.433)

(0.133,0.267,0.267,0.433)

C25

(0.333,0.433,0.467,0.633)

(0.4,0.5,0.6,0.7)

(0.467,0.633,0.667,0.767)

(0.733,0.9,0.933,0.967)

C3

(0.467,0.633,0.667,0.767)

C31

(0.667,0.8,0.867,0.933)

(0.2,0.3,0.8,0.9)

(0.733,0.9,0.933,0.967)

(0.667,0.8,0.867,0.933)

C32

(0.6,0.7,0.8,0.9)

(0.4,0.5,1,1)

(0.733,0.9,0.933,0.967)

(0.6,0.7,0.8,0.9)

C4

(0.467,0.633,0.667,0.767)

C41

(0.133,0.2,0.2,0.433)

(0.4,0.5,0.8,0.9)

(0.533,0.667,0.733,0.833)

(0.267,0.467,0.467,0.567)

C42

(0.067,0.133,0.133,0.367)

(0.4,0.5,0.8,0.9)

(0.267,0.467,0.467,0.567)

(0.267,0.467,0.467,0.567)

C43

(0.533,0.667,0.733,0.833)

(0.4,0.5,0.8,0.9)

(0.667,0.8,0.867,0.933)

(0.467,0.633,0.667,0.767)

C44

(0.467,0.633,0.667,0.767)

(0.2,0.3,0.8,0.9)

(0.733,0.9,0.933,0.967)

(0.533,0.667,0.733,0.833)

C5

(0.467,0.633,0.667,0.767)

C51

(0.533,0.667,0.733,0.833)

(0.2,0.3,0.8,0.9)

(0.467,0.633,0.667,0.767)

(0.667,0.8,0.867,0.933)

C52

(0.4,0.6,0.6,0.7)

(0.2,0.3,0.8,0.9)

(0.267,0.467,0.467,0.567)

(0.333,0.533,0.533,0.633)

1127

Group Decision Making for Advanced Manufacturing Technology Selection Using the Choquet Integral

Step 2: Trapezoidal fuzzy numbers given in Table 5 are used to quantify the linguistic terms in Table 4. Step 3: The aggregation of experts’ assessments obtained by Equation (7) is given in Table 6. Step 4: By using Equation (8), the evaluation values are normalized for each criterion. Step 5: At different α levels, the values of all criteria in the same dimension are aggregated by using Equation (9). To illustrate the calculation procedure in Steps 4 and 5,

an example is presented for deep line AS/RS alternative under innovativeness criterion. Using Equation (8), f , fi α =  f1−,0 , f1+,0    0.667, 0.93 − 0.2, 0.9 + 1,, 1       = 2 = 0.283, 0.783 and  f2−,0 , f2+,0  = [0.183, 0.683] are obtained. Their   corresponding degrees of importances are

Table 7. Defuzzified values of AS/RS alternatives Criteria

Deep Line AS/RS Deep Line AS/RS

Unit Load AS/RS

Unit Load AS/ RS

Rank1

Rank2

Overall Value

(0.468,0.631,0.811,0.882)

(0.458,0.634,0.812,0.882)

0.698

0.696

C1

(0.5,0.643,0.813,0.883)

(0.5,0.65,0.817,0.883)

0.71

0.713

C11

(0.417,0.55,0.717,0.783)

(0.284,0.383,0.583,0.683)

0.617

0.483

C12

(0.517,0.65,0.817,0.883)

(0.384,0.483,0.683,0.783)

0.717

0.583

C13

(0.084,0.15,0.567,0.683)

(0.117,0.2,0.633,0.717)

0.371

0.417

C14

(0.384,0.5,0.684,0.767)

(0.284,0.383,0.583,0.683)

0.583

0.483

C15

(0.517,0.65,0.817,0.883)

(0.484,0.6,0.783,0.867)

0.717

0.683

C16

(0.417,0.517,0.717,0.817)

(0.517,0.65,0.817,0.883)

0.617

0.717

C2

(0.379,0.506,0.753,0.839)

(0.366,0.513,0.687,0.792)

0.619

0.59

C21

(0.334,0.4,0.684,0.767)

(0.234,0.317,0.583,0.683)

0.546

0.454

C22

(0.367,0.45,0.717,0.783)

(0.267,0.333,0.617,0.717)

0.579

0.483

C23

(0.417,0.55,0.817,0.883)

(0.317,0.433,0.717,0.817)

0.667

0.571

C24

(0.317,0.434,0.634,0.717)

(0.317,0.433,0.633,0.717)

0.525

0.525

C25

(0.384,0.517,0.584,0.683)

(0.517,0.65,0.717,0.783)

0.542

0.667

C3

(0.397,0.52,0.797,0.873)

(0.35,0.455,0.757,0.855)

0.647

0.604

C31

(0.417,0.55,0.817,0.883)

(0.383,0.5,0.783,0.867)

0.667

0.633

C32

(0.367,0.45,0.717,0.783)

(0.3,0.35,0.65,0.75)

0.579

0.513

C4

(0.386,0.525,0.767,0.853)

(0.287,0.422,0.668,0.783)

0.633

0.54

C41

(0.317,0.434,0.617,0.717)

(0.184,0.333,0.483,0.583)

0.521

0.396

C42

(0.184,0.334,0.484,0.583)

(0.184,0.333,0.483,0.583)

0.396

0.396

C43

(0.384,0.5,0.684,0.767)

(0.284,0.417,0.583,0.683)

0.583

0.492

C44

(0.417,0.55,0.817,0.883)

(0.317,0.433,0.717,0.817)

0.667

0.571

C5

(0.233,0.383,0.643,0.753)

(0.283,0.447,0.717,0.822)

0.5

0.567

C51

(0.283,0.417,0.683,0.783)

(0.383,0.5,0.783,0.867)

0.542

0.633

C52

(0.183,0.333,0.583,0.683)

(0.217,0.367,0.617,0.717)

0.446

0.479

1128

Group Decision Making for Advanced Manufacturing Technology Selection Using the Choquet Integral

g10 = 0.533, 0.833 , g 2 0 =  0.4, 0.6 , respectively. First, the sequence fi,0− is sorted, where i=1 and 2, as follows: f2−,0 < f1−,0 . The corresponding degrees of importances are g1,0 = 0.4 and g 2,0 = 0.533, respectively. By solving the equa 1  2 tion 0 = ∏ [1 + λgi ] − 1 , λ=0.313 is ob λ  i =1 tained. Then, their fuzzy measures are derived from as follow; g(A(2)) = g2 = 0.533 g(A(1)) = g1 + g(A(2)) + λ g1 g(A(2)) = 1 The aggregated Choquet integral values for the main criterion C5 are calculated as   = 0.237, 0.753 (C )∫ fdg   Step 6: Similar to Steps 4 and 5, the overall values are obtained for AS/RS systems, as shown in Table 7. Step 7: From Table 7, the defuzzified overall values of AS/RSs using generalized Chouqet Integral are obtained as 0.710 and 0.713. Hence Unit Load AS/RS is the best alternative for the firm. Step 8: Weak and advantageous criteria for alternatives are clarified in Table 7 by normal and bold characters, respectively. The numbers in bold characters in Table 7 represent that the alternative has more advantage than the other for that criterion.

CONCLUSION In this chapter, a multiple criteria group decisionmaking problem under a fuzzy environment has been discussed. To illustrate the chapter, a selection problem between AS/RSs has been taken into consideration by using the Choquet integral.

The characteristic feature of the problem is that it includes both objective and subjective and conflicting criteria. Hence, three experts’ expertise is used in order to cope with the complexity of the problem. To satisfy the group decision, the arithmetic mean method has been used. The Choquet integral has provided an excellent tool for the solution of our advanced manufacturing technology selection problem including conflicting criteria. For future research, another aggregation method from the literature may be used in order to compare the results and put forward the consistency of the group decision.

REFERENCES Abdel-Kader, M. G., & Dugdale, D. (2001). Evaluating investments in advanced manufacturing technology: A fuzzy set theory approach. The British Accounting Review, 33, 455–489. doi:10.1006/bare.2001.0177 Albayrakoglu, M. (1996). Justification of new manufacturing technology: A strategic approach using the analytic hierarchy process. Production and Inventory Management Journal, 37(1), 71–76. Aravindan, P., & Punniyamoorthy, M. (2002). Justification of advanced manufacturing technologies (AMT). International Journal of Advanced Manufacturing Technology, 19, 151–156. doi:10.1007/s001700200008 Arvanitis, S., & Hollenstein, H. (2001). The determinants of the adoption of advanced manufacturing technology. Economics of Innovation and New Technology, 10(5), 377–414. doi:10.1080/10438590100000015 Auephanwiriyakul, S., Keller, J. M., & Gader, P. D. (2002). Generalized Choquet fuzzy integral fusion. Information Fusion, 3, 69–85. doi:10.1016/ S1566-2535(01)00054-9

1129

Group Decision Making for Advanced Manufacturing Technology Selection Using the Choquet Integral

Berrah, L., Mauris, G., Montmain, J., & Cliville, V. (2008a). Efficacy and efficiency indexes for a multi-criteria industrial performance synthesized by Choquet integral aggregation. International Journal of Computer Integrated Manufacturing, 21(4), 415–425. doi:10.1080/09511920701574255 Berrah, L., Mauris, G., Montmain, J., & Cliville, V. (2008b). Monitoring the improvement of an overall industrial performance based on a Choquet integral aggregation. Omega-International Journal of Management Science, 36(3), 340–351. doi:10.1016/j.omega.2006.02.009 Bokhorst, J. A. C., Slomp, J., & Suresh, N. C. (2002). An integrated model for part operation allocation and investments in CNC technology. International Journal of Production Economics, 75(3), 267–285. doi:10.1016/S0925-5273(01)00118-9 Chan, F. T. S., Chan, H. K., Chan, M. H., & Humphreys, P. K. (2006). An integrated fuzzy approach for the selection of manufacturing technologies. International Journal of Advanced Manufacturing Technology, 27(7–8), 747–758. doi:10.1007/ s00170-004-2246-9 Chan, F. T. S., Chan, M. H., Lau, H., & Ip, R. W. L. (2001). Investment appraisal techniques for advanced manufacturing technology (AMT): A literature review. Integrated Manufacturing Systems, 12(1), 35–47. doi:10.1108/09576060110361528 Chan, F. T. S., Chan, M. H., & Mak, K. L. (1999). An integrated approach to investment appraisal for advanced manufacturing technology. Human Factors and Ergonomics in Manufacturing, 9(1), 69–86. doi:10.1002/(SICI)15206564(199924)9:13.0.CO;2-1 Chang, T. H., & Wang, T. C. (2009). Measuring the success possibility of implementing advanced manufacturing technology by utilizing the consistent fuzzy preference relations. Expert Systems with Applications, 36, 4313–4320. doi:10.1016/j. eswa.2008.03.019

1130

Chen, I. J., & Small, M. H. (1994). Implementing manufacturing technology: An integrated planning model. Omega, 22(1), 91–103. doi:10.1016/03050483(94)90010-8 Chuu, S. J. (2009a). Selecting the advanced manufacturing technology using fuzzy multiple attributes group decision making with multiple fuzzy information. Computers & Industrial Engineering, 57(3), 1033–1042. doi:10.1016/j. cie.2009.04.011 Chuu, S. J. (2009b). Group decision-making model using fuzzy multiple attributes analysis for the evaluation of advanced manufacturing technology. Fuzzy Sets and Systems, 160, 586–602. doi:10.1016/j.fss.2008.07.015 Costa, S. E. G., & Lima, E. P. (2009). Advanced manufacturing technology adoption: An integrated approach. Journal of Manufacturing Technology Management, 20(1), 74–96. doi:10.1108/17410380910925415 D’ıaz, M. S., Machuca, J. A. D., & Álvarez-Gil, M. J. (2003). A view of developing patterns of investment in AMT through empirical taxonomies: New evidence. Journal of Operations Management, 21(5), 577–606. doi:10.1016/j.jom.2003.03.002 Das, A., & Jayaram, J. (2003). Relative importance of contingency variables for advanced manufacturing technology. International Journal of Production Research, 41(18), 4429–4452. doi:1 0.1080/00207540310001595819 Datta, V., Sambasivarao, K. V., Kodali, R., & Deshmukh, S. G. (1992). Multi-attribute decision model using the analytic hierarchy process for justification of manufacturing systems. International Journal of Production Economics, 28(2), 227–234. doi:10.1016/0925-5273(92)90035-6

Group Decision Making for Advanced Manufacturing Technology Selection Using the Choquet Integral

Demmel, J. G., & Askin, R. G. (1992). A multiple-objective decision model for the evaluation of advanced manufacturing system technologies. Journal of Manufacturing Systems, 11(3), 179–194. doi:10.1016/0278-6125(92)90004-Y

Karsak, E., & Tolga, E. (2001). Fuzzy multicriteria decision-making procedure for evaluating advanced manufacturing system investments. International Journal of Production Economics, 69, 49–64. doi:10.1016/S0925-5273(00)00081-5

Efstathiades, A., Tassou, S., & Antoniou, A. (2002). Strategic planning, transfer and implementation of advanced manufacturing technologies (AMT): Development of an integrated process plan. Technovation, 22, 201–212. doi:10.1016/ S0166-4972(01)00024-4

Karsak, E. E. (2005). Choquet integral-based decision making approach for robot selection. Knowledge-Based Intelligent Information and Engineering Systems, 635-641.

Grabisch, M., Kojadinovic, I., & Meyer, P. (2007). A review of methods for capacity identification in Choquet integral based multi-attribute utility theory applications of the Kappalab R package. European Journal of Operational Research, 186(2), 766–785. doi:10.1016/j.ejor.2007.02.025 Gupta, A., & Whitehouse, F. R. (2001). Firms using advanced manufacturing technology management: An empirical analysis based on size. Integrated Manufacturing Systems, 12(5), 346–350. doi:10.1108/09576060110398500 Herrera-Viedma, E., Chiclana, F., Herrera, F., & Alonso, S. (2007). Group decision-making model with incomplete fuzzy preference relations based on additive consistency. IEEE Transactions on Systems, Man, and Cybernetics. Part B, Cybernetics, 37(1), 176–189. doi:10.1109/ TSMCB.2006.875872 Ishii, K., & Sugeno, M. (1985). A model of human evaluation process using fuzzy integral. International Journal of Man-Machine Studies, 22(1), 19–38. doi:10.1016/S0020-7373(85)80075-4 Jahanshahloo, G. R., Lotfi, F. H., & Izadikhah, M. (2006). Extension of the TOPSIS method for decision-making problems with fuzzy data. Applied Mathematics and Computation, 181, 1544–1551. doi:10.1016/j.amc.2006.02.057

Kaya, İ., & Çınar, D. (2008). Facility location selection using a fuzzy outranking method. Journal of Multiple-Valued Logic and Soft Computing, 14, 251–263. Kengpol, A., & O’Brien, C. (2001). The development of a decision support tool for the selection of advanced technology to achieve rapid product development. International Journal of Production Economics, 69, 177–191. doi:10.1016/S09255273(00)00016-5 Kojadinovic, I. (2004). Unsupervised aggregation by Choquet integral based on entropy functionals: Application to the evaluation of students. Modeling Decisions for Artificial Intelligence, 3131, 163–174. Kotha, S., & Swamidass, P. M. (1998). Advanced manufacturing technology use: Exploring the effect of the nationality variable. International Journal of Production Research, 36(11), 3135–3146. doi:10.1080/002075498192337 Labreuche, C., & Grabisch, M. (2003). Choquet integral for the aggregation of interval scales in multicriteria decision making. Fuzzy Sets and Systems, 137(1), 11–26. doi:10.1016/S01650114(02)00429-3 Lu, J., Zhang, G., Ruan, D., & Wu, F. (2007). Multi-objective group decision making methods: Software and applications with fuzzy set techniques. Singapore: Imperial Collage Press.

1131

Group Decision Making for Advanced Manufacturing Technology Selection Using the Choquet Integral

MacDougall, S. L., & Pike, R. H. (2003). Consider your options: Changes to strategic value during implementation of advanced manufacturing technology. Omega, 31(1), 1–15. doi:10.1016/ S0305-0483(02)00061-0

Onisawa, T., Sugeno, M., Nishiwaki, M. Y., Kawai, H., & Harima, Y. (1986). Fuzzy measure analysis of public attitude towards the use of nuclear energy. Fuzzy Sets and Systems, 20, 259–289. doi:10.1016/S0165-0114(86)90040-0

Modave, F., & Grabisch, M. (1998). Preference representation by Choquet integral: The commensurability hypothesis. Proc. 7th Int. Conf. on Information Processing and Management of Uncertainty in Knowledge-Based Systems (IPMU), 1-8.

Ordoobadi, S. M., & Mulvaney, N. J. (2001). Development of a justification tool for advanced manufacturing technologies: System-wide benefits value analysis. Journal of Engineering and Technology Management, 18, 157–184. doi:10.1016/S0923-4748(01)00033-9

Mohanty, R. P., & Deshmukh, S. G. (1998). Advanced manufacturing technology selection: A strategic model for learning and evaluation. International Journal of Production Economics, 55(3), 295–307. doi:10.1016/S0925-5273(98)00075-9

Orr, S. (2002). A comparison of AMT strategies in the USA, South Africa and Germany. International Journal of Manufacturing Technology Management, 4(6), 441–454. doi:10.1504/ IJMTM.2002.002517

Moon, H., Conlon, D. E., Humphrey, S. E., Quigley, N., Devers, C. E., & Nowakowski, J. M. (2003). Group decision process and incrementalism in organizational decision making. Organizational Behavior and Human Decision Processes, 92(1-2), 67–79. doi:10.1016/S07495978(03)00079-7

Park, B. C. (2001). An optimal dwell point policy for automated storage/retrieval systems with uniformly distributed, rectangular racks. International Journal of Production Research, 39(7), 1469–1480. doi:10.1080/00207540010023583

Narukawa, Y., & Murofushi, T. (2008). ChoquetStieltjes integral as a tool for decision modeling. International Journal of Intelligent Systems, 23(2), 115–127. doi:10.1002/int.20260 O’Kane, J. F., Spenceley, J. R., & Taylor, R. (2000). Simulation as an essential tool for advanced manufacturing technology problems. Journal of Materials Processing Technology, 107, 412–424. doi:10.1016/S0924-0136(00)00689-0 Oeltjenbruns, H., Kolarik, W. J., & Schnadt– Kirschner, R. (1995). Strategic planning in manufacturing systems: AHP application to an equipment replacement decision. International Journal of Production Economics, 38, 189–197. doi:10.1016/0925-5273(94)00092-O

1132

Punniyamoorthy, M., & Ragavan, P. V. (2003). A strategic decision model for the justification of technology selection. International Journal of Advanced Manufacturing Technology, 21(1), 72–78. doi:10.1007/s001700300008 Saad, I., & Hammadi, S. (2008). Choquet integral for criteria aggregation in the flexible job-shop scheduling problems. Mathematics and Computers in Simulation, 76(5-6), 447–462. doi:10.1016/j. matcom.2007.04.010 Sambasivarao, K. V., & Deshmukh, S. G. (1997). A decision support system for selection and justification of advanced manufacturing technologies. Production Planning and Control, 8(3), 270–284. doi:10.1080/095372897235325

Group Decision Making for Advanced Manufacturing Technology Selection Using the Choquet Integral

Schmitt, E., Bombardier, V., & Wendling, L. (2008). Improving fuzzy rule classifier by extracting suitable features from capacities with respect to Choquet integral. IEEE Transactions on Systems, Man, and Cybernetics. Part B, Cybernetics, 38(5), 1195–1206. doi:10.1109/TSMCB.2008.925750 Sener, Z., & Karsak, E. E. (2007). A decision model for advanced manufacturing technology selection using fuzzy regression and fuzzy optimization. IEEE International Conference on Systems, Man and Cybernetics, 565-569. Sener, Z., & Karsak, E. E. (2008). A decision making approach based on fuzzy regression and fuzzy multiple objective programming for advanced manufacturing technology selection. IEEE International Conference Industrial Engineering and Engineering Management, 964-968. Shang, J., & Sueyoshi, T. (1995). A unified framework for the selection of a flexible manufacturing system. European Journal of Operational Research, 85, 297–315. doi:10.1016/03772217(94)00041-A Shieh, J. I., Wu, H. H., & Liu, H. C. (2009). Applying a complexity-based Choquet integral to evaluate students’ performance. Expert Systems with Applications, 36(3), 5100–5106. doi:10.1016/j. eswa.2008.06.003 Small, M. H., & Chen, I. J. (1997). Economic and strategic justification of AMT inferences from industrial practices. International Journal of Production Economics, 49, 65–75. doi:10.1016/ S0925-5273(96)00120-X Sugeno, M. (1974). Theory of fuzzy integrals and its applications. Unpublished doctoral dissertation. Tokyo Institute of Technology, Japan. Talluri, S., Whiteside, M. M., & Seipel, S. J. (2000). A nonparametric stochastic procedure for FMS evaluation. European Journal of Operational Research, 124(3), 529–538. doi:10.1016/S03772217(99)00188-5

Talluri, S., & Yoon, K. P. (2002). A cone-ratio DEA approach for AMT justification. International Journal of Production Economics, 66(2), 119–129. doi:10.1016/S0925-5273(99)00123-1 Thakur, L. S., & Jain, V. K. (2008). Advanced manufacturing techniques and information technology adoption in India: A current perspective and some comparisons. International Journal of Advanced Manufacturing Technology, 36, 618–631. doi:10.1007/s00170-006-0852-4 Tsai, H. H., & Lu, I. Y. (2006). The evaluation of service quality using generalized Choquet integral. Information Sciences, 176(6), 640–663. doi:10.1016/j.ins.2005.01.015 Verter, V., & Dasci, A. (2002). The plant location and flexible technology acquisition problem. European Journal of Operational Research, 136(2), 366–382. doi:10.1016/S0377-2217(01)00023-6 Weber, S. F. (1993). A modified analytic hierarchy process for automated manufacturing decisions. Interfaces, 23(4), 75–84. doi:10.1287/inte.23.4.75 Yurdakul, M. (2004). Selection of computerintegrated manufacturing technologies using a combined analysis hierarchy process and goal programming model. Robotics and Computer-integrated Manufacturing, 20, 329–340. doi:10.1016/j.rcim.2003.11.002 Yusuff, R. M., Yee, K. P., & Hashmi, M. S. J. (2001). A preliminary study on the potential use of the analytical hierarchical process (AHP) to predict advanced manufacturing technology (AMT) implementation. Robotics and Computer-integrated Manufacturing, 17, 421–427. doi:10.1016/S0736-5845(01)00016-3 Zimmermann, H. J. (1991). Fuzzy set theory and its applications. Dordrecht, Germany: Kluwer Academic.

1133

Group Decision Making for Advanced Manufacturing Technology Selection Using the Choquet Integral

KEY TERMS AND DEFINITIONS Advanced Manufacturing Technology (AMT): A modern method of production incorporating highly automated and sophisticated computerized design and operational systems. Automated Storage and Retrieval System (AS/RS): Consists of a variety of computercontrolled methods for automatically placing and retrieving loads from specific storage locations. Choquet Integral: A generalization of the Lebesgue integral. Choquet integral is a way of measuring the expected utility of an uncertain event.

Decision Making: Can be regarded as an outcome of mental processes leading to the selection of a course of action among several alternatives. Fuzzy Logic: A form of multi-valued logic derived from fuzzy set theory to deal with reasoning that is approximate rather than precise. Fuzzy Sets: Are sets whose elements have degrees of membership and are extension of the classical notion of set. Group Decision Making: Is decision making in groups consisting of multiple members. Multi-Criteria Decision Making (MCDM): A discipline aimed at supporting decision makers who are faced with making numerous and conflicting evaluations.

This work was previously published in Technologies for Supporting Reasoning Communities and Collaborative Decision Making: Cooperative Approaches, edited by John Yearwood and Andrew Stranieri, pp. 193-212, copyright 2011 by Information Science Reference (an imprint of IGI Global).

1134

1135

Chapter 61

Operator Assignment Decisions in a Highly Dynamic Cellular Environment Gürsel A. Süer Ohio University, USA Omar Alhawari Royal Hashemite Court, Jordan

ABSTRACT Operators are assigned to operations in labor-intensive manufacturing cells using two assignment strategies: Max-Min and Max. The major concern is to see how these two approaches impact operators’ skill levels and makespan values in a multi-period environment. The impact is discussed under chaotic environment where sudden changes in product mix with different operation times are applied, and also under non-chaotic environment where same product mix is run period after period. In this chapter, operators’ skill levels are affected by learning and forgetting rates. The Max-Min strategy improved operators’ skill levels more significantly than Max in this multi-period study; particularly in chaotic environment. This eventually led to improved makespan values under Max-Min strategy.

INTRODUCTION Cellular manufacturing is considered as a collection of manufacturing cells that is dedicated to manufacture part families or assembly cells that are dedicated to process product families (see Askin & Standridge, 1993). The cellular manufacturing systems can be either machine-intensive or laborDOI: 10.4018/978-1-4666-1945-6.ch061

intensive. In labor-intensive cells, it is easier to reconfigure cells when a product is ready to be processed. Moreover, moving equipment is much easier than it is in machine-intensive cells. Basically, in labor-intensive cells, most of the operations require light-weight, and small machines as well as equipment that require continuous operator attendance and involvement (Süer & Tummaluri, 2008). Labor-intensive manufacturing cells have been observed in apparel, jewelry manufactur-

Copyright © 2013, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.

Operator Assignment Decisions in a Highly Dynamic Cellular Environment

ing, electromechanical assembly, sewing, shoe manufacturing, medical devices, and car seat manufacturing industries. The operator’s role in machine-intensive cells is limited due to the presence of automatic machines. On the other hand, the operator has a key role in labor-intensive cells, and the number of operators and their assignment to operations has a great impact on the cell’s production rate. In some cases, the number of operations is less than the number of operators. This creates the possibility that multiple operators are assigned to perform the same operation. It is important to control operator assignments; however, when the number of cells and the number of operators increase, keeping track of operator assignment becomes difficult. In this chapter, concepts such as learning and forgetting rates are discussed to show how operator skill level varies from time to time; thus, the assignment decision is affected. Forgetting and learning rates affect the operator’s skills and they are affected by their current skills. Learning takes place when the operator performs an operation continuously for a period of time, consequently, the operator will be more familiar with performing an operation. On the other hand, forgetting happens when the operator does not perform an operation in a number of consecutive periods. This chapter addresses both operator assignment and cell loading decisions. Operator assignment determines which operators are assigned to perform each task and cell loading identifies the products to be run in each cell. The work undertaken in this chapter is an extension of work by Süer and Tummaluri (2008). The operator assignment can be made by using two different strategies; 1) Max, 2) Max-Min. Max considers only the current state of the operator skills for operator assignment to maximize output rate. On the other hand, Max-Min considers long-term effect of assignment decisions and attempts to develop more homogeneous work force without sacrificing output rate. This homogeneous work force may be more effective in dealing with

1136

drastic variations in demand and product mix in the long-term. The objective of this chapter is to propose better mathematical models for operator assignment and also compare the performance of two major strategies, Max and Max-Min, in highly dynamic cellular environments. The main hypothesis is that Max-Min is a better strategy in operator assignment in the long-run. We want to show that long-term planning may help companies to better prepare their workforce for long-term operation than short-sided approach where only the immediate periods are considered. This approach is especially important in highly fluctuating demand environments and also in companies where product mix can quickly change. It is easier to implement such a strategy in companies where workforce is stable with low turnaround rate.

BACKGROUND In the literature, some researchers addressed areas related to this subject such as cell loading, operator assignment, skills, learning and forgetting rate and product sequencing. Süer (1996) discussed, in his paper, the subject of optimal operator assignment and cell loading in labor-intensive manufacturing cells. He stated that the operator assignment to cells influences production rate that each cell can produce. He proposed a two-phase methodology. In phase 1, he generated operator assignments for alternative manpower levels by using a mixed integer mathematical model. In phase 2, he found the optimal manpower levels for each cell and optimal product assignment to cells. Nembhard (2001) discussed a heuristic approach for assigning workers to task based on individual learning rate. Basically, he ran experiments based on two conditions: a long production run and a short production run. Results were interpreted and showed that the heuristic approach have an impact on overall productivity. Best results were found when workers learn more gradually.

Operator Assignment Decisions in a Highly Dynamic Cellular Environment

Nembhard and Osothslip (2002) discussed, in their paper, the operator behavior in terms of learning and forgetting rates; particularly, in the case of performing complex tasks. A study was conducted at a textile manufacturing plant, where different manual sewing tasks were available. Data was collected by studying the behavior of each worker over a period of one year. They used a model of individual learning and forgetting rate, which was introduced first by Nembhard and Uzumeri (2000) in order to measure the productivity rate. This model was applied to each operator, and operator learning and forgetting parameters were considered as dependent variables, whereas task complexity was considered as independent variable. Results were captured and then statistical analysis was done to find if there is a relationship between the variability of learning and forgetting rates with task complexity. Results indicated task complexity significantly affects the variance of worker learning and forgetting rates. For higher task complexities, workers are more varied in their learning and forgetting rate than they are at lower task complexities. The impact of task complexities on worker learning and forgetting affects worker assignment and productivity. Slomp et al. (2005) discussed cross-training decisions in a cellular manufacturing environment. They wanted to minimize the load of the bottleneck worker. In their study, they presented an integer programming model to calculate which workers have to be trained for which machines. Based on this model, they discussed the trade-off between the operating costs of the manufacturing cell, the costs of cross-training, and the workload among workers, they showed that the connection between workers and machines is really important to form chaining and this produces an efficient cross-training situation. In this case, workload can be shifted from heavier loaded worker to less loaded worker. Labor flexibility is needed in these environments. Unbalanced load may give feelings of unfairness in a team. Bidanda et al. (2003) presented the importance of focusing not

only on technical issues, but also human issues in cellular manufacturing environments. Technical issues include cell formation and cell design, whereas human issues involve such as worker assignment strategies, skill identification, training, communication, autonomy, reward system, conflict management, and teamwork. They conducted a survey to show the importance of human issues in cellular manufacturing. The number of participants in the survey was 40, and consists of workers, managers and academicians. They were asked to rank the human issues. Their response was analyzed. The results showed that three major human issues in cellular manufacturing are communication, teamwork and training. The degree of autonomy was found the least important among all. The reward system was in the middle. The assignment strategies were found significant among academicians. The skill identification was found significant among managers and academicians whereas conflict management is significant among workers. Shirase et al. (2001) developed a system of distributed production system which consists of some cell groups. They discussed a dynamic operator assignment method. In this cooperative method, they considered that whenever any cell in a group of cells is unable to meet the due date of certain part, it has the option to ask for one operator as a support from other groups. Eventually, cooperation is taking place between cell groups until all due dates are met. They generalized the idea in which some disturbances in a production system can be treated by cooperation between subsystems. Fan and Gassmann (1997) discussed allocation functions between worker and machines could influence the performance of manufacturing cells over a long period of time considered as 15 months. They concluded that skills development and knowledge are really important for keeping long-term competitiveness. Allocation of functions has an impact on the long term performance, in which the long period will give more vision to make the work smooth through absorbing

1137

Operator Assignment Decisions in a Highly Dynamic Cellular Environment

the complexity of the nature of manufacturing environment. Wirojanagud et al. (2005) discussed a strategic way to model worker differences for workforce planning. Impact of individual differences on management decisions is considered and then discussed. A problem of job shop environment has been formulated in a form of a mixed integer programming model. The major concern was to identify number of workers who will be hired, fired and cross-trained at a minimum cost. Experiments are run and then results are analyzed. Workers differences are playing a major role in making decisions in manufacturing system environment. Suksawat et al. (2005) discussed the concept of evaluating the skill levels of workers. They developed a skilled worker–based scheduling method based on the skill evaluation and genetic algorithm application. They focused on the objective of improving the production rate by considering workers’ skill levels. Süer and Dagli (2005) discussed manpower decisions in their paper. They considered two issues in their paper; product sequencing in a cell and cell loading. For the first issue, they target to minimize intra-cell manpower transfers. A three-phase methodology is proposed, in which optimal manpower level for each operation is found by using a mathematical model, a matrix for manpower transfers between products is formed, and traveling salesman problem is solved. For the second issue, they aim to find the optimal assignment of products to cells. Their objective is to minimize makespan and number of machines. Cesani and Steudel (2005) presented a research concerning labor assignment strategies, and their impact on the cell performance in cellular manufacturing environment. The term labor flexibility was discussed and referred to the movement of operators between cells and inside the same cell. Labor assignments, such as dedicated assignment, in which an operator is assigned to one or more machines, shared assignment in which two operators or more are assigned to one or more machines, and combined in which an operator is assigned as

1138

dedicated and shared together. They made their discussions based on workload sharing, workload balancing and bottleneck operations. Experiments and simulation models were implemented and discussed. They concluded based on results, that the balance in the operators’ workload and the level of machine sharing are important factors to determine cell performance and behavior. They also referred to the importance of cross-training issue in improving cell performance. Mahdavia et al. (2010) developed an integer mathematical model to design cellular manufacturing systems. They consider a dynamic environment as well. Their model deals with worker assignment as well as dynamic configuration of the cellular system. The overall objective is to minimize the total cost of inventory holding and backorder costs, inter-cell material handling cost, machine and reconfiguration costs and hiring, firing and salary costs of workers. Süer, Arikan, Babayigit (2008) and Süer, Arikan, Babayigit (2009) developed fuzzy mathematical models for cell loading in labor-intensive cells subject to manpower restrictions. Süer, Cosner and Patten (2009) developed various mathematical models for cell loading and product sequencing in manufacturing cells. Süer, Subramanian and Huang (2009) developed several heuristic procedures and mathematical models for cell loading and product sequencing in a shoe manufacturing company. Süer and Tummaluri (2008) discussed the problem of operator assignment to operations in labor-intensive cells. The operators are assigned to operations in multi-period context considering their skill levels, forgetting and learning rates. They developed a three-phase hierarchical methodology to solve this problem. The first phase is generating alternative operator levels for each product using operation standard times. The second phase is determining cell loads and cell sizes using standard times. The third phase is assigning operators to operations. A mixed integer mathematical model is used in all phases. Two

Operator Assignment Decisions in a Highly Dynamic Cellular Environment

different strategies (Max and MaxMin) are proposed for solving the operator assignment in the third phase. The results showed that when using Max Strategy, lower makespan allocation values are obtained, whereas Max-Min improved the skill levels more regularly. The work undertaken in this paper is an extention of their work. They found that Max-Min is superior to Max Strategy in terms of improving operator skill levels; however, they could not show that Max-Min Strategy is better in minimizing makespan as a result of improved skill levels. Their work assumed a static cellular environment, in which there are no new products entering the system and no product is leaving the system (i.e. product mix remained the same throughout the study). They have classified labor skills into nine categories following normal distribution. Their work also showed that some non-bottleneck operations became bottleneck after assigning operators. The reason for that is that they have done initial operator assignment based on standard times. When they reflected the effect of skills on processing times, some non-bottleneck operations became bottleneck and adversely affected output rates. In some cases, operators had to be re-assigned to fix the problem.

PROBLEM STATEMENT This chapter introduces several improvements to work by Süer and Tummaluri (2008) where, 1) Max and Max-Min strategies are compared in a highly dynamic environment where product mix changes. i.e., new products enter the system and some products leave the system. It is believed that this will show the benefits of Max-Min strategy better in terms of minimizing makespan, 2) number of skill levels are reduced to seven. It is believed that this is more practical and realistic approach than having nine skill levels, 3) skill-level based processing times are used directly during operator assignment process. This helps to avoid re-

computing of bottleneck operation, output rate and re-assignment of operators.

Methodology In this section, the methodology used is described in detail.

General Methodology This study is carried out in a multi-period environment. The number of periods included in this study is 16 and each period represents a week. This allows us to see the impact of the strategies in the long-term. Based on the assignments made in the previous periods, an operator’s skill level may be adjusted. Figure 1 includes the multi-period methodology in which the proposed approach is implemented and the results are captured. These results may affect the operator skill levels; hence, they need to be revised each period. Once all periods are considered for the first strategy, then the same procedure is applied for the second strategy, and finally the results are compared.

Overview of Strategies The major phases of the Max strategy as shown in Figure 2a are: (1) find optimal operator assignment using an integer mathematical model. In this phase, the number of workers needed for each operation is determined for each product so that production rate is maximized with the available manpower level. (2) determine cell loads to minimize makespan by using an integer mathematical model. In this phase, decision about what cell to use to produce each product is made. (3) determine product sequence in each cell by using a simple scheduling rule, SPT (Shortest Processing Time). In this phase, the products in each cell are sequenced in the increasing order of processing time to minimize average flow time as well. The first three phases of the Max-Min Strategy is similar to the Max Strategy. However,

1139

Operator Assignment Decisions in a Highly Dynamic Cellular Environment

Figure 1. Multi-period methodology

two more phases are included in the Max-Min Strategy (see Figure 2b), and they are: (4) identify bottleneck and non-bottleneck operations for each product in the cell. The slowest operation (lowest output) for each product is identified as bottleneck operation. (5) re-assign low-skilled operators to non-bottleneck operations using the Min-skill principle. In this phase, the focus is on non-bottleneck operations and workers are assigned to operations where their skill level is not very high as long as it does not adversely affect the production rate of the cell. By doing this, we expect that operators with low skills will get a chance to perform these operations and eventually improve their skill levels.

Performance Measures Used The performance measures used for each task are summarized in Table 1. Production rate measures the number of units manufactured per unit time. Makespan is defined as the maximum completion time of all jobs (Equation 1) and flowtime measures how long a job remains in the system (Equation 2). MS=Cmax

(1)

fi=ci-ri

(2)

where ci

1140

completion time of job i

Operator Assignment Decisions in a Highly Dynamic Cellular Environment

Figure 2. The general overview of the strategies

Table 1. Performance measures used for each task Task

Max Strategy

Max-Min Strategy

Operator Assignment

Maximize Production Rate

Maximize Production Rate

Loading Cells

Minimize Makespan

Minimize Makespan

Product Sequencing

Minimize Average Flow Time

Minimize Average Flow Time

Re-assigning low-skilled operators

-----

Minimize Total Skills Without Violating Original Production Rate

fi ri

flowtime of job i ready time of job i

Skill Levels In this study, each operator is assumed to have a skill level for each operation he performs. These skill levels follow the normal distribution (as shown in Figure 3), in which µ represents the mean value and σ represents the standard deviation. The skill levels are divided into seven categories and their corresponding probabilities are shown in Table 2. Level 4 represents the average skill, level 7 represents the best and level 1 is the worst. This is an assumption used in this study.

Süer and Tummaluri (2008) also used a similar classification except they used 9 skills as opposed to 7 suggested here.

Operation Times Operation times are calculated according to the operator skill levels. The standard processing time for each operation is considered to be the average time, hence, the operator with skill level 4 is considered to have the average operation time. Other skills below or over will follow the normal distribution. Table 3 provides an example for different skills of an operation with a standard deviation of 5% of the mean. The σ for operation 1141

Operator Assignment Decisions in a Highly Dynamic Cellular Environment

Figure 3. The normal distribution curve for skill levels

Table 2. The skill levels and probability Skill Level

Time

Probability

1

µ +3σ

0.0062

2

µ+2 σ

0.0606

3

µ+σ

0.2417

4

µ

0.383

5

µ -σ

0.2417

6

µ-2 σ

0.0606

7

µ -3σ

0.0062

1 for product X1 is 0.0035 (=.07*.05). As the skill level decreases by 1 level (4 → 3), the operator skilled-based time increases to 0.0735 (=.07 + σ).

Learning and Forgetting Rates In this study, a skill level is affected by learning and forgetting rates. An operator’s skill level increases when he performs an operation for many consecutive periods. In the same manner, the length of interruption interval affects the skill levels adversely; hence, if an operator does not perform a certain operation for a number of consecutive periods, his skill level decreases. Table 4 shows the assumed number of required periods for improving or lower-

ing skill level, and it also shows the probability for that skill to change. The probability values follow the notion that an operator who has been performing an operation for a long time will become more experienced operator, and it will take longer for that skill to deteriorate. As shown in the same table, a skill level can be improved from 1 to 2 with a probability of 0.7, if an operator keeps performing the same operation for 3 periods consecutively. On the other hand, improving skill level from 6 to 7 requires an operator to perform the operation for 6 periods consecutively (with a probability of 0.3). Meanwhile, an operator’s skill level can deteriorate from 2 to 1 with a probability of 0.7, if he does not perform the operation for 4 periods consecutively. Similarly, if he does not perform the operation for 7 periods consecutively, his skill will decrease from 7 to 6 with a probability of 0.3. No empirical study has been done to validate these assumptions due to time restrictions. However, it is believed that these numbers reflect the relations between learning and forgetting rates of workers at different skill levels along with associated probabilities reasonably well. The operator skill matrix is revised on learning and forgetting rates at the end of every period.

Table 3. The operation times for different skill levels with σ = 5% of the mean Skills Mean

Std. Dev.

7

6

5

4

3

2

1

0.07

0.035

.0595

0.063

0.0665

0.07

0.0735

0.077

0.0805

1142

Operator Assignment Decisions in a Highly Dynamic Cellular Environment

Table 4. The learning and forgetting rates Learning Skill level From

Forgetting

Periods

Prob.

Skill level

To

From

Periods

Prob.

To

1

2

3

0.70

2

1

4

0.70

2

3

3

0.65

3

2

4

0.65

3

4

4

0.6

4

3

5

0.6

4

5

4

0.5

5

4

5

0.5

5

6

5

0.4

6

5

6

0.4

6

7

6

0.3

7

6

7

0.3

PROPOSED MATHEMATICAL MODELS

ykj∈(0,1)

In this section, the proposed models are introduced.

Max Strategy The Max Strategy uses the skill-based times to assign operators into operations such that maximum output is achieved. An integer mathematical model is used to assign operators. The mathematical model is formulated with the objective function of maximizing the production rate as shown in Equation (3). Equation (4) determines which operators have to be assigned to each operation. Equation (5) ensures that each operator is assigned to only one operation within a cell. Equation (6) shows that ykj is a binary variable. The mathematical model is given below: Objective Function: MaxZ=R

(3)

Subject to:

∑ (a k ∈ fj

∑y j ∈ fk

y ) − R ≥ 0 j=1,2,3,…,m

(4)

= 1 k=1,2,3,..,n

(5)

kj kj

kj

(6)

Indices: k Operator index j Operation index Parameters: akj number of units operator k can process if assigned to operation j m number of operations in the cell fk set of operations that operator k can perform set of operators who can perform opfj eration j n Number of operators Decision variables: R production rate ykj 1 if operator k is assigned to operation j, 0 otherwise This assignment model is run for each product by using ILOG OPL software.

Max-Min Strategy The Max-Min strategy also uses the skill-based times to find the optimal operator assignment to maximize the output. In this Strategy, first the slowest operation, bottleneck operation, is identified. The same integer mathematical model is used to assign operators and it is solved by using ILOG/OPL. However, another constraint is added

1143

Operator Assignment Decisions in a Highly Dynamic Cellular Environment

to determine production rate for each operation as given in Equation (7). Rj = ∑ (akj ykj ) j=1,2,3..m

(7)

k ∈ fj

Obviously, the lowest production rate operation is identified as the bottleneck operation as shown in Equation (8) Rb=min(Rj/j=1,2,3,…,m)

(8)

The Max-Min Strategy keeps the operators assigned to the bottleneck operation the same; however, it re-assigns other operators to nonbottleneck operations to minimize total skills such that the optimal output rate is maintained. A mathematical model is used where the objective function is to minimize the total skills for the remaining operators as given in equation (9). Equation (10) guarantees that the original production rate (optimal) is not violated. Equation (11) shows a constraint in which each operator is assigned to one operation within each cell. Equation (12) shows that ykj is a binary variable. The math model used is given below as: Objective function: MinZ = ∑ ∑ (skj ykj )

(9)

k ∈ fj j ∈ fk

k ∈ fj

y ) ≥ Rb j=1,2,3,….m

kj kj

Parameters: skj Skill level of operator k for operation j Rb Production rate of bottleneck operation The difference between these two strategies is illustrated in Figure 4 using a hypothetical case. Assume; that operation 2 is the bottleneck operation with operators 3 and 7 assigned to it as shown in Figure 4a and the output rate is 80 units/hr. The Max-Min Strategy shown in Figure 4b keeps the same operators in the bottleneck operation but reassigns other operators to minimize skills without sacrificing the optimal output rate.

Cell Loading and Product Sequencing Cell loading is the process of assigning products to cells. In this paper, a mathematical model is used to assign products to cells and the primary performance measure is to minimize makespan. Equation (13) shows the objective function, minimizing makespan. Equation (14) shows that the total processing time in each cell should be equal to or greater than the makespan. Equation (15) ensures that each product is assigned to a cell. Equation (16) shows the sign restriction. Objective function

Subject to:

∑ (a

where,

MinZ=MS (10)

Subject to:

(11)

MS − ∑ pij x ij ≥ 0 j=1,2,3,…,c

(12)



n

∑ j ∈ fk

ykj = 1 k=1,2,3,…n

ykj∈(0,1)

1144

(13)

(14)

i =1

c

j =1

x ij = 1 i=1,2,3,…,n

(15)

Operator Assignment Decisions in a Highly Dynamic Cellular Environment

Figure 4. Manpower assignment using both strategies

xij∈(0,1)

(16)

Where,

Indices: c Number of cells n Number of products Parameter: pij Processing time of product i in cell j Decision Variable: xij 1 if product i is assigned to cell j, 0 otherwise Product sequencing usually comes after cell loading in which products are arranged in such a way, that a selected performance measure is completed. In this chapter, average flow time is considered as a secondary measure and it is minimized by using the shortest processing time technique (SPT). SPT rule orders jobs in the increasing order of processing times.

DATA USED IN EXPERIMENTS In this section, the data used in experiments are given.

Standard Times The standard times for operations correspond to a skill level of 4. The standard times for product groups X and Y are randomly generated from a random uniform distribution in the intervals shown in Table 5. These two product groups are used to create the dynamic environment mentioned earlier.

Product Demand The product demand for all products is randomly generated from the uniform distribution in the interval of [2200, 8500] for all periods. Table 6 shows these demand values.

1145

Operator Assignment Decisions in a Highly Dynamic Cellular Environment

Table 5. Uniform distribution intervals for standard times for product groups X and Y Product

Operations Op.1

Op.2

Op.3

Op.4

Op.5

Op.6

X

[0.04-0.09]

[0.28-0.45]

[0.37-1.18]

[0.47-0.88]

[0.18-0.45]

[0.20-0.80]

Y

[1.10-1.30]

[1.20-1.40]

[0.09-0.15]

[0.04-0.09]

[1.00-1.20]

[0.04-0.08]

Table 6. Product demand and total demand for each period Period

Product Demand

Total Demand

1

2

3

4

5

6

7

8

9

10

1

3500

7500

3400

2700

2200

4000

4500

2200

2300

3000

35300

2

3500

7500

3700

2900

2200

4300

4600

2200

2500

3000

36400

3

3700

4200

3700

3000

2400

4300

4600

2400

2500

3100

37200

4

3100

6500

3700

2750

2150

3500

4400

1900

2500

3100

33600

5

3100

3750

3700

2750

2400

3900

4200

1900

2500

3300

34500

6

4200

8000

3700

3000

2500

4300

4800

2400

2500

3100

38500

7

2800

6400

3700

2250

2150

3200

4400

1900

2300

3100

32200

8

2800

6300

3700

2250

2150

3200

4400

1900

2300

3100

31800

9

3100

7750

3700

2750

2400

3900

4200

1900

2500

3300

35500

10

3300

8450

3700

3050

2600

3900

4400

2000

2500

3500

37400

11

2900

6400

3700

2300

2150

3250

4400

1900

2300

3200

32500

12

3500

7600

3400

3200

2200

3500

4500

2200

2300

3000

35400

13

4100

7700

3500

2500

2400

4800

4000

2400

2500

3400

37300

14

2800

6800

3450

2500

2150

3200

4200

2100

2300

3100

32600

15

3600

6750

3700

3000

2400

3900

4200

1900

2750

3500

35700

16

3400

6900

3700

2900

2200

4300

4600

2200

2500

3600

36300

Operator Skills

EXPERIMENTS

The total number of operators included in the study is 30. Each operator is assumed capable of performing 3 operations. These operators are divided into two cells with 12 operators in cell 1 and 18 operators in cell 2. The initial skill matrix is established randomly by following probabilities given in Table 2. The initial skill matrix for all operators is given in Table 7.

Several experiments are conducted using 10 products of type X and 10 products of type Y. Each product requires 6 operations. The experiments performed are listed below:

1146

1. 2. 3. 4.

Periods 1-14, no chaos Periods 1-9, no chaos; Periods 10-11, chaos Periods 1-14, no chaos; Periods 15-16 chaos Periods 1-14, no chaos; Period 15 chaos with new standard deviation

Operator Assignment Decisions in a Highly Dynamic Cellular Environment

Table 7. Initial operator skill matrix

[The table's entries (skill levels of operators 1-30 on operations Op.1-Op.6) were scrambled beyond recovery during extraction and are omitted here.]

Impact on Operator Skill Levels

Experiment 1

This experiment includes runs using the Max and Max-Min strategies from period 1 to period 14. The results are analyzed below.

It was found that Max-Min improves operator skill levels more significantly than Max does. The benefit comes from the way the Max-Min assignment strategy works: it first finds the optimal operator assignment that maximizes output, and then keeps the operators assigned to the bottleneck operation unchanged while re-assigning the low-skilled operators to non-bottleneck operations. This allows an operator to perform operations at which he or she is not yet skilled; those skills therefore do not deteriorate and, in fact, improve. Tables 8 and 9 compare the two strategies at the end of period 14 in terms of average operator skill levels for cells 1 and 2, respectively. In cell 1, the average operator skill levels obtained with Max-Min were greater not only than the initial averages but also than the averages obtained with Max. When Max was used, 5 operators ended with average skill levels greater than their initial ones, 3 operators kept the same average, and 4 operators ended with lower averages.

Table 8. Comparison of two strategies in cell 1 at the end of period 14

Cell 1 operator | Initial average skill level | Max-Min strategy | Max strategy
1 | 4.67 | 5.67 | 4.67
2 | 3.67 | 5.33 | 4.67
3 | 3.67 | 4.67 | 3
4 | 3.67 | 6 | 3.67
5 | 3.33 | 5.33 | 3.67
6 | 5 | 6.33 | 4.67
7 | 3.67 | 5.33 | 3.37
8 | 4 | 5 | 4.33
9 | 4 | 5.67 | 4.33
10 | 3.67 | 5.33 | 3.67
11 | 2.5 | 5.67 | 3.67
12 | 4 | 5.33 | 3.67
(The Max-Min and Max columns give average operator skill levels at the end of period 14.)


Table 9. Comparison of two strategies in cell 2 at the end of period 14

Cell 2 operator | Initial average skill level | Max-Min strategy | Max strategy
13 | 4.33 | 5.33 | 4.33
14 | 5 | 7 | 6
15 | 3.67 | 6.33 | 5.67
16 | 3.67 | 6.33 | 5.33
17 | 4 | 5.33 | 4.67
18 | 5 | 6 | 5.33
19 | 4.67 | 5.67 | 5
20 | 4.67 | 6 | 5
21 | 3 | 5.67 | 5.67
22 | 3.67 | 5.33 | 3.33
23 | 4 | 6 | 5
24 | 4 | 5 | 3.67
25 | 4.33 | 6.33 | 5.33
26 | 3.67 | 4 | 3
27 | 3.67 | 6 | 5.33
28 | 3.33 | 5.33 | 5.33
29 | 4.33 | 5.33 | 4.67
30 | 4.67 | 6 | 6.67
(The Max-Min and Max columns give average operator skill levels at the end of period 14.)

In cell 2, the average operator skill level obtained with Max-Min was again greater than the initial average. Compared with Max, 15 operators achieved higher average skill levels under Max-Min, 2 operators had the same averages, and 1 operator had a lower average. Under the Max strategy, 14 operators ended with average skill levels greater than their initial ones, 1 operator kept the same average, and 3 operators ended below their initial levels.
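The following minimal sketch illustrates one plausible reading of the two strategies compared above. The chapter gives no pseudo-code, so the greedy selection, the bottleneck handling and the tie-breaking here are assumptions, and all names are illustrative; skills maps (operator, operation) pairs to skill levels.

# "Max" greedily gives every operation the most skilled free operator.
# "Max-Min" keeps the Max choice on the bottleneck operation only, then
# deliberately gives the remaining operations to the *least* skilled free
# operators so that weak skills get exercised (and hence improve).
# Both assume at least as many operators as operations.

def max_strategy(skills, operations, operators):
    """skills[(op, o)] -> level; returns {operation: operator}."""
    assignment, free = {}, set(operators)
    for o in operations:
        best = max(free, key=lambda op: skills.get((op, o), 0))
        assignment[o] = best
        free.remove(best)
    return assignment

def max_min_strategy(skills, operations, operators, bottleneck):
    """Keep the strongest operator on the bottleneck, spread the rest."""
    assignment, free = {}, set(operators)
    best = max(free, key=lambda op: skills.get((op, bottleneck), 0))
    assignment[bottleneck] = best
    free.remove(best)
    for o in operations:
        if o == bottleneck:
            continue
        worst = min(free, key=lambda op: skills.get((op, o), 0))
        assignment[o] = worst
        free.remove(worst)
    return assignment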


Impact on Operations

The Max-Min strategy improved operator skill levels on each operation more markedly than Max did. Table 10 shows the average skill levels on each operation. Operation 1 deteriorated when Max was used, whereas average operator skills on all operations were greater when Max-Min was used.

Table 10. Average skill levels on each operation (at the end of period 14)

Operation | Initial average skill level | Max-Min strategy | Max strategy
Op.1 | 4 | 4.39 | 3.44
Op.2 | 3.92 | 5.23 | 4.23
Op.3 | 3.65 | 6 | 4.89
Op.4 | 4 | 6.25 | 5.63
Op.5 | 4.31 | 6.15 | 4.62
Op.6 | 4.25 | 5.92 | 4.62

Impact on Makespan

The results show that the two strategies were tied in terms of makespan for the first 11 periods. Starting in period 12, Max-Min performed better, reducing makespan by 0.2%, 0.2% and 0.1% compared with Max in periods 12, 13 and 14, respectively. Table 11 compares the two strategies in terms of makespan from period 1 to period 14.

Table 11. Makespan from period 1 to period 14

Period | Max | Max-Min | Difference (%)
1 | 58.07 | 58.07 | 0.0%
2 | 59.84 | 59.84 | 0.0%
3 | 61 | 61 | 0.0%
4 | 53.13 | 53.13 | 0.0%
5 | 55.92 | 55.92 | 0.0%
6 | 60.07 | 60.07 | 0.0%
7 | 50.23 | 50.23 | 0.0%
8 | 49.684 | 49.684 | 0.0%
9 | 55.76 | 55.76 | 0.0%
10 | 57.9 | 57.9 | 0.0%
11 | 50.64 | 50.64 | 0.0%
12 | 53.19 | 53.1 | 0.2%
13 | 56.25 | 56.15 | 0.2%
14 | 49.04 | 48.99 | 0.1%

Experiment 2

In this experiment, chaos is applied in periods 10 and 11 to create a large shock in the system and observe how both strategies behave in terms of makespan. This is accomplished by introducing new products with very different processing times. The results show that when 5 products of type Y entered the system and 5 products of type X left it, Max-Min gave better results (a 0.4% reduction in makespan) in period 10. In period 11, the entire set of products of type Y is manufactured (no product X in the system), and Max-Min still performed better (a 0.4% improvement in makespan). Table 12 shows the makespan obtained with the two strategies in periods 10 and 11. Table 13 shows another chaotic scenario, in which all products of type Y enter and all products of type X leave in period 10; in this case, the improvement of Max-Min over Max was 0.7%.

Table 12. Impact on makespan under chaos in periods 10 and 11

 | Max-Min | Max | Difference (%)
Period 10 (5 products) | 71.15 | 71.43 | 0.4%
Period 11 (5+5 products) | 81.68 | 82.02 | 0.4%

Table 13. Impact on makespan under chaos in period 10

 | Max-Min | Max | Difference (%)
Period 10 (10 products) | 94.3 | 94.92 | 0.7%

Experiment 3

Chaos was applied in periods 15 and 16 in the same way as in experiment 2. The results show that when 5 products of type Y entered the system and 5 products of type X left it, Max-Min gave better results (a 1.7% reduction in makespan) in period 15. In period 16, the entire set of products of type Y is manufactured (no product X in the system), and Max-Min again performed better (a 1.2% improvement in makespan). Table 14 shows the makespan obtained with the two strategies in periods 15 and 16. Table 15 shows another chaotic scenario, in which all products of type Y enter and all products of type X leave in period 15; in this case, the improvement of Max-Min over Max was 2.2%.

Table 14. Impact on makespan under chaos in periods 15 and 16

 | Max-Min | Max | Difference (%)
Period 15 (5 products) | 66.48 | 67.6 | 1.7%
Period 16 (5+5 products) | 88.6 | 89.7 | 1.2%

Table 15. Impact on makespan under chaos in period 15

 | Max-Min | Max | Difference (%)
Period 15 (10 products) | 86.81 | 89.01 | 2.2%

Experiment 4

In this experiment, we wanted to show that improving operators' skill levels can have a greater impact than in the previous experiments. We changed the standard deviation of the operator-operation times from 0.05µ to 0.2µ to widen the gap between operators, and then applied these new times in period 15 by entering all products of type Y into the system. The results show a larger gain in minimizing makespan and total time: Max-Min performed 5% better than the Max approach. Table 16 shows these results.

Table 16. Impact on makespan under chaos in period 15 (SD = 0.2µ)

 | Max-Min | Max | Difference (%)
Period 15 (10 products) | 55.23 | 58.02 | 5%

CONCLUSION AND FUTURE WORK

In this chapter, operators were assigned to operations to maximize the production rate using two assignment strategies: Max-Min and Max. The major concern was how these two approaches affect operators' skill levels, as well as their impact on makespan values. The impact is discussed both under a chaotic environment, where sudden changes in product mix with different operation times are applied, and under a non-chaotic environment, where the same product mix is run period after period. In this study, a skill level is affected by learning and forgetting rates: an operator's skill level increases when he performs an operation for many consecutive periods, whereas if an operator does not perform a certain operation for a number of consecutive periods, his skill level decreases. Max-Min improved operators' overall skill levels more significantly than Max over multiple periods. This is because Max-Min does not only assign operators to maximize the production rate; it also re-assigns operators to operations at which they are not very skilled, so that operators' skill levels continue to improve. Max, on the other hand, only assigns operators to operations to maximize the production rate. Moreover, previous work assumed a stable cellular environment in which no new products enter the system. Thus, in this chapter we introduced a highly dynamic cellular environment, in which new products with


different processing times entered the system and some of the existing products left it. Max-Min performs well under a chaotic environment because it raises operators' skill levels enough to absorb the shock, where the shock consists of products with new processing times that require a different manpower allocation. We also concluded that the standard deviation used in the operator time matrix is an important factor in helping the Max-Min approach expand its gain in minimizing makespan and total time. A standard deviation of 5% of the mean was used in the operator matrix for experiments 1 and 2. In period 15, when we replaced the whole set of products of type X with products of type Y, we found the highest gain of Max-Min in minimizing makespan among all periods. This gain was obtained with a standard deviation of 0.05µ; when we used a standard deviation of 0.2µ, the gain was higher still. A possible extension of this work is to treat the manpower level of each cell as a decision variable instead of using fixed manpower levels. A further extension would be to allow operators to shift from one cell to another instead of fixing them to the same cell at all times. This obviously increases flexibility in assigning operators to cells, but it increases computational complexity as well.
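A minimal sketch of the learning/forgetting dynamic just described. The chapter's exact update rule and rates are not given in this excerpt, so the unit step, the level bounds and the forget_after threshold below are placeholder assumptions that only reproduce the qualitative behaviour:

# Skill rises when the operation is performed in a period and decays
# (down to a floor) after enough idle periods; constants are illustrative.
MAX_LEVEL, MIN_LEVEL = 9, 1

def update_skill(level, performed, idle_periods, forget_after=2):
    if performed:
        return min(MAX_LEVEL, level + 1)   # learning
    if idle_periods >= forget_after:
        return max(MIN_LEVEL, level - 1)   # forgetting
    return level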

REFERENCES

Askin, R., & Standridge, C. (1993). Modeling and analysis of manufacturing systems. New York, NY: John Wiley & Sons, Inc.

Bhaskar, K., & Srinivasan, G. (1997). Static and dynamic operator allocation problems in cellular manufacturing systems. International Journal of Production Research, 35(12), 3467–3481. doi:10.1080/002075497194192


Bidanda, B., Ariyawongrat, P., Needy, K. L., Norman, B. A., & Tharmmaphornphilas, W. (2005). Human related issues in manufacturing cell design, implementation, and operation: A review and survey. Computers & Industrial Engineering, 48(3), 507–523. doi:10.1016/j.cie.2003.03.002

Cesani, V. I., & Steudel, H. J. (2005). A study of labor assignment flexibility in cellular manufacturing systems. Computers & Industrial Engineering, 48(3), 571–591. doi:10.1016/j.cie.2003.04.001

Fan, S., & Gassmann, R. (1997). The effects of allocation of functions on the long-term performance of manufacturing cells - A case study. Human Factors and Ergonomics in Manufacturing, 7(2), 125–152. doi:10.1002/(SICI)15206564(199721)7:23.0.CO;2-5

Hyer, N. L. (1984). Management's guide to group technology. In Hyer, N. L. (Ed.), Group technology at work (pp. 26–27). Dearborn, MI: SME.

Mahdavi, I., Aalaei, A., Paydar, M. M., & Solimanpur, M. (2010). Designing a mathematical model for dynamic cellular manufacturing systems considering production planning and worker assignment. Computers & Mathematics with Applications (Oxford, England), 60(4), 1014–1025. doi:10.1016/j.camwa.2010.03.044

Nembhard, D. A. (2001). Heuristic approach for assigning workers to tasks based on individual learning rates. International Journal of Production Research, 39, 1955–1968. doi:10.1080/00207540110036696

Nembhard, D. A., & Osothsilp, N. (2002). Task complexity effects on between-individual learning/forgetting variability. International Journal of Industrial Ergonomics, 29(5), 297–306. doi:10.1016/S0169-8141(01)00070-1

Nembhard, D. A., & Uzumeri, M. V. (2000). Experiential learning and forgetting for manual and cognitive tasks. International Journal of Industrial Ergonomics, 25, 315–326. doi:10.1016/S0169-8141(99)00021-9

Shirase, K., Wakamatsu, H., Tsumaya, A., & Arai, E. (2001). Dynamic co-operative scheduling based on HLA. Retrieved September 10, 2007, from http://www.springerlink.com/content/r116462352k35gt0/fulltext.pdf

Slomp, J., Bokhorst, J. A., & Molleman, E. (2005). Cross-training in a cellular manufacturing environment. Computers & Industrial Engineering, 48(3), 609–624. doi:10.1016/j.cie.2003.03.004

Süer, G. A. (1996). Optimal operator assignment and cell loading in labor-intensive manufacturing cells. Computers & Industrial Engineering, 31(12), 155–158. doi:10.1016/0360-8352(96)00101-5

Süer, G. A., Arikan, F., & Babayigit, C. (2008). Bi-objective cell loading problem with non-zero setup times with fuzzy aspiration levels in labour-intensive manufacturing cells. International Journal of Production Research, 46(2), 371–404. doi:10.1080/00207540601138460

Süer, G. A., Arikan, F., & Babayigit, C. (2009). Effects of different fuzzy operators on fuzzy bi-objective cell loading problem in labor-intensive manufacturing cells. Computers & Industrial Engineering, 56, 476–488. doi:10.1016/j.cie.2008.02.001

Süer, G. A., Cosner, J., & Patten, A. (2009). Models for cell loading and product sequencing in labor-intensive cells. Computers & Industrial Engineering, 56, 97–105. doi:10.1016/j.cie.2008.04.002

Süer, G. A., & Dagli, C. (2005). Intra-cell manpower transfers and cell loading in labor-intensive manufacturing cells. Computers & Industrial Engineering, 48(3), 643–655. doi:10.1016/j.cie.2003.03.006

Süer, G. A., Subramanian, A., & Huang, J. (2009). Heuristic procedures and mathematical models for cell loading and scheduling in a shoe manufacturing company. Computers & Industrial Engineering, 56, 462–475. doi:10.1016/j.cie.2008.10.008



Süer, G. A., & Tummaluri, R. (2008). Multi-period operator assignment considering skills, learning and forgetting in labour-intensive cells. International Journal of Production Research, 2, 1–25.

Suksawat, B., Hilraokai, H., & Ihara, T. (2005). A new approach manufacturing cell scheduling based on skill-based manufacturing integrated to genetic algorithm. Retrieved October 15, 2007, from http://www.springerlink.com/content/q885171535723r5q/fulltext.pdf

Wirojanagud, P., Gel, S., Fowler, J., & Cardy, J. (2005). Modeling inherent worker differences for workforce planning. Retrieved September 10, 2007, from http://ie.fulton.asu.edu/files/shared/workingpapers/Wirojanagud_Gel_Fowler.pdf

This work was previously published in Operations Management Research and Cellular Manufacturing Systems: Innovative Methods and Approaches, edited by Vladimir Modrák and R. Sudhakara Pandian, pp. 258-276, copyright 2012 by Business Science Reference (an imprint of IGI Global).


Chapter 62

Capacity Sharing Issue in an Electronic Co-Opetitive Network: A Simulative Approach

Paolo Renna, University of Basilicata, Italy
Pierluigi Argoneto, University of Basilicata, Italy

ABSTRACT

In recent years, manufacturing companies have entered a new era in which all manufacturing enterprises must compete in a global economy. To stay competitive, companies must use production systems that not only produce their goods with high productivity, but also allow rapid response to market changes and customers' needs. The emerging paradigm of inter-firm relations involving both cooperative and competitive elements, called co-opetition, seems well suited to face this issue. The chapter proposes a multi-agent architecture to support different coordination policies in an electronic co-opetitive network in which plants are willing to exchange productive capacity. An innovative approach based on cooperative game theory is proposed in this research, and its performance is compared with the prevalent negotiation approach. A discrete event simulation environment has been developed in order to evaluate the related performances. The case in which no relation exists among plants has been considered as a benchmark. The obtained results show that the proposed approach outperforms the negotiation mechanism from many points of view.

DOI: 10.4018/978-1-4666-1945-6.ch062

Copyright © 2013, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.


INTRODUCTION

In this era of downsizing and outsourcing, where the business landscape is changing rapidly, the management of the external relations of the firm, and the combining of this with internal operations, represents a new problem and opportunity. Increasingly, firms are recognizing the value of developing and managing relationships, both directly and indirectly, with other members of the value producing system, conscious of the fact that they cannot survive and prosper solely through their own individual efforts: each firm's performance depends in important ways on the activities and performance of others, and on the nature and quality of the direct and indirect relations the firm develops with these counterparts. Inter-firm relations involve a mix of cooperative and competitive elements. Firms simultaneously cooperate to expand the total amount of rewards and resources available to them, and compete over the means to do this and over the division of rewards and resources. They compete to develop cooperative relations with counterparts such as customers and suppliers that are beneficial in order to create competitive advantages in creating value for final customers. Matching competition and cooperation (hereafter C&C) is a great and difficult challenge for firms in this globalization era. The novelty of the topic underlines the importance and the absolute relevance of research efforts aiming at providing a better understanding of this new organizational issue. Although some works are present in the literature, identifying and providing evidence of the reasons for both competitive and cooperative relationships among firms, and analyzing the keys to success and the causes of failure of these strategies, are still open questions in both the conceptual and the empirical literature. On the other hand, determining strategies in these situations, studying their beneficial or detrimental effects on firms, consumers, suppliers and social welfare, designing proper mechanisms and providing incentives remain entirely open issues. When a relationship among firms includes elements


of both cooperation and competition, i.e. these firms can compete and cooperate simultaneously, the relationship is called co-opetition. In order to better understand the reasons for assuming co-opetitive behavior, we should first reflect on the concepts of C&C. Competition occurs when many firms are producing the same or related products and are fighting for consumers, suppliers and/or resources, excluding collaboration with other companies. In pure competition the boundaries between competitors are sharp and distinct. Conversely, in pure cooperation there are frequent exchanges among partners, including business, information and social exchange. Inter-firm cooperation produces strong ties that cross the boundaries between firms in order to share complementary capabilities, assets and interests. The functioning of cooperative relationships may be regulated by formal contracts and/or informal agreements built upon social relations and the development of trust. It should be noted that cooperation and competition between firms are neither spontaneous nor exogenous, but are actions that depend upon the contextual conditions; i.e., the determinants of their level evolve over time, so it is likely that the level of C&C between organizations will undergo variations (Luo, 2005). Some of the various determinants of competition are: a high level of overlapping in the market, a slight differentiation between the products sold by different firms, a lack of entry barriers to a market, and a strong contractual position enjoyed by clients and/or suppliers. Instead, some determinants of cooperation among firms are: an inability to achieve their goals with their own resources, a need to share the risks of a new initiative, and an improvement in efficiency through the sharing of activities to obtain economies of scale. For all these reasons, various hybrid forms of co-opetitive relationships occur between the extreme forms of pure inter-organizational competition and pure inter-organizational cooperation. Following the seminal work of Brandenburger and Nalebuff (1996), a number of articles regarding the private


sector argued that C&C among firms cannot be considered mutually exclusive (Bengtsson and Kock, 1999; Luo, 2004; Oliver, 2004) but, in spite of the authors' suggestion, few of them have used game theory to model situations in which both competitive and cooperative behaviors arise. Game theory makes it possible to move beyond overly simple ideas of C&C to reach a vision of co-opetition more suited to topical needs. This theory is a branch of mathematics that is concerned with the actions of individuals who are conscious that their actions affect each other. It is divided into two branches, called the non-cooperative and cooperative ones, differing in how they formalise interdependence among the players. In the non-cooperative theory, a game is a detailed model of all the moves available to the players. By contrast, the cooperative theory abstracts away from this level of detail, and describes only the outcomes that result when the players come together in different combinations. Though standard, the terms non-cooperative and cooperative game theory are perhaps unfortunate. They might suggest that there is no place for cooperation in the former and no place for conflict or competition in the latter. In fact, neither is the case: especially in the business field, the real value of game theory comes when both approaches are put into practice. It could, therefore, be used to study settings in which co-opetition might be used, determine strategies of co-opetition, and analyze their effects on outcomes in order to evaluate possible advantages and/or disadvantages for the single firm, the other stakeholders and the whole system. For example, consider a situation where a number of firms (players) are connected in some network relationship. The applications could be quite wide and varied, ranging from friendships and social relationships, to communicating information about job openings, to business partnerships, to international trade agreements and political alliances. What these situations have in common is that the way in which players are connected to each other is important in determining the total productivity or generated value. How the total productive value is allocated or transferred among players turns out to be important not only in terms of fairness considerations, but also because it determines players' incentives to form various networks. In particular, the value generated by players depends not only on their identities but also on how they are connected to each other. That is, alternative network structures (e.g., communication lines, alliances, friendships, etc.) connecting the same set of players might lead to very different costs and benefits. Thus, in many situations it will be important to account for network structure and not just coalition structure. Myerson (1977) made a seminal contribution in adapting the cooperative game theory structure to accommodate information about the network connecting players: once the network is fixed, its role is simply to define which coalitions can function. The feasible coalitions are the ones whose members can freely communicate via the given network (so that any two players in the coalition are path connected in the network via players in the coalition). In a sense, each network structure and characteristic function (indicating how much value a given coalition can generate) induces a particular cooperative game. In the last two decades, there have been many relevant examples of co-opetition situations. In the 90s, GM and Ford, the major American carmakers, established an e-procurement platform for procuring basic components. The joint venture between Toyota and PSA Citroen-Peugeot, established in 2002, is another very relevant example of co-opetition in the automobile industry. The two companies agreed to build a common plant in the Czech Republic and to use common components for the production of three new separately-owned city cars. In Italy, in 2002, the two biggest motorcycle companies, Aprilia and Piaggio, made an alliance for joint procurement, though competing in the final market. In the ICT industry, the Symbian joint venture involves the main mobile wireless telephone manufacturers in the world (Nokia, Ericsson, Panasonic, Samsung, Siemens AG) and the leading company in mobile digital computing, Psion, which, however, sold its own shares in 2004.

are allocated or transferred among players turns out to be important not only in terms of fairness considerations, but also because it determines players’ incentives to form various networks. Particularly the generated value by players depends not only on their identities but also on how they are connected to each other. That is, alternative network structures (e.g., communication lines, alliances, friendships, etc.) connecting the same set of players might lead to very different costs and benefits. Thus, in many situations it will be important to account for network structure and not just coalition structure. Myerson (1977) made a seminal contribution in adapting the cooperative game theory structure to accommodate information about the network connecting players: once the network is fixed its role is simply to define which coalitions can function. The feasible coalitions are the ones whose members can freely communicate via the given network (so that any two players in the coalition are path connected in the network via players in the coalition). In a sense, each network structure and characteristic function (indicating how much value a given coalition can generate) induces a particular cooperative game. In the last two decades, there have been many relevant examples of co-opetition situations. In the 90s GM and Ford, the major American carmakers, established an e-procurement platform for procuring basic components. The joint venture between Toyota and PSA Citroen-Peugeot, established in 2002, is another very relevant example of co-opetition in automobile industry. The two companies agreed in building a common plant in Czech Republic and using common components for the production of three new separately-owned city cars. In Italy, in 2002, the two biggest motorcycle companies, Aprilia and Piaggio made an alliance for joint-procurement, though competing in the final market. In ICT industry, the Simian joint-venture is among the main mobile wireless telephones manufacturers in the world, Nokia, Ericsson, Panasonic, Samsung, Siemens AG, and the leading company in the mobile digital com-

1155

Capacity Sharing Issue in an Electronic Co-Opetitive Network

puting, Psion, which, however, has sold its own shares in 2004. The phenomenon of co-opetition in R&D activities and co-promotion is also very common in pharmaceutical and biotechnology industry. Other straightforward examples of coopetition can be found in several other industries, such as furniture, tourism, healthcare and retailing. Moreover, all around the world industrial districts and both local or national consortia and co-ops can be considered examples of co-opetition of medium and small businesses in several industries, such textile and clothing, agri-food, hardware and retailing. Also, in the automotive industry, web-based technology opens up opportunity of cooperation among small and medium firms. For instance, Supply On is a successful provider of supply and engineering services founded by suppliers competing in the same market (Meder, 2005). Finally, an interesting empirical research from Quintana-Garcia and Benavides-Velasco (2004) provides evidence of co-opetition in new product development among European small and medium firms in biotechnology industry. Through cooperation in one or more activities, small firms can get same advantages as big companies, however, because of proximity, they inevitably keep competing each others in other activities. The contribution of our chapter is threefold: •





• to apply concepts deriving from co-opetition in an electronic network of geographically distributed plants willing to exchange their productive capacity;
• to evaluate different coordination policies, one based on negotiation and one based on cooperative game theory;
• to simulate the proposed approach by a Multi Agent System able to support the network.

The rest of the chapter is structured as follows: section 2 presents the motivations for developing a network; section 3 provides a general description of the network typology, while the co-opetitive


approach is introduced in section 4. An overview of the literature is presented in section 5. Section 6 describes the problem context, and the Multi Agent Architecture is illustrated in section 7. Section 8 describes the proposed coordination mechanisms. The developed simulation environment and the simulation results are presented in sections 9 and 10, respectively. Finally, conclusions and further research paths are drawn in section 11.

WHY NETWORKS ARE NECESSARY

Survival, performance and development of the entrepreneurial firm are at the heart of entrepreneurship research (Shane and Venkataraman, 2000). Entrepreneurial firms are characterized by a lack of internal resources and other startup handicaps, as expressed in the theoretical constructs of liability of newness and liability of smallness. The strategic use of external resources through inter-firm networks in many different industries (Jarillo, 1989), which are often embedded in regional clusters (Boari and Lipparini, 2000; Lechner and Dowling, 2000), is regarded as one effective means to overcome these liabilities. In this context, inter-firm networks are considered an important model of organization development to enable an entrepreneurial firm to grow and survive (Freel, 2000). The image of atomistic actors competing for profits against each other in an impersonal marketplace is increasingly inadequate (Granovetter, 1985; Gulati, 1998; Galaskiewicz and Zaheer, 1999). Such networks encompass a firm's set of relationships, both horizontal and vertical, with other organizations (be they suppliers, customers, competitors, or other entities), including relationships across industries and countries. These strategic networks are composed of inter-organizational ties that are enduring, and include strategic alliances, joint ventures, long-term buyer-supplier partnerships and a host of similar ties. Gulati et al. (2000) highlight the importance of considering the role


of strategic networks resulting from inter-firm ties for fundamental issues in strategy research. Anand and Khanna (2000) provide a unique empirical assessment of the presence of experience benefits to firms from entering frequent alliances. Using a comprehensive data set on abnormal returns to alliance announcements, they separate out alliances by contract type and find significant differences in the extent of learning, as measured by abnormal returns, across contract types. They look further within each type and also observe differences in learning effects within joint ventures depending upon the scope of activities within the alliance. They conclude that learning effects are more significant when there is contractual ambiguity. Doz et al. (2000) examine the creation of R&D consortia to uncover the processes underlying the formation of cooperative inter-firm networks. They identify an emergent process by which environmental changes and shared views among participants facilitate network formation, and also an engineered process in which a triggering entity actually recruits potential members. Together, the identification and elucidation of these two processes provide us with insights into the formation of networks. Baum et al. (2000) utilize a unique data set on biotech start-ups to examine the role of start-ups' network composition in their innovative performance. Their findings suggest that the structure of their network of ties and the identity of their partners can have a significant influence on their performance. They use these empirical findings to reflect on some implications that may follow for managers of startups. Ahuja (2000) draws on the resource-based view of the firm to study alliance formation. He flags an important issue when he argues that the proclivity of firms to form strategic alliances depends both on alliance opportunities and on the firms' own inducements and resource endowments. Alliance formation thus results from a match between the value-creating resources of the focal firm and those available from other firms in its environment. He identifies three forms of capital resources

(technological, commercial and social) and shows that they each contribute to alliance formation. A combination of technical and commercial capital in a firm, however, reduces the likelihood of allying, since such firms are already well endowed and have less need to ally. Rowley, Behrens and Krackhardt (2000) empirically assess the interplay between relational and structural embeddedness for firm performance. While the former highlights the role of the relationships firms are in, they focus on the overall relational structure surrounding the firm. Using data on inter-firm strategic alliances from the semiconductor and steel industries, they find that the role of both forms of embeddedness in firm performance is influenced by the industry context. Afuah (2000) explains firm-level competitive advantage by explicitly invoking the capabilities available to a firm from its network of supplier and customer firms (co-opetitors). He shows how a firm's performance is lowered in the face of technological change when the firm's network members' capabilities are made obsolete - a contrast from research which has focused on the capabilities of the firm itself in explaining performance. Kogut (2000) theoretically highlights two ways in which a firm's performance can be influenced by the network in which it is embedded. The first is the value that a focal firm's network of ties has for the range and quality of information it receives. This potential information advantage has been widely recognized. What has received less attention, Kogut argues, are the potential benefits that arise from being part of a dynamic and evolving network that coordinates activities among specialized producers through generative rules that shape the coordination among the members of a network over time.

NETWORK TYPOLOGIES

Generally, research on inter-firm networks has focused on the role of the entrepreneur in network



building, on the initial size of an entrepreneurial firm's network in regard to firm performance, or on structural characteristics of networks. Different types of relations define different types of networks even if the same units are connected, which means that firms can be involved in different types of networks in different development phases. All these value-added networks go beyond economic relationships and include:

• social networks: relationships with other firms based on strong personal relationships with individuals such as friends, relatives, or long-standing colleagues who became friends before foundation;
• reputational networks: made up of partner firms that are market leaders, or highly regarded firms or individuals, where one of the main objectives in entering the relationship is to increase the entrepreneurial firm's credibility;
• marketing information networks: relationships that allow for the flow of market information through distinct relationships with other individuals/firms;
• co-opetitive networks: relationships with direct competitors;
• cooperative technology networks: technology alliances involving joint technology development or innovation projects.

Moreover, network research distinguishes three components: network content, network structure and network governance in order to explain the role of networks in firm performance. According to the theory of structural embeddedness, network structure and a firm’s network position are considered to be both opportunities and constraints. Favorable positions are regarded as network resources; over-embeddedness, however, can lead to inability to act. A rich literature suggests that networks are a particular governance form in which the development of trust plays a major


role in influencing resource exchange and costs compared to market coordination or integration of activities. In this sense, inter-firm networks constitute a third way of organizing business, which operates neither by markets nor by hierarchies. Recently, it has been shown that specific kinds of relations (network content) are more important in different economic contexts. Research on firm networks as a mode of transaction governance, on the role of strong and weak ties, and on the analysis of structural properties of networks has produced important insights. First of all, many studies have shown that the entrepreneurs' personal and social networks are probably the most important strategic resources for the firm's start-up (Ardichvili et al., 2003). Organizational networks or inter-firm networks are relations between organizations that can have various functions, also called sub-networks according to their relational content. The merging of personal and organizational networks seems to be a common feature of young firms. Founders and the firm are inseparable at startup. As the firm grows, the founders' personal networks and the firm networks merge (Cooper, 2002). These merged networks can be considered an organizational form. Initially, it seems that networking for entrepreneurial firms is based on pre-existing relationships, which become more complex over time by having different functions and being more socially embedded: social relations are transformed into socio-economic and, finally, into more complex relations. Previous research has shown that the overall network structure changes from an unplanned to a planned and, finally, a structured network. Once structured, however, it seems that both over-embeddedness and a firm's limited relational capability, i.e. the capability to establish, maintain, and develop relationships (Lechner and Dowling, 2003), pose a potential barrier to growth. However, this research does not explain which network types are most important to manage, or how firms should overcome these growth barriers. Research on network types and performance has focused on pre-start-up activities


in order to explain nascent entrepreneurship (i.e., personal networks have been analyzed at or prior to the creation of the company in terms of size and networking activity). Context-specific research has investigated the positive or negative role of social networks at start-up, but which types of network matter most for firm performance has not been studied extensively. Some research was conducted on the role of single network types, such as reputational and cooperative technology networks, but little is known, for example, about the role of co-opetition networks. Lechner and Dowling (2003) proposed a network development model based on varying network types. Based on case study research in a German Information Technology cluster, they identified that firms use relationships for a variety of purposes and that every firm has an individual relational mix. They argued that the relational mix (i.e., the different types of networks) changes over time in order to enable firm growth. Finally, they proposed a four-phase development model of entrepreneurial firms. In phase 1, firms seek to overcome the liability of newness by basing the development of the network mainly on social networks (understood as strong ties such as family and friends) and reputational networks (relationships with prominent firms that can lend the young firm reputation). While the relative importance of social and reputational networks decreases with the firm's development, electronic co-opetitive networks (i.e., cooperation with competitors) increase over time. In phase 2, firms use marketing information and electronic co-opetitive networks to overcome the usual period of unstable sales growth; in phase 3, co-opetition remains a relevant issue, but cooperative technology networks are most important. Finally, in phase 4, firm growth is limited by path-dependent relational capability that eventually reaches its limits and leads to the reconfiguration of a more stable network, by introducing hierarchic levels within the network or by integrating activities previously performed outside the firm. This qualitative study seems to be one of the few where different network types were used to understand the role of networks at and beyond foundation, but their impact on the performance of the entrepreneurial firm has not been tested empirically with larger samples.

network types were used to understand the role of networks at and beyond foundation, but their impact on the performance of the entrepreneurial firm have not been tested empirically with larger samples.

WHY WE ARE INTERESTED IN THE ELECTRONIC CO-OPETITIVE NETWORK

Electronic co-opetitive networks involve relationships with direct competitors. The management literature generally considers industries to be collections of firms bound together by rivalry, therefore questioning the value of relationships with competitors (Dollinger, 1985). Firms can use competitors as subcontractors at times when the firm has temporarily reached full capacity. This cooperative behavior, especially with regional competitors, will increase the likelihood of the favor being returned. Overall, relationships with competitors can give access to temporarily needed resources or lead to the temporary pooling of resources, which should positively influence firm performance, especially in the years after foundation, when sales tend to grow discontinuously (Lechner and Dowling, 2003). While it has been argued that electronic co-opetitive networks at foundation might be harmful, because such relationships could lead to the disclosure of competitive information, a lack of co-opetition can also constrain firm development in the years following foundation. Entrepreneurial firms that view competitors not only as pure rivals but also as a potential resource should therefore be more successful. Building on the theoretical framework of co-opetition, according to which both C&C are needed in inter-organizational relationships to allow firms to obtain reciprocal advantages (Bengtsson and Kock, 2000), it could be useful to understand whether (and how) co-opetition can be applied to the capacity sharing case. There are many areas of management in which co-opetition



may be used. However, it is far from clear in which circumstances a strategy of co-opetition should be used and in which it should not, what the consequences of cooperating and competing at the same time are for all the stakeholders, which variables and parameters involved in co-opetition decisions drive the main results, and how managers should design and handle them to make co-opetition work out well. The literature is still far from providing a complete analysis of the suitability of co-opetition strategies in given business settings. Research on the topic has just begun. Therefore, a substantial research effort is required to make the picture clearer and to provide useful frameworks and tools for firms to decide, design and manage co-opetition. Our goal is to provide a first, and of course not exhaustive, analysis of co-opetition in given production areas. We analyze simultaneous cooperating and competing strategies in the capacity sharing case: co-opetition in production, to the best of our knowledge, has not been treated at all. The lack of analytical works in this field (see the following section 5) opens up a high number of directions and questions to investigate. Many authors, in fact, have addressed the problem of capacity sharing among multi-unit, geographically distributed factories, but none using a co-opetitive approach. Tonshoff et al. (2000), for example, described a conceptual framework in a decentralized production environment concerning several sales units and production sites geographically distributed. The capacity allocation plan in this case is made by a mediator in a centralized approach. Ip et al. (2000) proposed an approach concerning planning and scheduling problems in a multi-product manufacturing environment using a Genetic Algorithm. This approach is also centralized and not applicable in an environment with independent factories. Christie and Wu (2002) proposed an approach to manage capacity planning in a multi-fab environment. Each fab is modelled as a single resource with a variable production level. Several discrete scenarios are considered in a multi-period, multi-stage, stochastic programming model. The goal is to minimize the expected mismatch between planned and actual capacity allocation as defined in the scenarios. Chen et al. (2007) proposed a model that enables collaborative integration for resource and demand sharing. A negotiation algorithm is utilized to transfer capacity from a factory that sells to a factory that requires extra capacity. Each factory applies an economic resource planning model based on a Genetic Algorithm to improve its local objectives. The model is tested only between two factories: one seller and one buyer. Renna and Argoneto (2008) proposed a distributed approach, for a network of independent enterprises, to facilitate the resource sharing process. The distributed architecture is based on the Multi Agent Architecture paradigm, and the coordination mechanism was performed by a negotiation protocol. In that work, just two performance indexes were considered: the total profit of the network and the total unsatisfied capacity. In any case, co-opetition concepts from procurement, marketing and R&D can also be adapted to this area and, therefore, open many spaces for research. To use a sentence from Brandenburger and Nalebuff (1996), the goal of a firm is to do well for its self-interest, but, especially in a globalized context, to pursue an objective such as new product development, procurement cost reduction or new market entrance, an ally might be necessary. Sometimes, factors such as knowledge, expertise, technologies, processes and products, and the market structure, features and dynamics induce a firm to find its best ally in a competitor, paving the way for a win-win result. On the other hand, many other times, a firm can succeed only if able to win out over the competition. Therefore, a deep analysis of the potential benefits and disadvantages of a co-opetition strategy is necessary for managers to make proper decisions and to design and manage such a new form of business interaction. The doctrine on co-opetition has overcome the positions according to which competition has only negative externalities, while collaboration has exclusively positive effects in


stochastic programming model. The goal is to minimize the expected mismatch between planned and actual capacity allocation as defined in the scenarios. Chen et al. (2007) proposed model enables a collaborative integration for resource and demand sharing. A negotiation algorithm is utilized to sharing capacity from factory that selling to factory that requires extra capacity. Each factory applies an economic resource planning model based on Genetic Algorithm to improve its local objectives. The model is tested only between two factories: one seller and one buyer. Renna and Argoneto (2008) proposed a distributed approach, for a network of independent enterprises, to facilitate the resources sharing process. The distributed architecture is based on Multi Agent Architecture paradigm and the coordination mechanism was performed by a negotiation protocol. In this work just two performance indexes has been considered: the total profit of the network and the total unsatisfied capacity. Anyway, co-opetition concepts from procurement, marketing and R&D can be also adapted to this area and, therefore, open many spaces for research. To use a sentence from Brandenburger and Nalebuff (1996), in any case, the goal of a firm is to do well for its self-interest, but, especially in a globalized context, to pursue an objective, such as new product development, procurement cost reduction as well as new market entrance, an ally might be necessary. Sometimes, factors such as the knowledge, expertise, technologies, processes and products, the market structure, features and dynamics induce a firm to find the best ally in a competitor getting the way for a win-win result. On the other hand, many other times, a firm can succeed only if able to win out the competition. Therefore a deep analysis of potential benefits and disadvantages of co-opetition strategy is necessary for managers to make proper decisions, design and manage such a new form of business interaction. The doctrine on co-opetition has overcome the positions according to which competition has only negative externalities, while collaboration has exclusively positive effects in


an inter-organizational network. In our work, specifically, co-opetition takes the form of joint resource procurement among competing firms. That is not the only form of co-opetition in procurement, but it is one of the most common in practice and allows us to easily model the co-opetitive features. In our analysis, we investigate the advantages and disadvantages of cooperation in the presence of competition, by analyzing the effects on the network, the stakeholders and, obviously, the firms. Our results provide useful insights and guidelines for managers involved in this kind of issue in the presence of competition. Because of the novelty of the proposed approach, this work provides basic models to be used as a starting point for further developments in the area of production network co-opetition, as well as for applications to other business areas.

CO-OPETITION, A LITERATURE OVERVIEW

In spite of the success of the book by Brandenburger and Nalebuff (1996), and although in recent years the word co-opetition has been one of the most used and abused in business environments, only a few works on the topic are present in the scientific literature. As stated in Dagnino and Padula (2002), scientific investigation of the issue of co-opetition has not gone much farther than naming, claiming or evoking it. Some relevant conceptual and/or empirical works in the strategic management literature indirectly face the issue of cooperation among competing firms by analyzing strategic alliance formation and firms' networks. One of the main questions addressed by researchers has concerned the reasons behind strategic alliance formation. We have already summarized the most relevant ones. In addition, the Resource Based View perspective contributes to the concept of co-opetition by pointing out that, since competitive advantage comes from resources and capabilities, firms eventually join competitors

and interact in networks in order to have access to external critical resources, capabilities and organizational skills (Gulati et al., 2000). The other main question has concerned the keys to an alliance being successful, and the investigation of failure causes. As pointed out by Bengtsson and Kock (2000), in such a stream of literature primarily the cooperative dimension of the relationship is emphasized. In contrast, they argue that both cooperation and competition are simultaneously needed in horizontal relationships, since the different relationships may provide the firm with different advantages. In one of the first papers explicitly facing the co-opetition issue, Bengtsson and Kock (2000) use an explorative case study of three industries to show that it is of crucial importance to separate the cooperative part of the relationship from the competitive part. They also show how to separate and manage these two conflicting logics of interaction. More recent contributions organize different kinds of co-opetition into a framework (Garraffo, 2002), define a typology of co-opetition and show how such a strategy may contribute to value creation, or construct a framework to measure the intensity and diversity of co-opetition in order to provide some guidelines for building up co-opetitive relationships (Luo, 2007). The papers presented above, even those which explicitly study co-opetition issues, focus on analyzing inter-firm relationships within co-opetition frameworks. None follows the approach suggested by Brandenburger and Nalebuff (1996). They argue for the effectiveness of game theory as a tool to analyze co-opetition but, in spite of their suggestion, only restricted research areas have used game theory to model situations in which both competitive and cooperative behaviors arise. One of these is, surely, the stream of industrial organization which focuses on R&D cooperation in oligopoly: D'Aspremont and Jacquemin (1988) consider the possibility of cooperation in certain activities among competing firms; specifically, cooperation concerns R&D



activities. They compare three cases in the presence of spillover effects: total competition; cooperation in R&D with competition in the final market; and total cooperation (collusion), showing that co-opetition and full cooperation do not always have a negative impact on social welfare. In the same vein, cooperation in R&D among oligopolists and the effects of such cooperation are analyzed in more recent papers (Suetens, 2005). While models of co-opetition are consolidated in the stream of industrial organization that focuses on R&D issues, game-theoretic approaches analyzing co-opetition strategies in other research areas are scattered, and their use is far from being considered structured and consolidated. In the marketing literature and in agricultural economics, there are some examples of game-theoretic approaches, for instance in analyzing the effects and results of joint advertising or co-promotion in the presence of competition among partners (Krishnamurthy, 2000; Bass et al., 2005; Isariyawongse et al., 2007). On the other hand, no work in operations and supply chain management seems to focus on co-opetition issues, except some papers (Gurnani et al., 2007) considering simultaneously common and conflicting interests among suppliers and buyers. However, even though in line with the broad definition of co-opetition proposed by Brandenburger and Nalebuff (1996), in these papers co-opetition does not occur among real competitors; therefore, the challenge of analyzing cooperation and competition simultaneously is still missing in operations and supply chain management, as well as in other research areas. This research advances the previous research in the following respects:




• the co-opetition paradigm is efficiently applied to a capacity sharing issue in a network of independent plants geographically dispersed;
• a Multi Agent Architecture is properly developed in order to support the network infrastructure;




• game theory is utilised as a policy mechanism to coordinate the network;
• a negotiation approach is developed as a benchmark against which the game-theoretical approach is compared;
• a proper discrete event simulation environment, based on an open source tool, is developed in order to test the proposed approaches in a dynamic environment.

The conducted experiments highlight the added value of the electronic co-opetitive network.

NETWORK GOVERNANCE PROPOSED

Relationships are the focus of substantial investments in time, money and effort, and are the means by which knowledge, as well as other strategically important resources, is both accessed and created. Furthermore, relations are connected to other relations, resulting in systems of interdependent relations, henceforth referred to as business networks. Coordination between as well as within relationships becomes a central managerial concern, and the means by which the joint productivity of the value system comprising a network of connected business relationships is improved. However, when firms are involved in co-opetition, a ''coordinator system'' (or ''intermediate actor'') is needed in order to plan, coordinate, monitor and appraise inter-organizational relationships. The ''coordinator system'' should use both formal mechanisms (such as an incentive system) and informal mechanisms (for instance, promoting inter-firm social events for communication and interaction) to adequately balance cooperation and competition. Therefore, the doctrine on competitive relationships has argued that C&C may be managed either through a strategy of separation between competitive and cooperative individuals/subunits, or by establishing a ''coordinator system'' that combines these


stimuli in order to maximize system gains from inter-firm co-opetition. Moreover, the emerging new organizational form, allowing for more coordination among quasi-independent actors and, at the same time, more flexibility and autonomy in planning, production and distribution, needs to be properly managed by applying the right technologies. In the eyes of the authors, the most appropriate tool to model this kind of situation is the Multi-Agent System (MAS) approach. The problem domain being particularly complex, large and unpredictable, the only way it can reasonably be addressed is to develop a number of functionally specific and (nearly) modular components (agents) representing firms. This decomposition allows each agent to use the most appropriate paradigm for solving its particular problem. When interdependent problems arise, the agents in the network must coordinate with each other to ensure that interdependencies are properly managed. A MAS can be defined as a loosely coupled network of problem solvers that interact to solve problems that are beyond the individual capabilities or knowledge of each problem solver (Durfee and Lesser, 1989). The characteristics of MAS, and the reasons why we use it, are that:

• each agent has incomplete information or capabilities for solving the problem and, thus, has a limited viewpoint;
• there is no global system control;
• data are decentralized; and
• computation is asynchronous.

Sophisticated individual agent reasoning can increase MAS coherence because each individual agent can reason about non-local effects of local actions, form expectations of the behavior of others, or explain and possibly repair conflicts and harmful interactions. Indeed, as previously discussed, a distinguishing feature of a MAS is that the decision making of the agents can be distributed. This means that there is no central controlling agent that decides what each agent

must do at each time step; rather, each agent is to a certain extent responsible for its own decisions. The main advantages of such a decentralized approach over a centralized one are efficiency, due to the asynchronous computation, and robustness, in the sense that the functionality of the whole system does not rely on a single agent. In order for the agents to be able to take their actions in a distributed fashion, appropriate coordination mechanisms must additionally be developed. A typical situation where coordination is needed is among co-opetitive agents that form a network and, through this network, make joint plans, compete for resources and try to pursue common goals. Planning for a single agent is a process of constructing a sequence of actions considering only goals, capabilities and environmental constraints. However, planning in a MAS environment also considers the constraints that the other agents' activities place on an agent's choice of actions, and the constraints that an agent's commitments to others place on its own choice of actions. The development of protocols that are stable (non-manipulable) and individually rational for the agents is the subject of mechanism design. The approaches utilized for this last issue in this chapter are negotiation and cooperative game theory. Negotiation is seen as a method for coordination and conflict resolution. It has also been used as a metaphor for the communication of plan changes, task allocation, or centralized resolution of constraint violations. The main characteristic of negotiation is the presence of some sort of conflict that must be resolved in a decentralized manner by self-interested agents under conditions of bounded rationality and incomplete information. When one builds negotiation and enforcement procedures explicitly into the model, the results depend very strongly on the precise form of the procedures, on the order of making offers and counter-offers, and so on. But problems of negotiation are usually more amorphous; it is difficult to pin down just what the procedures are. More fundamentally, there is a feeling that procedures are not really all



that relevant; what really matters is the possibility of coalition forming. Cooperative game theory, instead, abstracts away from procedures altogether and concentrates on the possibilities for agreement. It deals with situations where a group of players cooperate by coordinating their actions to obtain a joint profit. It is usually assumed that binding agreements between the players are the means of cooperation. In the following, we use both of these coordination mechanisms in order to appreciate their weaknesses and strengths in network co-opetitive circumstances.
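The cooperative branch used here, and in particular the Myerson (1977) restriction discussed in the introduction, can be made concrete with a small sketch: a coalition is feasible only if its members are path-connected through the communication network. This is an illustrative implementation under that reading, not the chapter's code; the player ids and the example network are invented.

# Enumerate the coalitions that can "function" given a network: a
# coalition is feasible when the subgraph induced by its members is
# connected (any two members can communicate via members only).
from itertools import combinations

def is_feasible(coalition, edges):
    """True if the coalition is connected in the graph it induces."""
    coalition = set(coalition)
    if len(coalition) <= 1:
        return True
    links = {(a, b) for a, b in edges if a in coalition and b in coalition}
    start = next(iter(coalition))
    reached, frontier = {start}, [start]
    while frontier:
        node = frontier.pop()
        for a, b in links:
            for nxt in ((b,) if a == node else (a,) if b == node else ()):
                if nxt not in reached:
                    reached.add(nxt)
                    frontier.append(nxt)
    return reached == coalition

players = [1, 2, 3, 4]
network = {(1, 2), (2, 3)}          # player 4 is isolated
feasible = [c for r in range(1, 5) for c in combinations(players, r)
            if is_feasible(c, network)]
# e.g. (1, 2, 3) is feasible; (1, 3) is not, because player 2 is needed
# to connect them; no coalition containing player 4 (except {4}) is feasible.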

THE AGENT-BASED NETWORK MODEL

Sophisticated individual agent reasoning can increase MAS network coherence because each individual agent can reason about non-local effects of local actions, form expectations of the behaviour of others, or explain and possibly repair conflicts and harmful interactions. Numerous works in Artificial Intelligence (hereafter AI) research try to formalize a logical axiomatization for rational agents. This axiomatization is accomplished by formalizing a model of the agent's behaviour in terms of beliefs, desires, goals, and so on. An agent is considered rational if it always selects an action that optimizes an appropriate performance measure, given what the agent knows so far. The performance measure is typically defined by the user (the designer of the agent) and reflects what the user expects from the agent in a specific task. For the purposes of this chapter, we consider a discrete set of time steps t = 1,…,n, in each of which the agent must choose an action $a_t$ from a finite set of available actions A. The mapping from the complete history of actions up to time t to an optimal action is called the policy of the agent. Obviously, when the agent takes an action, the "external world" changes as a result. A transition model specifies how the world changes when an action is executed: if the current world state is $s_t$ and the agent takes action $a_t$, the transition model maps the state-action pair $(s_t, a_t)$ to a single new state $s_{t+1}$.

In classical AI, a goal for a particular task is a desired state of the world. More generally, an agent may hold preferences between any world states. A way to formalize the notion of state preferences is to assign to each state s a real number U(s), called the utility of state s for that particular agent. Formally, for two states s and s', U(s) > U(s') holds if and only if the agent prefers state s to state s', and U(s) = U(s') if and only if the agent is indifferent between s and s'. Intuitively, the utility of a state expresses the desirability of that state for the particular agent: the larger the utility, the better the state is for that agent. Equipped with utilities, the question now is how an agent can efficiently use them for its decision making. Namely, what we need is a mechanism able to yield, for each involved agent, the best world state in terms of its utility function.

Two coordination mechanisms are utilised in this chapter: the first is negotiational, and the second is given by cooperative game theory. They differ not only in the theoretical approach, but also in the information exchanged. The common variables considered for the network formalization are the market price of the kth product, the associated production cost (related to the price by a mark-up strategy), the productive capacity of each plant, and the quantity of each product required by the market. Specifically:

• $price_p^k$ is the market price of the kth product, for the generic pth plant;
• $cost_p^k$ is the production cost of the kth product, for the generic pth plant. It is a function of productive and managerial costs, it also takes into account the efficiency of the plant and its relative geographical dispersion, and it is obtained with a mark-up strategy;
• $C_p^k$ is the productive capacity for the kth product, for the generic pth plant;
• $R_p^k$ is the quantity of the kth product required by the market, for the generic pth plant.

For both approaches, agents are identified and classified as overloaded, OG = {1,…,i,…,N}, and underloaded, UG = {1,…,j,…,M}. Afterwards, each of them respectively computes the capacity it needs to produce a given product k, $RC_i^k$, or the capacity it can offer to other plants of the network, $OC_j^k$. The only variable all agents take into account is the price to pay to obtain (or to hand over) their capacity.
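Because the chapter's simulation environment (described later) is implemented in Java, the same language is used for the illustrative sketches in this chapter. As a first sketch, the snippet below shows one way the rational agent just described could select its action at each time step: apply the transition model to every available action and pick the action whose successor state has the highest utility U(s). All type and method names here are assumptions made for illustration, not the chapter's actual implementation.

```java
import java.util.List;

// Illustrative types: a world state, an action, and a deterministic
// transition model mapping a (state, action) pair to a new state.
interface State {}
interface Action {}
interface TransitionModel { State next(State s, Action a); }

// Utility function U(s): larger values mean more desirable states.
interface Utility { double of(State s); }

class RationalAgent {
    private final TransitionModel model;
    private final Utility u;

    RationalAgent(TransitionModel model, Utility u) {
        this.model = model;
        this.u = u;
    }

    // At time step t, choose the action a_t in A whose successor state
    // s_{t+1} = T(s_t, a_t) maximizes the agent's utility U(s_{t+1}).
    Action choose(State current, List<Action> available) {
        Action best = null;
        double bestUtility = Double.NEGATIVE_INFINITY;
        for (Action a : available) {
            double value = u.of(model.next(current, a));
            if (value > bestUtility) {
                bestUtility = value;
                best = a;
            }
        }
        return best; // null only if no actions are available
    }
}
```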

COORDINATION MECHANISM

Negotiation is one of the most common approaches used as a coordination mechanism when the actors involved have conflicting goals; in particular, when the system is distributed, negotiation is the prevalent approach. For all these reasons, it is an appropriate benchmark against which to evaluate a different coordination mechanism, such as the game theoretic one proposed in this chapter. The motivation for seeking a new approach lies in the following drawbacks that affect the performance of the negotiation process:

• Function Definition: the agents' strategies and the generative function typology for each role (creative or reactive offer) have to be defined, and the output strongly depends on this choice;
• Maximum Number of Rounds: the performance strongly depends on the number of rounds;
• Information Exchange: for example, one agent can simply refuse a proposal, or it could indicate the terms to improve, and so on;
• Ending Criteria: the negotiation ending criteria have to be defined.

Instead, the proposed game theory approach can be implemented in a single step and no strategies have to be designed (a cooperative approach is considered). Therefore, its main advantages are the following:

• the reduction of the time needed to reach an agreement;
• the reduction of the information exchange;
• the "intelligence" of the agents can be limited.

Negotiation

The agent architecture adopted here is the following: each plant belonging to UG is represented by a Capacity Offering Agent (COA), which is in charge of negotiating its capacity with the Requiring Capacity Agents (RCAs) representing the plants that require capacity, belonging to the OG set. Finally, there is a Mediator Agent (MA), which is in charge of allowing communication and coordination among COAs and RCAs. The negotiation process is characterized by the following constraints:

• the negotiation is multi-lateral and involves one-to-many agents;
• the negotiation is an iterative process with a maximum number of rounds, $r_{max}$, after which either an agreement is reached or the negotiation fails;
• during each round r the COA can submit a new counter-proposal (N) to the RCA while, at $r = r_{max}$, it can only accept (A), reject (R) or ask for a last counter-proposal; the RCA answer at a generic round r can thus be referred to as $(A \lor R \lor N)_r$;
• the agreement is reached only if the RCA accepts the COA counter-proposal at a round $r < r_{max}$; in this case the agents sign an electronic contract;
• if there are multiple agreements, the first COA that satisfies the RCA signs the agreement;
• the agents' behavior is assumed to be rational according to their utility functions;
• the RCA does not know the COAs' utility functions and vice versa.

The UML activity diagram of Figure 1 shows the agents' interaction workflow. As the reader can notice, three swim lanes, corresponding to the agents described above, appear in the diagram. Specifically, for the COA the following activities can be highlighted:

• Wait: the agent is in its initial state of waiting for a proposal (from the RCA);

Figure 1. UML Activity Diagram




• Evaluates proposal: the COA evaluates the proposal of the RCA in terms of required capacity and offered price. At the first round the COA communicates the amount of capacity it is willing to offer (the minimum between the amount requested by the RCA and its own unused capacity). Subsequently, the COA communicates to the RCA whether it accepts or refuses the proposed price for exchanging the promised amount of capacity. The COA evaluates the proposal of the RCA by the threshold function given by (1):

$$val_j^{k,r_1} = \left[ price_j^k - (price_j^k - cost_j^k) \cdot \frac{r_1 - 1}{r_{max} - 1} \right] \cdot M_{ij}^k \quad (1)$$

being

$$M_{ij}^k = \min(RC_i^k, OC_j^k) \quad (2)$$

Expression (1), computed by the COA, is a threshold level. Starting from the market price, as the negotiation proceeds the round counter $r_1$ increments and the threshold level decreases down to the production cost, at which point the generated profit is null. At each round, the following condition is checked:

$$val_i^{k,r_1} \ge val_j^{k,r_1} \quad (3)$$

If (3) is verified, the jth plant supplies the requested capacity to the ith plant: the two reach an agreement and each one updates its available capacity.

• Updates threshold level: if the COA refuses the price submitted by the RCA, it updates the threshold level for the next round of negotiation (increasing the value of $r_1$ in expression (1)); if the algorithm has reached the last round, the COA simply quits the negotiation.
• Updates capacity: if the negotiation reaches an agreement, the COA updates the capacity it owns. In case no more capacity resources are available it quits; otherwise it returns to its Wait state.

The RCA performs the following activities:

• Proposal elaboration: the RCA elaborates a proposal in terms of price and amount of capacity to acquire, and transmits this information to the MA. The submitted price is obtained by the following expression:

$$val_i^{k,r_1} = \left[ price_i^k - (price_i^k - cost_i^k) \cdot \frac{r_{max} - r_1}{r_{max} - 1} \right] \cdot M_{ij}^k \quad (4)$$

Expression (4), computed by the RCA, starts with a price equal to the production cost: the generated profit is then the same as that obtained when the products are produced in its own plant. During the negotiation the price is increased up to the market price, at which point the generated profit is null.

• Wait: the RCA waits for the counter-proposals of the COAs.
• Counter-proposal computation: if the COA refuses the proposal and the negotiation is still running, the RCA computes a new counter-proposal (increasing the value of $r_1$ in expression (4)). Otherwise (i.e., at the last round of negotiation), the process ends with no agreement.
• Updates capacity: if the negotiation reaches an agreement, the RCA updates its information; if the acquired capacity is exactly the required one it quits, otherwise it computes a new proposal for the residual capacity it needs.

The MA performs the coordination activities between the COAs and the RCAs. In particular:

• Wait: the MA is in its initial state of waiting for a proposal (from the RCA).
• Computes ranking list: the MA computes a ranking list among all the plants that requested capacity. How it does so depends on several variables; in this research the ranking favors first the plants with the highest need of capacity, allowing them to better satisfy the customers' requests.
• Transmits proposal: the MA transmits the proposal computed by the RCA to the ranking list of COAs.
• Wait: the MA waits for the counter-proposals from all the COAs.
• Transmits counter-proposal: the MA transmits the counter-proposal of the COA to the RCA.

After having updated all the necessary values, a generic ith plant that has not obtained the entire capacity it needs is again inserted in the ranking list, and the negotiation starts again. To avoid a deadlock, an agent that does not reach any agreement by the end of the negotiation process is removed from the ranking list.
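The round dynamics of expressions (1)-(4) can be captured in a few lines of code. The sketch below is a minimal illustration; the class and method names are hypothetical, and the real agents exchange these values through the MA rather than computing them side by side.

```java
// Sketch of the round-based price dynamics of expressions (1)-(4).
// price and cost refer to a given product k; m = M_ij^k = min(RC_i^k, OC_j^k).
class CapacityNegotiation {
    final double priceJ, costJ;   // COA's market price and production cost
    final double priceI, costI;   // RCA's market price and production cost
    final double m;               // exchanged capacity M_ij^k
    final int rMax;               // maximum number of rounds (must be >= 2)

    CapacityNegotiation(double priceJ, double costJ,
                        double priceI, double costI, double m, int rMax) {
        this.priceJ = priceJ; this.costJ = costJ;
        this.priceI = priceI; this.costI = costI;
        this.m = m; this.rMax = rMax;
    }

    // Expression (1): COA threshold, starting at the market price and
    // decreasing toward the production cost as the rounds advance.
    double coaThreshold(int r) {
        return (priceJ - (priceJ - costJ) * (r - 1.0) / (rMax - 1.0)) * m;
    }

    // Expression (4): RCA offer, starting at the production cost and
    // increasing toward the market price as the rounds advance.
    double rcaOffer(int r) {
        return (priceI - (priceI - costI) * (rMax - (double) r) / (rMax - 1.0)) * m;
    }

    // Expression (3): an agreement is reached at the first round where
    // the RCA offer meets or exceeds the COA threshold.
    int firstAgreementRound() {
        for (int r = 1; r <= rMax; r++) {
            if (rcaOffer(r) >= coaThreshold(r)) return r;
        }
        return -1; // negotiation fails
    }
}
```

In this simplified sketch, an agreement always exists by round $r_{max}$ whenever the RCA's market price is at least the COA's production cost, and it is reached earlier the wider the margin between the two.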

Game Theory

In this coordination mechanism, all agents communicate to each other the capacities they need (or offer) and the price they are willing to pay (or accept) for the exchange. Properly, the situation can be formalized as an assignment game. The outline is a quadruple $AG^t = (OG^t, UG^t, Val^t, O^t)$, where:

• $OG^t$ and $UG^t$ are the already defined vectors at the generic time t;
• $Val^t$ is a vector whose element $val_j^{k,t}$ represents the monetary value (val) that the jth plant, belonging to $UG^t$, assigns to its own residual capacity in order to sell it. This value is calculated by (5):

$$val_j^{k,t} = cost_j^k \cdot M_{ij}^k \quad (5)$$

computed as the value below which the jth plant has no convenience in exchanging capacity;

• $O^t$ is a matrix whose generic element $O_{ij}^{k,t}$ is the price the plant $i \in OG^t$ offers for the capacity offered by each player $j \in UG^t$, using Formula (6):

$$O_{ij}^{k,t} = price_i^k \cdot M_{ij}^k \quad (6)$$

computed as the maximum value above which the ith plant has no convenience in exchanging capacity.

Generally speaking, a subset of players is called a coalition. A function v assigning a value to every possible coalition S, with v(∅) = 0, is called a characteristic function. The value v(S) is interpreted as the maximum total profit that coalition S can obtain through cooperation. Assuming that the benefit of a coalition S can be transferred between the players of S, the situation is called a cooperative game with transferable utility (TU-game). In reality, the players are not primarily interested in the benefit of a coalition but in the individual benefits that they make out of that coalition. A division is a payoff vector specifying each player's benefit. A division has to be efficient and individually rational: efficiency is achieved when some specific criterion is maximized and no allocation of utility could yield a higher value according to that criterion, while individual rationality means that every player gets at least as much utility as it could obtain by staying alone. The set of all individually rational and efficient divisions, the core of the game, is represented in Figure 2. In words, the core is the set of imputations under which no coalition has a value greater than the sum of its members' payoffs; therefore, no coalition has an incentive to leave the grand coalition and receive a larger payoff.
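For reference, the core just described can also be stated formally. The following is the standard textbook definition, written here in the chapter's notation with the grand coalition denoted $\Gamma^t$; it is added for clarity and is not spelled out in the original text:

$$\mathrm{Core}(v) = \left\{ x \in \mathbb{R}^{|\Gamma^t|} \;:\; \sum_{i \in \Gamma^t} x_i = v(\Gamma^t), \;\; \sum_{i \in S} x_i \ge v(S) \;\; \forall S \subseteq \Gamma^t \right\}$$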


Figure 2. The bargaining set

Specifically, for the proposed model a transferable utility game is associated to the quadruple $AG^t$, in which the set of players is given by $\Gamma^t = OG^t \cup UG^t$ and the characteristic function is defined, given that plants i and j make a coalition, as follows:

$$v^t(i,j) = h_{ij}^t = \begin{cases} O_{ij}^{k,t} - val_j^{k,t}, & \text{if } O_{ij}^{k,t} - val_j^{k,t} > 0 \\ 0, & \text{if } O_{ij}^{k,t} - val_j^{k,t} \le 0 \end{cases} \quad (7)$$

The set of $h_{ij}^t$ defines the following assignment problem:

$$\max \sum_{i=1}^{N} \sum_{j=1}^{M} h_{ij}^t \cdot z_{ij}, \quad (8)$$

with

$$z_{ij} = \begin{cases} 1, & \text{if } i \text{ and } j \text{ make a coalition} \\ 0, & \text{otherwise} \end{cases} \quad (9)$$

subject to the following constraints:

$$\sum_{i=1}^{N} z_{ij} = 1; \quad \sum_{j=1}^{M} z_{ij} = 1, \quad \text{if } N = M \quad (10a)$$

$$\sum_{j=1}^{M} z_{ij} = 1; \quad \sum_{i=1}^{N} z_{ij} \le 1, \quad \text{if } N < M \quad (10b)$$

$$\sum_{j=1}^{M} z_{ij} \le 1; \quad \sum_{i=1}^{N} z_{ij} = 1, \quad \text{if } N > M \quad (10c)$$

After having created the assignment among plants, it is necessary to consider how to subdivide the generated surplus. To do this, an allocation belonging to the core of the game is computed. According to Owen's theorem (Owen, 1975), a core allocation in a linear assignment game can be obtained starting from an optimal solution of the dual program:

$$\min \left( \sum_{i=1}^{N} y_i + \sum_{j=1}^{M} y_j \right), \quad (11)$$

subject to:

$$y_i + y_j \ge h_{ij}^t \quad (12)$$

The residual capacities are then updated for each plant involved in the game, and the algorithm starts again (t = t + 1), formulating a new assignment game, until one of the two following conditions, no more workload excess (13) or no more extra capacity (14), is verified:

$$\sum_{i=1}^{N} RC_i^t = 0 \quad (13)$$

or

$$\sum_{j=1}^{M} OC_j^t = 0. \quad (14)$$

SIMULATION ENVIRONMENT

A distributed simulation environment based on the proposed multi-agent architecture has been developed to simulate the electronic co-opetitive network represented in Figure 3. It consists of a simulation environment, developed using the Java development kit, able to test the functionality of the proposed approaches and to understand their related advantages and/or limits.

Figure 3. The electronic co-opetitive network

The modeling formalism adopted here is a collection of independent agents interacting via messages, a formalism quite suitable for MAS development. In particular, each object represents an agent and the system evolves through a message-sending engine managed by a discrete event scheduler. Specifically, the following objects have been developed: the COA, the RCA and the MA (described in depth in the previous section), plus the Scheduler, Model and Statistical agents. The Scheduler agent is in charge of the system evolution, managing the discrete events of the simulation engine. The Model agent, instead, is in charge of the agents' interaction. Finally, the Statistical agent collects the output data at the end of each simulation run and generates reports and statistical analyses. A proper interface has been developed to connect the MA with a Lingo solver in order to solve the mathematical models related to the game theory approach.
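The message-driven, discrete-event engine just described can be reduced to a few classes. The sketch below is an illustrative simplification, not the chapter's code: real equivalents of the COA, RCA, MA, Model and Statistical agents would be registered as SimAgent implementations, and all names here are invented.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.PriorityQueue;

// One message in the simulated network, delivered at a given simulated time.
class Message implements Comparable<Message> {
    final double time;
    final String to;
    final Object payload;
    Message(double time, String to, Object payload) {
        this.time = time; this.to = to; this.payload = payload;
    }
    public int compareTo(Message o) { return Double.compare(time, o.time); }
}

interface SimAgent {
    String name();
    void handle(Message msg, Scheduler scheduler); // may post new messages
}

// Discrete-event engine: delivers messages in time order until none remain,
// mirroring the Scheduler agent's role in the chapter's environment.
class Scheduler {
    private final PriorityQueue<Message> queue = new PriorityQueue<>();
    private final Map<String, SimAgent> agents = new HashMap<>();
    private double clock = 0;

    void register(SimAgent a) { agents.put(a.name(), a); }
    void post(Message m) { queue.add(m); }
    double now() { return clock; }

    void run() {
        while (!queue.isEmpty()) {
            Message m = queue.poll();
            clock = m.time;                     // advance simulated time
            SimAgent target = agents.get(m.to);
            if (target != null) target.handle(m, this);
        }
    }
}
```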

The simulated network runs for 12 periods; in each of them, the market price of the kth product for the generic pth plant ($price_p^k$) and the customer demand $R_p^k$ are randomly generated using the distributions reported in Table 2. These parameters are external to the plants and depend only on the market conditions in which each plant operates. The "internal" parameters ($C_p^k$, the plant capacity, and $cost_p^k$, the production cost of the kth product for the generic pth plant), instead, are randomly generated at the beginning of each period. Because statistical distributions are used for all parameters, each experiment class has been replicated in order to achieve a confidence degree of 95% with a 5% confidence interval for each performance index (see Table 3). In order to test the proposed coordination strategies under different network conditions, two levels of variability for each parameter have been considered: low (10% variance), which leads to a homogeneous network (i.e., plants and markets with similar characteristics), and high (50% variance), which leads to an inhomogeneous network (plants and markets with different characteristics). Thus, combining the different levels of all four parameters, 16 simulation classes of experiments have been obtained (see Table 1).

Table 1. Experimental classes

Experiment classes | Mark-up ($cost_p^k$) | Plant capacity ($C_p^k$) | Customer demand ($R_p^k$) | Price ($price_p^k$)
1 | L | L | L | L
2 | L | L | L | H
3 | L | L | H | H
4 | L | L | H | L
5 | L | H | L | L
6 | L | H | L | H
7 | L | H | H | H
8 | L | H | H | L
9 | H | L | L | L
10 | H | L | L | H
11 | H | L | H | H
12 | H | L | H | L
13 | H | H | L | L
14 | H | H | L | H
15 | H | H | H | H
16 | H | H | H | L

Table 2. Input parameters (distributions)

Parameter | low | high
Mark-up ($cost_p^k$) | Uniform[0.9,1) | Uniform[0.7,1)
Plant capacity ($C_p^k$) | N(100,10%) | N(100,50%)
Customers demand ($R_p^k$) | N(100,10%) | N(100,50%)
Price ($price_p^k$) | N(10,10%) | N(10,50%)

The following performance measures have been considered to compare the proposed coordination strategies:

• the total profit (TP) reached by the whole network: it has been computed as the sum of the single profits generated by all the plants of the network;
• the total unsatisfied demand (TUD): the difference between the quantity of products required by the market and the quantity the network has been able to satisfy. It can be considered a customer-oriented performance measure;
• the total unutilised capacity (TUC): the difference between the whole capacity of the network and the allocated capacity;
• the profit distribution index (PDI): this value measures the capability of the proposed coordination mechanisms to distribute the whole profit among the plants of the network. To compute it, it is first necessary to evaluate the average profit of the network, as reported in (15):

$$\overline{profit} = \frac{1}{P} \sum_{p=1}^{P} profit_p \quad (15)$$

P being the number of plants and $profit_p$ the profit reached by the generic pth plant. Then the index is calculated as:


$$PDI = \sum_{p=1}^{P} \frac{\left| profit_p - \overline{profit} \right|}{\overline{profit}} \quad (16)$$

Table 3. Average results over all experimental classes

Performance | no network vs negotiation | negotiation vs game theory
TP | 17.02% | 14.89%
TUD | -44.62% | 11.29%
TUC | -43.87% | 10.56%
PDI | -55.78% | -61.59%
NAL | - | -31.18%
DAL | - | -85.28%
TV | - | 47.09%
DOT | - | -94.50%

• the number of activated links (NAL) among plants: this index expresses how complicated the logistic management of the network may become;
• the transactions value (TV): the quantity of money exchanged among plants, which depends on the particular coordination mechanism;
• the distribution of the activated links (DAL): this index points out whether preferential links are established. It is computed as follows. First, the average number of links is computed, as reported in (17):

$$\overline{links} = \frac{1}{\frac{1}{2}P(P-1)} \sum_{p=1}^{P} \sum_{w>p} link_{wp} \quad (17)$$

$\frac{1}{2}P(P-1)$ being the total number of links that can potentially be activated among P plants. The variable $link_{wp}$ is binary: it is equal to one when the link between plants w and p is activated, and null otherwise. The unbalance index is then computed as in (18):

$$DAL = \sum_{p=1}^{P} \sum_{w>p} \frac{\left| link_{wp} - \overline{links} \right|}{\overline{links}} \quad (18)$$

• the distribution of transactions among plants (DOT): this index extends the evaluation of preferential links by considering the amount of money exchanged. It is computed by first evaluating (19):

$$\overline{transactions} = \frac{1}{\frac{1}{2}P(P-1)} \sum_{p=1}^{P} \sum_{w>p} transactions_{wp} \quad (19)$$

$transactions_{wp}$ being the amount of money exchanged between the wth and pth plants. The index is then computed as in (20):

$$DOT = \sum_{p=1}^{P} \sum_{w>p} \frac{\left| transactions_{wp} - \overline{transactions} \right|}{\overline{transactions}} \quad (20)$$
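The three unbalance measures (16), (18) and (20) share one structure: a sum of absolute relative deviations from a mean (the average profit of (15), the average links of (17), or the average transactions of (19)). A single helper can therefore compute any of them. The sketch below assumes the absolute values shown in the reconstructed formulas and uses invented names.

```java
// Shared form of the unbalance indices (16), (18) and (20):
// sum over items of |x - mean(x)| / mean(x). For PDI, x holds one
// profit per plant; for DAL and DOT, x holds one value per unordered
// plant pair (link indicator or transaction amount), P(P-1)/2 in all.
class UnbalanceIndex {
    static double of(double[] x) {
        double mean = 0;
        for (double v : x) mean += v;
        mean /= x.length;          // assumed non-zero, as in (15), (17), (19)
        double index = 0;
        for (double v : x) index += Math.abs(v - mean) / mean;
        return index;              // PDI, DAL or DOT depending on x
    }
}
```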

• •

RESULTS All the simulations have been conducted with a network of six plants. Each index –except the mark-up that comes up from an uniform distribution- is considered randomly drawn from a normal distributions N(µ;σ) whose parameters are reported in Table 2 (low and high are referred to the standard deviations size): The difference among the standard deviations of the statistical input is done to easily highlight the difference between the coordination approaches in both homogenous (low level) and inhomogeneous (high level) network. The simulation results (the



• coordination among plants is a real added value, both for the plants belonging to the network and for the customers. Indeed, comparing the performances with the case in which no relations exist among plants, TP increases and, at the same time, TUC decreases;
• the negotiation approach leads to better performances in terms of TUD and TUC;
• game theory coordination leads to a better TP performance for the network;
• game theory reduces the number of activated links (NAL): this coordination approach thus simplifies the logistic management among the plants involved in the network. At the same time, the high value of TV highlights that, in spite of this reduction, this coordination mechanism increases the amount of each single transaction;
• considering the unbalance indexes, PDI and DAL, game theory outperforms the negotiation mechanism: this means a better distribution of the added value of the network and a simpler logistic management.

The most relevant results over all the experimental classes are graphically shown in Figures 4, 5 and 6.


Figure 4. Profit distribution over 16 experimental classes

Figure 5. Unsatisfied demand (TUD) over 16 experimental classes

Figure 6. Transactions distribution (DOT) over 16 experimental classes


Figure 4 shows the amount of TP reached by the network for each experiment. Differences among the configurations become relevant in classes 3, 6, 7, 11, 14 and 15. Since the mark-up variability is low in classes 3, 6 and 7 and high in classes 11, 14 and 15, it is possible to assert that the highlighted differences between negotiation and game theory are not affected by the variability of this parameter. Moreover, the two approaches react comparably when the high variability of the price is combined with a high variability of the demand (classes 3 and 11), of the capacity (classes 6 and 14), or of both (classes 7 and 15).

Figure 5 shows the TUD performance of the network for each experiment class. TUD is very low for the classes with low variability of the input parameters (classes 1, 2, 3 and 4). In the other cases, the two coordination approaches lead to comparable values of this performance, with a slightly lower value for the negotiation approach.

Figure 6 shows the DOT performance of the network. For the experimental classes with low variance of the input parameters (1, 2, 9 and 10), the amount of transactions is very low. For all the others, the value of transactions obtained using game theory is always greater than in the negotiation case.

SUMMARY, CONCLUSION AND FUTURE DEVELOPMENT

The chapter deals with the real added value in an electronic co-opetitive network of plants that can exchange productive capacity among themselves. The results of this research can be located at two levels. Concerning the specific plant coordination problem addressed here, the following conclusions can be drawn:

• different coordination strategies lead to different performances for the networked enterprises; therefore, the coordination strategy plays a fundamental role in the goals the network is able to pursue;
• the proposed negotiation mechanism leads to better performance for the customers: the unsatisfied demand is very low. From the network point of view, the profit is lower than in the game theory case and its distribution among plants is less homogeneous. Moreover, the negotiation approach increases the number of activated links, each with a smaller individual monetary transaction; this characteristic makes the management of the network more difficult from the logistic point of view. Finally, the negotiation approach implies the creation of preferential links that could marginalize some plants;
• the proposed game theory approach leads to better performances from the networked plants' point of view. In particular, this approach outperforms negotiation in terms of generated total profit, uniformity of profit distribution, and homogeneity of the monetary transactions over the activated links. The logistics problem is simpler than in the previous case, because game theory creates fewer links with a higher individual monetary exchange.

At the strategic level, this research shows that:





• Multi-Agent Systems are a suitable approach to implement a distributed architecture for the capacity exchange issue that often characterizes electronic co-opetitive networks of enterprises;
• Discrete event simulation is a powerful tool to test coordination strategies and highlight the real added value of these approaches in an electronic co-opetitive network. This tool reduces the risk related to the ICT investment and to agent-based technology; moreover, it can encourage enterprises to adopt "stay together" approaches for real business applications.

Obviously, the network is not a permanent fixture, but something that is either being formed or might change. From this point of view, and generally speaking, the allocation of value in a given network can and should depend on the value that might accrue to alternative potential networks. In particular, evaluating the contribution of a given link to value depends on the contribution of that link to various networks and, perhaps more importantly, on the extent to which other links might serve as substitutes. To understand why this issue arises, it is important to recognize that larger networks might have higher costs associated with them than some smaller networks. For instance, the value generated by the complete network might be much less than the value generated by some sub-network. This means that, in general, the most efficient network (in a value-maximizing sense) might not be the complete network. This introduces important considerations on the allocation of value to be addressed in future developments of this work.

REFERENCES

Afuah, A. (2000). How much do your coopetitors' capabilities matter in the face of technological change? Strategic Management Journal, 21(3), 387–404. doi:10.1002/(SICI)1097-0266(200003)21:33.0.CO;2-1

Ahuja, G. (2000). The duality of collaboration: inducements and opportunities in the formation of interfirm linkages. Strategic Management Journal, 21(3), 317–343. doi:10.1002/(SICI)1097-0266(200003)21:33.0.CO;2-B


Anand, B., & Khanna, T. (2000). Do firms learn to create value? The case of alliances. Strategic Management Journal, 21(3), 295–315. doi:10.1002/(SICI)1097-0266(200003)21:33.0.CO;2-O

Ardichvili, A., Cardozo, R., & Ray, S. (2003). A theory of entrepreneurial opportunity identification and development. Journal of Business Venturing, 18(1), 105–123. doi:10.1016/S0883-9026(01)00068-4

Bass, F. M., Krishnamoorthy, A., Prasad, A., & Sethi, S. P. (2005). Generic and Brand Advertising Strategies in a Dynamic Duopoly. Marketing Science, 24(4), 556–568. doi:10.1287/mksc.1050.0119

Baum, J., Calabrese, T., & Silverman, B. (2000). Don't go it alone: alliance network composition and startups' performance in Canadian biotechnology. Strategic Management Journal, 21(3), 267–294. doi:10.1002/(SICI)1097-0266(200003)21:33.0.CO;2-8

Bengtsson, M., & Kock, S. (1999). Cooperation and competition in relationship between competitors in business network. Journal of Business and Industrial Marketing, 14(3), 178–193. doi:10.1108/08858629910272184

Bengtsson, M., & Kock, S. (2000). Coopetition in business networks to cooperate and compete simultaneously. Industrial Marketing Management, 29(5), 411–426. doi:10.1016/S0019-8501(99)00067-X

Boari, C., & Lipparini, A. (2000). Networks within industrial districts: organising knowledge creation and transfer by means of moderate hierarchies. Journal of Management and Governance, 99(3), 339–360.


Brandenburger, A., & Nalebuff, B. (1996). Co-Opetition: A Revolution Mindset That Combines Competition and Cooperation. New York: Harper Collins Business.

Brandenburger, A. M., & Nalebuff, B. J. (1996). Co-opetition. New York: Doubleday Dell Publishing Group.

Chen, J. C., Wang, K. J., Wang, S. M., & Jang, S. J. (2008). Price negotiation for capacity sharing in two-factory environment using genetic algorithm. International Journal of Production Research, 46(7), 1847–1868. doi:10.1080/00207540601008440

Durfee, E. H., & Lesser, V. (1989). Negotiating task decomposition and allocation using partial global planning. In Gasser, L., & Huhns, M. (Eds.), Distributed artificial intelligence (Vol. 2, pp. 229–244). San Francisco, CA: Morgan Kaufmann.

Freel, M. (2000). External linkages and product innovation in small manufacturing firms. Entrepreneurship and Regional Development, 12, 245–266. doi:10.1080/089856200413482

Galaskiewicz, J., & Zaheer, A. (1999). Networks of competitive advantage. In Andrews, S., & Knoke, D. (Eds.), Research in the Sociology of Organizations (pp. 237–261). Greenwich, CT: JAI Press.

Christie, R. M. E., & Wu, S. D. (2002). Semiconductor capacity planning: Stochastic modeling and computational studies. IIE Transactions, 34(2), 131–144. doi:10.1080/07408170208928856

Garraffo, F. (2002). Types of Coopetition to Manage Emerging Technologies. EURAM Second Annual Conference – “Innovative research in Management”, Stockholm, 9-11 May.

Cooper, A. (2002). Networks, alliances and entrepreneurship. In Hitt, M., Ireland, D., Camp, M., & Sexton, D. (Eds.), Strategic Entrepreneurship (pp. 203–222). Oxford, UK: Blackwell.

Granovetter, M. (1985). Economic action and social structure: A theory of embeddedness. American Journal of Sociology, 91, 481–510. doi:10.1086/228311

D’Aspremont, C., & Jacquemin, A. (1988). Cooperative and Noncooperative R&D in Duopoly with Spillovers. The American Economic Review, 78, 1133–1137.

Gulati, R. (1998). Alliances and networks. Strategic Management Journal, 19(4), 293–317. doi:10.1002/(SICI)1097-0266(199804)19:43.0.CO;2-M

Dagnino, G. B., & Padula, G. (2002). Coopetition Strategy. A New Kind of Interfirm Dynamics for Value Creation. EURAM Second Annual Conference – “Innovative research in Management”, Stockholm, 9-11 May.

Gulati, R., Nohria, N., & Zaheer, A. (2000). Strategic networks. Strategic Management Journal, 21(3), 203–215. doi:10.1002/ (SICI)1097-0266(200003)21:33.0.CO;2-K

Dollinger, M. (1985). Environmental contacts and financial performance of the small firm. Journal of Small Business Management, 23(1), 24–30.

Gurnani, H., Erkoc, M., & Luo, Y. (2007). Impact of Product Pricing and Timing of Investment Decisions on Supply Chain Co-opetition. European Journal of Operational Research, 180, 228–248. doi:10.1016/j.ejor.2006.02.047

Doz, Y., Olk, P., & Ring, P. (2000). Formation processes of R&D consortia: which path to take? Where does it lead? Strategic Management Journal, 21(3), 239–266. doi:10.1002/ (SICI)1097-0266(200003)21:33.0.CO;2-K

Huxham, C., & Vangen, S. (2005). Managing to collaborate. Theory and practice of collaborative advantages. London, New York: Routledge.


Ip, W. H., Li, Y., Man, K. F., & Tang, K. S. (2000). Multi-product planning and scheduling using genetic algorithm approach. Computers & Industrial Engineering, 38, 283–296. doi:10.1016/S0360-8352(00)00044-9

Isariyawongse, K., Kudo, Y., & Tremblay, V. J. (2007). Generic and Brand Advertising in Markets with Product Differentiation. Journal of Agricultural and Food Industrial Organization, 5(1), Article 6.

Jarillo, C. (1989). Entrepreneurship and growth: the strategic use of external sources. Journal of Business Venturing, 4, 133–147. doi:10.1016/0883-9026(89)90027-X

Kogut, B. (2000). The network as knowledge: generative rules and the emergence of structure. Strategic Management Journal, 21(3), 405–425. doi:10.1002/(SICI)1097-0266(200003)21:33.0.CO;2-5

Kotzab, H., & Teller, C. (2003). Value-adding partnerships and coopetition models in the grocery industry. International Journal of Physical Distribution & Logistics Management, 23(3), 268–281. doi:10.1108/09600030310472005

Krishnamurthy, S. (2000). Enlarging the Pie vs Increasing One's Slice: an Analysis of the Relationship between Generic and Brand Advertising. Marketing Letters, 11, 37–38. doi:10.1023/A:1008146709712

Lechner, C., & Dowling, M. (2000). The evolution of industrial districts and regional networks: the case of the biotechnology region Munich/Martinsried. Journal of Management and Governance, 99(3), 309–338.

Lechner, C., & Dowling, M. (2003). Firm networks: external relationships as sources for the growth and competitiveness of entrepreneurial firms. Entrepreneurship and Regional Development, 15, 1–26. doi:10.1080/08985620210159220


Luo, Y. (2004). A coopetition perspective of MNC-host government relations. Journal of International Management, 10(4), 431–451. doi:10.1016/j.intman.2004.08.004

Luo, Y. (2005). Toward coopetition within a multinational enterprise: A perspective from foreign subsidiaries. Journal of World Business, 40(1), 71–90. doi:10.1016/j.jwb.2004.10.006

Luo, Y. (2007). A Coopetition Perspective of Global Competition. Journal of World Business, 42, 129–144. doi:10.1016/j.jwb.2006.08.007

Meder, K. (2005). Case Study SupplyOn: an e-Marketplace from Suppliers for Suppliers. E-Business W@tch.

Oliver, A. L. (2004). On the duality of competition and collaboration: Network-based knowledge relations in the biotechnology industry. Scandinavian Journal of Management, 20(1-2), 151–171. doi:10.1016/j.scaman.2004.06.002

Owen, G. (1975). On the core of linear production games. Mathematical Programming, 9, 358–370. doi:10.1007/BF01681356

Quintana-Garcia, C., & Benavides-Velasco, C. A. (2004). Cooperation, Competition, and Innovative Capability: a Panel Data of European Dedicated Biotechnology Firms. Technovation, 24, 927–938. doi:10.1016/S0166-4972(03)00060-9

Renna, P., & Argoneto, P. (2008). Capacity Allocation in Multi-Site Factory Environment: a Multi Agent Systems Approach. Intelligent Computation in Manufacturing Engineering: Innovative and Cognitive Production Technology and Systems, 23–25 July, Naples, Italy.

Rowley, T., Behrens, D., & Krackhardt, D. (2000). Redundant governance structures: an analysis of structural and relational embeddedness in the steel and semiconductor industries. Strategic Management Journal, 21(3), 369–386. doi:10.1002/(SICI)1097-0266(200003)21:33.0.CO;2-M


Shane, S., & Venkataraman, S. (2000). The promise of entrepreneurship as a field of research. Academy of Management Review, 25, 217–226. doi:10.2307/259271

Billand, P., & Bravard, C. (2004). Non-cooperative networks in oligopolies. International Journal of Industrial Organization, 22, 593–609. doi:10.1016/j.ijindorg.2004.01.004

Suetens, S. (2005). Cooperative and Noncooperative R&D in Experimental Duopoly Markets. International Journal of Industrial Organization, 18, 63–82. doi:10.1016/j.ijindorg.2004.11.004

Chatterjee, P. (2002). Interfirm alliances in online retailing. Journal of Business Research, 57, 714–723. doi:10.1016/S0148-2963(02)00362-4

Tonshoff, H.K., Seilonen, L., Teunis, G. and Leitao, P. (2000). A mediator-based approach for decentralized production planning, scheduling and monitoring. Report of the Espirit program EP 24986.

Lechner, C., Dowling, M., & Welpe, I. (2006). Firm networks and firm development: The role of the relational mix. Journal of Business Venturing, 21, 514–540. doi:10.1016/j.jbusvent.2005.02.004

Roma, P. (2009). Models of Co-opetition in group-buying and advertising. PhD Thesis, University of Palermo, Italy.

ADDITIONAL READING

Belussi, F., & Arcangeli, F. (1998). A typology of networks: flexible and evolutionary firms. Research Policy, 27, 415–428. doi:10.1016/S0048-7333(98)00074-2

This work was previously published in Electronic Supply Network Coordination in Intelligent and Dynamic Environments: Modeling and Implementation, edited by Iraj Mahdavi, Shima Mohebbi and Namjae Cho, pp. 291-318, copyright 2011 by Business Science Reference (an imprint of IGI Global).


Chapter 63

Evaluation of Remote Interface Component Alternatives for Teaching Tele-Robotic Operation

Goldstain Ofir, Tel-Aviv University, Israel
Ben-Gal Irad, Tel-Aviv University, Israel
Bukchin Yossi, Tel-Aviv University, Israel

ABSTRACT

Internet developments, as well as the increase in PCs' capabilities and bandwidth capacity, have made remote learning through the internet a convenient and practical learning preference, leading to a variety of new learning interfaces and methods. This chapter discusses a remote learning study conducted at the Computer-Integrated-Manufacturing (CIM) Laboratory at Tel-Aviv University. The goal is to provide remote end-users with an interface that enables them to teleoperate a robotic arm in conditions as close as possible to hands-on operation in the laboratory. The study evaluates the contribution of different interface components to the overall performance and the learning ability of potential end-users. Based on predefined experimental tasks, the study compares alternative interface designs for teleoperation. The three performance measures of the robot operation task are (1) the number of steps required to complete the given task, (2) the number of errors during the execution stage, and (3) the improvement rate of the users. Guidelines for a better design of remote learning interfaces in robotics are provided based on the experimental results.

DOI: 10.4018/978-1-4666-1945-6.ch063

Copyright © 2013, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.


INTRODUCTION

This chapter focuses on the design of an interface for remote learning of robotic operations. The interface design, which is supported by technical guidelines, is general and applicable to a wide variety of tools for teaching tele-robotic operation. It differs from previous research in the field, which often focuses on a specific applicative interface. The proposed interface combines aspects of remote manipulation of robots with aspects of remote learning. The motivation for such integration is to enable users to practice not only the remote activation of a robotic cell, but also the learning, redesign and optimization of the work plan in the cell. The chapter starts by considering three possible design schemes: a "Home-based," a "Lab-based" and a "Website-based" scheme. It identifies the different interface components that support remote telerobotic learning. It then measures and evaluates the interactions among these components, as well as their effects and usability within a proposed remote learning interface. Such an evaluation is conducted by running a set of experiments that require the users to execute specific robotic tasks from a remote location while their performance is examined over various interface settings. The performance of the remote users is also compared with hands-on operation, which is used as a benchmark setting. The evaluation tool of the web-based interface for telerobotic learning is called the Test-Oriented-Interface (TOI). As the chapter unfolds, elements within this interface are evaluated, focusing on their contribution to the remote learning assignments. A full set of guidelines for designing a remote learning interface is extracted from the evaluation of the TOI. The objective of these guidelines is to maximize the benefits obtained from the interface for the users (e.g., students) as well as for the hosting institute (e.g., a university). Finally, we present how the new web interface for remote learning of robotic operations is implemented and fully operated in the CIM laboratory.

BACKGROUND

Remote Learning Interfaces for Robotics

Remote control and manipulation of robots has been used to perform predetermined tasks, often in hostile, unsafe, inaccessible or remote environments (Siegwart and Saucy, 1999; Bukchin et al., 2002). NASA, for example, keeps track of and provides free access to active telerobotic systems through the NASA Telerobotics Web page. The architecture of a WWW-based system for remote telerobotic operation was presented in 1999 by Belousov et al. (1999). Their system was mainly oriented toward reliability and efficiency and was based on a 3D Java visualization tool that overcame the bandwidth restrictions that existed at the time. Belousov et al. (2001) presented a similar architecture with the addition of a tool that supports remote programming of the robot. Among the many variations of teleoperation systems, one can mention Wang and Liu (2004) with a teleoperation paradigm for human-robot interaction, Hu et al. (2001) with a system for remotely controlling a robot with visual feedback over a simulated map, Kofman et al. (2005) with a hand-arm-gesture method for teleoperation, Hu et al. (2001) with pioneering work on networked telerobotic systems for tele-training, and Siegwart and Saucy (1999) with a modular framework for mobile robots on the web, among many other applications. New integration protocols were used to combine 3D simulation tools with remote control and manipulation interfaces, enabling the management of complicated tasks in flexible robotic cells. Candelas et al. (2005) present a system focused on training in the kinematics and trajectory design of robotic arms. Their work was among the first to use a learning platform with full interactivity in the teleoperation process. Michau et al. (2001) present in detail the expected benefits of web-based learning for engineers. They express the need for remote learning tools within virtual laboratories, stating that although simulations cannot replace real experiments, remote laboratories provide new ways of practicing hands-on experiments (Gravier et al., 2008). Integrating simulations within real implementation activities is considered a necessity in contemporary engineering education (Khan and Al-Kahtani, 2002; Fernandes and Martins, 2001). An example of such integration is found in Calkin et al. (1998), with visualization, simulation and control of a robotic system relying on internet technology. This virtual learning mechanism was later referred to by Goldstain et al. (2007) as the "Home-based" design scheme. Similarly, Puente et al. (2000) use simulations as a learning tool when suggesting a general system architecture. Yang et al. (2004) introduce an internet-based teleoperation scheme for a robot manipulator for educational purposes. Their system integrates a virtual off-line simulation with an actual teleoperation module, including visual feedback. In their conclusion they suggest the development of a more general control system, which was later presented in Goldstain et al. (2007) as the "Web-based" design scheme. Enrique Sucar et al. (2005) consider virtual modules a primary tool in teaching robotics. In their work, a virtual laboratory based on simulation was developed and assessed, but without evaluating the required interface components as done in this chapter. A modern, fully developed interface for remote learning and programming of a robot arm was also presented by Marin et al. (2005).

Remote Interfaces Design

Siegwart and Saucy (1999) describe the basic specifications and major difficulties in designing an interactive platform for remote learning of robotic systems. Their suggested modules include a video feedback module, a robot guidance module, and a virtual representation module. Enabling a user to learn and optimize a work plan, in addition to remotely operating given robotic tasks, requires more than basic manipulation tools for remote control (Kahn et al., 2002). A three-dimensional (3D) simulation tool is one of the most popular tools when dealing with "on-site" learning (Candelas et al., 2005; Enrique Sucar et al., 2005). Candelas et al. (2005), Tzafestas et al. (2006), Marin et al. (2005), and others offered different variations of both off-line and online 3D simulation tools. More advanced simulation tools, like the one used in Goldstain et al. (2007), provide another important feature for the learning process: the ability to create and record a program for the simulated system and then implement it to run the physical system itself. The main feedback for remote operation of a robotic cell is often considered to be visual feedback (Hu et al., 2001; Eliav et al., 2005; Aktan et al., 1996). Unlike local settings, the remote setting often relies on various visual sensors to provide visual feedback (Bukchin et al., 2002; Bochiccio and Longo, 2009). In virtual laboratories in particular, this feedback is gained through a 3D model, as implemented in Belousov et al. (1999) and Belousov et al. (2001) using Java. Another, more advanced feedback for robotics is presented in Tzafestas et al. (2006) and Goldstain et al. (2007). These works consider positional feedback, providing the user with valuable information regarding the positioning of each axis of the robotic arm. This type of feedback can be used to reconstruct the robot's movement or even to completely re-evaluate the robotic cell layout. Adams (2002) suggests critical considerations for human-robot interface development; his concepts of User Centered Design and Situation Awareness guided the design of the proposed framework. Goldstain et al. (2007) presented a methodology that we used for the design of the experiments presented in the next section. Their methodology is based on the framework presented by Yang et al. (2004) and by Chen et al. (2001), advocating the use of virtual laboratories as an essential learning tool prior to the execution of hands-on experiments. This chapter follows the above-mentioned guidelines and suggestions. It focuses on the analysis of the main components of an interface for remote learning of robotic operations and provides guidelines for an efficient design of such a tool.

Integrating Remote Learning and Teleoperation

The main challenge addressed in this chapter is how to enable multiple remote users (e.g., students) to simultaneously design and optimize the operation of a robotic cell. The users are required to learn, mainly through a "trial-and-error" process, the appropriate way to operate the robotic cell prior to its online activation. This supports cognitive (or mental) learning, as opposed to motoric (or mechanical) learning, which is required when operating a robotic cell on site (Bukchin et al., 2002; Calkin et al., 1998; Cooper et al., 2002). There are several alternative approaches that can be considered for remote teaching of robotic operation (Bukchin et al., 2002; Calkin et al., 1998; Tzafestas et al., 2006). The first that we considered is the Lab-based approach, which uses designated software packages, i.e. Virtual VNC, to enable full remote control of the robotic cell's controller PC, while all feedbacks are received remotely. The Lab-based approach allows users to operate and react as if they were physically located in the laboratory. On the other hand, it requires a considerably broad bandwidth from both the client's and the server's computers; it also lacks the possibility of administrating several users simultaneously, and carries with it many IT security problems. The second to be considered is the Home-based approach. This approach provides the users with a simulation program and an accurately scaled 3D model file of the cell's layout, enabling them to initiate the learning at home without depending on the robot's availability at the lab. In this approach the users are required to simulate and optimize the entire robotic operation on their home computer and then send the final programmed tasks to the robot's controller PC to be executed, while they observe the execution through a video feed (Calkin et al., 1998). The Home-based approach, although safer and cheaper in terms of bandwidth demand, lacks the interactivity that is so essential to support a real "trial-and-error" learning process. Finally, a third approach is investigated by combining the two above-mentioned approaches. It is named the Web-based approach and relies on a specially designed website that is integrated with a 3D simulation program installed on the user's computer. Goldstain et al. (2007) claim that this approach has several significant advantages: it is safer, it enables future developments for other types of lab equipment, it supports administrating multiple users, it controls the system's usage, and it keeps the laboratory's computer invisible to the end users, thus reducing the associated safety problems (Candelas et al., 2005; Chiculita and Frangu, 2002). The methodology used in the web-based design relies on client-server interactions (Chen et al., 1999; Chiculita and Frangu, 2002; Saliah et al., 2000). Such interactivity was found to be essential to provide the users with the information necessary to refine their actions and achieve a more accurate performance of the required task. A survey conducted among 60 users of a beta version of this website, presented in Goldstain et al. (2007), indicated a high level of user satisfaction.

METHODOLOGY

In this section a conceptual methodology is proposed to integrate a remote learning process with teleoperation, based on the web-based approach presented above. The integration of remote learning and teleoperation within the web-based approach is presented in Figure 1.

Figure 1. "Web-based" design scheme

The web-based solution includes the following modules: (1) a user-client PC running a simulation program and a web browser; (2) a website containing video capabilities, manual movement controls and a file-upload protocol; and (3) a web server running the robotic cell controller as well as the simulation program. These combined modules support both learning through "trial-and-error" and a teleoperation mechanism.
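As a concrete, and entirely hypothetical, illustration of module (3), the sketch below uses the JDK's built-in com.sun.net.httpserver to accept an uploaded robot program and queue it for the cell controller. The port, path and queueing logic are invented for illustration; nothing here describes the CIM laboratory's actual server implementation.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Minimal stand-in for module (3): a web server that accepts uploaded
// robot programs and queues them for execution on the cell controller.
public class UploadServer {
    public static void main(String[] args) throws Exception {
        BlockingQueue<byte[]> pendingPrograms = new LinkedBlockingQueue<>();
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/upload", exchange -> {
            try (InputStream in = exchange.getRequestBody()) {
                pendingPrograms.add(in.readAllBytes()); // hand off to controller
            }
            byte[] reply = "queued".getBytes();
            exchange.sendResponseHeaders(200, reply.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(reply);
            }
        });
        server.start(); // a separate thread would feed programs to the robot
    }
}
```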

Basic Modules of Remote Learning of Robotic Operations

Enabling a user to learn, operate and optimize given robotic tasks requires more than a basic manipulation interface for remote control. For example, a 3D simulation tool is one of the most popular tools for dealing with hands-on learning (Candelas et al., 2005; Enrique Sucar et al., 2005). The simulation provides the users with the ability to analyze their performance and to redesign the process up to the required level. Advanced simulation tools provide another important feature to the learning process, namely, the ability to create and record a robotic program and then apply it to actually run the physical system. When using a recorded program, the users can easily implement changes to the system layout or to the sequence of performed actions by using historical knowledge. Both the simulation and the optimization processes are based on the interaction between the human operator and the manipulation interface of the robot itself. Feedback from the system allows the users to change and better design their program, thus supporting the process and improving learning skills and abilities. The basic feedback for remote operation of a robotic cell is visual feedback (Eliav et al., 2005; Aktan et al., 1996; Hu et al., 2001). In remote systems, several visual sensors can be used to provide such feedback, often a closed-loop TV system or a streaming video. The latter provides the operator with somewhat rough knowledge of the work cell's layout and the system state following each phase of the operation; however, it is rarely accurate enough to enable fine tuning of the robotic arm tasks, as it provides the users with a two-dimensional picture of a three-dimensional reality. When considering the visual feedback module, the main objective is to obtain a clear online feed from a few different view angles of the work cell. An issue that remains unsolved is the visual feedback gap between 3D reality and a 2D image (Doulgeri and Matiakis, 2006; Yang et al., 2004). It is found to be almost impossible to operate a robotic task depending only on a single video feed that shows the cell from a single angle.


While investigating the learning process of robotic cell operation, it was found that the 3D view of the cell is a vital ingredient for the success of the learning process. A common way of teaching robotic operation is through 3D simulation programs, enabling the users to learn and experiment with a variety of simulated work cells, some of which differ from the existing work cells in the lab. The next step was to find a way to combine the capabilities offered by the simulation program with basic teleoperation tools, creating not only a remote control interface but an actual remote learning mechanism. Goldstain et al. (2007) present a schematic integration between remote learning and teleoperation, significantly improving the learning process of the users and yielding an efficient use of equipment and resources. This integration enables the user to participate in a remote learning process on top of being able to operate a robotic system. The added value of such integration lies in the ability to apply it to the available equipment and to different lab experiments (Michau et al., 2001; Ammari and Ben-Hadj, 2006).

The Compared Interfaces

Two different design platforms were used throughout the experiment: an INTERNET (web-based) platform based on our Test-Oriented Interface (TOI), as detailed later in the chapter, and a (wired) RoboCell platform, which was operated either remotely or locally for comparison purposes. More specifically, four different interfaces were tested for the evaluation of teleoperation tasks:

• INTERNET: a Test-Oriented Interface operated remotely (based on our methodology);
• LRC: a RoboCell interface operated locally;
• VRC: a RoboCell interface operated remotely with Virtual Real-Time Presentation (VRTP);
• RRC: a RoboCell interface operated remotely without VRTP.

It is important to note that although the LRC is included among the compared interfaces, it is not a remote interface setting. This setting represents the everyday hands-on execution of robot operation locally in the laboratory and is used here as a benchmark for the other settings, enabling evaluation of their proximity to hands-on operation performance. The compared interfaces are described in the next sections.

The Design of Experiments

As not all of the factors could technically be integrated into all the emulated interfaces, a partial factorial design was generated to include the available combinations of factors that could be tested. The partial design is motivated by practical and methodological considerations, as each interface was introduced with slight variations of the components for practical reasons. To avoid analyzing a partial design directly, the evaluation tests were divided into two congruent phases; each phase was evaluated as a full factorial model on its own, and together they covered all the design variations that were technically available. Figure 2 describes the experimental tree for the design phases. The transparent branches represent the excluded parts of the experiment in each phase (Phase 1 excludes the Internet interface, which does not support the Axis-XYZ control parameter, while Phase 2 fixes the control method so as to maintain a balanced hierarchical experiment that includes the Internet interface). The first evaluation phase examines (1) the effect of the control method on the execution and the learning process and (2) the contribution of the VRTP module (integrated in the VRC interface). The second evaluation phase examines (1) the difference between the Robocell interfaces (LRC, RRC, and VRC) and the designed remote Internet interface and (2) the contribution of the preliminary simulation tool in all the remote interfaces (Internet, RRC, VRC).
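As an aid to reading Figure 2, the short sketch below enumerates the factor combinations of the two phases; the factor names follow the text, while the enumeration itself is a hypothetical illustration rather than part of the original study. Note that the union of the two phases yields the 14 valid design permutations mentioned later in the description of the subjects:

```python
from itertools import product

# Phase 1: three Robocell interfaces x two control methods x simulation on/off.
phase1 = set(product(("LRC", "RRC", "VRC"), ("Joints", "Axis-XYZ"), (False, True)))

# Phase 2: all four interfaces, Joints control only, simulation on/off.
phase2 = set(product(("INTERNET", "LRC", "RRC", "VRC"), ("Joints",), (False, True)))

print(len(phase1), len(phase2), len(phase1 | phase2))  # 12 8 14
```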



Figure 2. The two evaluation phases of the experiment

THE EXPERIMENT: INTERFACE COMPONENTS ANALYSIS

This section describes the experimental study. The goal of the experiment is to evaluate the effect of different interface components on the usability of the remote learning interface. This section presents the task, the performance measures, and the main design factors.

Figure 3. An example of a completed worksheet


The Experimental Task

The designed task for the experiment was a simple “Pick-and-Place” task. It was adjusted to suit a remotely operated system in the following way: the users were instructed to manipulate a robotic arm, with a marker attached to its gripper, so as to reach and mark a dot within each of three pre-placed circles. Figure 3 shows an actual result page of an experiment, with the three pre-placed circles and the actual marked points.


As the learning goal of the task is efficient operation of the robot while using a remote interface, the users were instructed to perform the task as efficiently as possible, i.e., in the least number of movements possible and with as few markings as possible outside the designated circles. The circles were placed in exactly the same locations in all variations of the experiment. The starting point was defined as the “homing point” of the robotic arm. Three performance measures were considered in the experiments, as discussed next.

Number of Movements Required to Complete a Leg

In the experiment, a user step (movement) was defined as a single press on one of the controller's buttons. In the examined designs, the movement was recorded for as long as the button was pressed and stopped when the button was released. Obviously, several movements were required to move the robot's arm from point to point. Every single movement was recorded on a data-recording sheet and was associated with one of the legs. A leg was defined as the period between reaching and marking two consecutive circles. Thus, the first leg includes all movements recorded from the homing point until marking the first circle; the second leg was defined between marking the first circle and the second circle, and so on. This measure assessed the overall performance of an operator and provided quantitative input regarding the improvement in performance during the whole task execution.

Number of Errors Recorded

Errors were defined as markings made outside of the designated circles (for example, see the indicated error in Figure 3). Every mark on the worksheet was numbered and recorded, and the number of markings outside the circles was later analyzed. The number of errors provided information regarding the complexity of the task or the settings and helped to evaluate the performance of the various interface designs.

Improvement Measure (Learning Curve)

An improvement measure was evaluated by using the number of movements measured in the legs. The learning rate (termed LR for simplicity) was defined as follows:

LR = (Ntotal − NH-to-1) / Ntotal

where Ntotal is the total number of movements and NH-to-1 is the number of movements in the first leg (from the homing point to the first circle). Dividing the number of movements required for legs two and three by the total number of required movements was used as an approximate criterion to assess the improvement and the learning pace during the execution. The lower this ratio was, the larger the part of the total movements required by the first leg. In such a case, the subsequent two legs required significantly fewer movements, and therefore, one could conclude that the improvement from the first leg to the next ones was greater.
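To make the measure concrete, the following short Python sketch (illustrative only; the function and sample counts are not from the original study) computes LR from per-leg movement counts:

```python
# A minimal sketch of the improvement measure LR = (N_total - N_H_to_1) / N_total,
# i.e., the share of all movements spent on legs two and three.
# The sample counts below are invented for illustration.
def learning_rate(steps_per_leg):
    """steps_per_leg: [leg 1, leg 2, leg 3] movement counts."""
    n_total = sum(steps_per_leg)
    n_first = steps_per_leg[0]          # homing point -> first circle
    return (n_total - n_first) / n_total

# 30 movements on the first leg, then 8 and 6 on the later legs:
print(learning_rate([30, 8, 6]))  # ~0.32 -- a low ratio indicates strong improvement
```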

Evaluated Design Factors

Three components were defined as the main design factors and evaluated through a set of experiments. The design factors were (i) the use of a simulation tool prior to the task's execution; (ii) the use of VRTP during the robotic task's execution; and (iii) the type of control method for the robotic arm. Each of these factors is presented next.



Preliminary Simulation

The 3-D simulation tool used is based on the cell-setup module of the Robocell© software. This tool allows the operator to experience and learn in advance (i.e., in an off-line mode) the robotic system and its environment. It provides the operator with a virtual working cell, similar to the one at the actual site, viewable from all directions and angles. It integrates the SCORBASE© robotic control software with interactive three-dimensional solid modeling simulation software and replicates the actual dimensions and functions of the real equipment, providing users with a fully simulated robotic learning environment and a graphic tracking view of the robot's operations. This preliminary simulation tool was available in all the considered interfaces. It was expected to improve the execution of the task and to shorten the learning period required during the online operation of the robotic cell. For analysis purposes, the different interface settings were examined both with and without the use of the preliminary simulation. To support a transfer of the practice from the simulation to the real operation, the preliminary simulation's manual control module was designed to be as similar as possible to the tele-robotic control interfaces.

Virtual Real-Time Presentation (VRTP)

In certain settings, the 3-D simulation tool can be operated not only in advance but also during the online execution of a task. When operated in an online mode, we refer to it as VRTP. The VRTP tool provides the operator with an extra view of the working area, including the possibility of changing the viewpoint's direction and orientation during the actual execution. While this tool seems unnecessary when operating the robotic cell locally via eye contact, it may provide valuable information for a remote operator. The reason is that the remote operator receives 3D information on a 2D media channel, which may cause a misinterpretation or misperception of the environment. Unfortunately, due to its technical complexity, this tool is not integrated in most of the currently used remote learning interfaces. For this reason, it was not integrated into the TOI. Instead, for analyzing the contribution of this tool, we considered two different remote designs, one with and one without the VRTP tool, and called them the VRC and RRC interfaces, respectively.

Control Method of the Robotic Arm

Two common methods are available for controlling a robotic arm: the Axis-XYZ control method and the Joints control method. The Axis-XYZ control method allows the operator to move the robotic arm along the axes of an imaginary Cartesian workspace. The linear movement is intuitive, but it requires greater computing resources, as the robot's controller needs to calculate the exact direction and force for each joint motor and operate multiple motors simultaneously to achieve a linear movement. For these reasons, it was technically impossible to implement the required matrix in the TOI, and therefore, the Axis-XYZ control method was tested only in the LRC, RRC, and VRC interfaces. The Joints control method, although not as intuitive, is technically and mechanically much simpler. This control method is based on activating a single joint motor at a time, resulting in a nonlinear movement (in the case of a rotational joint). Consequently, it is more complicated for an inexperienced operator to control a robot using this method.
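For illustration, the difference between the two methods can be sketched as two command abstractions. This is a hypothetical Python outline; the function names and command payloads are invented and are not the system's actual protocol:

```python
def joints_command(joint_id, direction, duration_s):
    # Joints control: drive a single joint motor for a timed interval.
    # Simple to implement, but the tool tip traces an arc, not a line.
    return {"mode": "joints", "joint": joint_id,
            "direction": direction, "duration_s": duration_s}

def axis_xyz_command(dx_mm, dy_mm, dz_mm):
    # Axis-XYZ control: request a straight-line Cartesian move. The
    # controller must coordinate several joint motors simultaneously,
    # which is what made this mode too complex to implement in the TOI.
    return {"mode": "xyz", "delta_mm": (dx_mm, dy_mm, dz_mm)}

print(joints_command(1, "cw", 0.8))
print(axis_xyz_command(10.0, 0.0, -5.0))
```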

EXPERIMENTAL SETTINGS AND APPARATUS

Using the “web-based” design scheme, a dedicated website interface was designed to represent the Internet interface in the experiments. The website combined a remote controlling interface for the robot's arm with a module for simulating and optimizing the operation, recording it, and downloading a pre-tested program to the robot's controller.

Physical Layout

A dedicated remote workstation was assembled to support the experiments. The workstation was equipped with two screens to standardize the size and position of the visual feedback across all the experimental settings. The first screen was used for visual feedback only, displaying two live video feeds of the robotic cell: an overall isometric view of the cell and a zoomed-in top view of the work area. The second screen was used to support the actual teleoperation of the robotic cell. Both the Robocell software and an Internet browser were installed on the workstation and operated alternately, depending on the experimental stage. Another workstation was used at the local site to host the servers for the website and the live video feeds. Four interface versions were examined, dictating four slightly different settings for the local and remote stations, as explained next.

Local Robocell (LRC) Interface

The local Robocell was used as a control group in the experiments. The LRC was executed right next to the robotic cell itself. In these experiments, the Robocell software was executed without adding any visual feedback. Since the experiments took place within eye-contact distance of the cell, the operator had the opportunity to actually observe the robot and decide on the next required step.

Remote Robocell (RRC) Interface

The Remote Robocell workstation was equipped with a dedicated software package that was installed on the remote teleoperation computer and supported the preliminary simulation module. The teleoperation computer itself was wire-connected to the robot's controller via an amplified USB port. The remote workstation was placed in a remote room, preventing any eye contact between the operator and the robotic cell. These interface settings were designed for the examination of a modern teleoperation system that can support both the Axis-XYZ and the Joints control methods. When using the RRC settings during the experiment, the operator had a presentation of the manual movement module of the Robocell software on one screen and two live video feeds on the other screen. In experiments where the preliminary simulation was used, the software was initially set to the off-line working mode. This means that the simulation module was presented to the users for an unlimited time, according to their choice, before moving on to the online operation itself. Once the setting changed to the actual online operation, the simulation module was shut down automatically.

VRTP-Enabled Remote Robocell (VRC) Interface

The VRTP-enabled remote Robocell mainly refers to the online option for simulation with the Robocell software. The settings of both the local and remote workstations were identical to those described for the RRC. The main difference between the two was the ability to keep the simulation module running during the online operation stage as well. Such an option enabled the operator to have, on top of the visual feedback of the live feeds, a virtual visual feedback from the simulation module that was updated simultaneously with the movements of the robotic arm. Potentially, such an option provided the users with both an advantage and a disadvantage with respect to the RRC. On one hand, they could change the orientation, angles, and zoom of the view in the VRTP module and gain better information than that obtained from the video feeds alone. On the other hand, the virtual simulation could never be as accurate as the real video feed and could have resulted in operation errors, especially when the users reached the edge of the robot's working envelope.

INTERNET Interface

The Internet workstation contained, in addition to the standardized visual feedback mentioned above, a web browser. Only the Internet browser was used to execute the teleoperation at the remote site. The web browser was used to log into the proposed TOI and then to remotely operate the robotic arm through it. The visual-feed module in the TOI was not operated during the experiments, in order to keep the same visual feedback for all interfaces, as required for analysis purposes. Instead, the operator was provided with a separate visual feedback on the second screen, as in all the other remote settings. If a preliminary simulation had to be used in the experiment, the Robocell software was initially operated on the same computer, but only in an off-line working mode, and the simulation module was presented to the operator for an unlimited time before turning to the online operation through the website. Once the operator switched to the online operation through the website TOI, the Robocell software was shut down.

The Subjects Group

The subject group for the conducted experiments consisted of senior (fourth-year) students at the CIM Lab in the IE Department at Tel-Aviv University. Each subject performed only a single experiment, randomly allocated to him or her. Each valid permutation of the design (14 in total) was performed by nine different subjects, and the results shown later are a mean over these repetitions. The subjects' ages ranged from 20 to 30. The gender distribution was 53% males and 47% females. All subjects had a technical background resulting from their engineering education. Overall, 126 experiments were conducted throughout five semesters. This choice of subjects is natural, as engineering students are the most likely target end users of any system that might be developed.

EXPERIMENTAL RESULTS

Findings from Phase 1 of the Experiments

Phase 1 of the experiments focused on the evaluation of the three Robocell interfaces: the local Robocell interface (LRC), the remote Robocell interface that includes the VRTP (VRC), and the remote Robocell interface without the VRTP (RRC). As indicated earlier in the chapter, the evaluation focused on the XYZ vs. Joints control methods and on the contribution of the preliminary simulation tool prior to execution of the task.

The Total Number of Required Steps

The total number of steps required to complete the task was measured from the homing point (the location of the gripper after the calibration of the robot) until the completion of the entire task, i.e., after marking the third circle. This measure provided information and an intuitive understanding regarding the complexity level of the used interface. Figure 4.a presents the average total number of steps required in each of the three Robocell-based interfaces (starting with the RRC on the left-hand side of the chart, continuing with the VRC in the middle, and ending with the LRC on the right) and for each of the control methods. Note that as the interface approaches a realistic local setting, the number of steps declines. One can clearly see that the Joints control method results in a significantly larger number of steps compared to the axis-XYZ control method. This trend is consistent across all the tested interfaces. This result is quite intuitive, as the Axis-XYZ control method is less complex and more intuitive. Examining the influence of the preliminary simulation tool on the total number of required steps, one can see that the preliminary simulation often leads to a lower number of steps.

Figure 4. Selected graphs from phase 1 of the experiments

Number of Errors During the Tasks' Execution

The number of errors was measured by counting the number of marks made by the operator outside the designated circles in the working area. As with the previous measure, this factor gives some indication of the complexity of the task and points to the improvement in the operator's expertise while completing the task. We expect the number of errors to be lower in designs that support better learning, as the user is expected to adapt faster, gain better control over the system, and therefore perform more accurately and with fewer errors. The graphs in Figures 4.b and 4.c again present the average number of execution errors. This time, squares represent settings without a preliminary simulation and triangles represent settings with the use of a preliminary simulation.



Figures 4.b and 4.c explore the interaction between the control method and the preliminary simulation module when using the axis-XYZ or the Joints control, respectively. These graphs show the effect of the control method used by the operator on the number of errors. As expected, the axis-XYZ control method reduces the number of errors with respect to the Joints control method for all the interfaces examined in this phase. Surprisingly, the highest number of errors was obtained in the VRC interface rather than in the RRC interface (the one without the VRTP). This result can be explained by the attractiveness of the VRTP feature, causing the users to rely on the virtual feedback even when it causes errors. The virtual feedback (VRTP), as informative as it is, is not as accurate as the online video feedback. When using the RRC interface without the virtual feedback, the operator had to wait for the video buffering delay to end, and therefore every movement took a longer time to finish. A possible explanation is that the operators paid closer attention to each robotic move before actually executing it. We assume that the extra time and attention led to fewer errors. Note that for the local LRC interface, the advantage of the axis-XYZ method with respect to Joints is smaller than in the remote interfaces. In fact, for the LRC interface, the control intervals overlap, in contrast to the other interfaces. This fact emphasizes the importance of the control method when designing a remote interface. While using the axis-XYZ control method (Figure 4.c), the average numbers of execution errors for both the local LRC interface and the remote RRC interface (without the VRTP) are found to be almost indistinguishable, regardless of the use or absence of the preliminary simulation module. In the VRC interface, on the other hand, a lower number of execution errors is obtained when implementing a preliminary simulation. Note that for the Joints control method, the use of preliminary simulation results in a lower average number of errors in comparison to a situation without the preliminary simulation. This observation is consistent throughout all three different interfaces. In both Figures 4.b and 4.c, one can see that the contribution of the preliminary simulation tool to decreasing the number of execution errors is affected by the chosen control method. Preliminary simulation results in better execution (hence, better learning) when using an interface operated by the Joints control method. The effect of the preliminary simulation on the number of errors while using the VRC interface seems to be almost indifferent to the control method in use. In this case, preliminary simulation results in a lower number of errors for both control methods, suggesting that the VRTP module reduces the complexity gap between the axis-XYZ and Joints control methods.

Learning and Improvement Measures

The learning and improvement rate, as defined earlier, is calculated by dividing the total number of steps required to perform the second and third legs by the total number of steps required to complete the task. Such a measure is expected to be lower for a higher learning rate. A lower measure indicates better learning, as it represents a significantly higher share of movements in the first leg and, therefore, a greater margin between the legs. This indicates improved performance and an efficient learning process. Figure 4.d presents the average learning measure as calculated for the three interfaces that were considered. When using the local LRC interface, the Joints control method resulted in a better (lower) learning factor. This result is explained by the simplicity of the task when performed locally and with the simplest control method (axis-XYZ). A simple task leaves very little room for improvement, as its execution requires almost the minimum number of steps possible already from the first leg. When using the remote interfaces (RRC and VRC), the axis-XYZ control method results in a slightly better improvement rate than the Joints control method. The best improvement and learning rate is obtained for the RRC interface, with either the axis-XYZ or the Joints control method. This result can also be explained by the complexity of the task for the remote users, leaving much room for learning and improvement.

Findings from Phase 2 of the Experiments

Phase 2 of the experiment focuses on the evaluation of the differences between the Web-based remote interface (INTERNET) and the three Robocell-based interfaces: the local setting (LRC), the remote setting with the VRTP module (VRC), and the remote setting without the VRTP (RRC). All four interface settings were operated with the Joints control method to evaluate the effect of the preliminary simulation tool prior to the online execution of the task. The results of Phase 2 are presented in Figure 5.

The Total Average Number of Required Steps

Figure 5.a addresses the interaction between the interface type and the use of a preliminary simulation tool. When considering the three remote interfaces (the three leftmost interfaces on the abscissa), we see that the effect of the preliminary simulation in decreasing the average total number of steps is most significant for the INTERNET interface. Such an effect barely exists in the VRC interface. Yet, the preliminary simulation also affects the total number of steps when using the local LRC interface. Among the three remote interfaces, the VRC interface is surprisingly indifferent to a preliminary simulation (as also observed in Phase 1 of the experiment) and results in the same average number of steps both with and without the use of the preliminary simulation tool. Such a result may be explained by the fact that using the advanced VRTP tool during the task's execution makes the preliminary simulation, which is based on the same simulation tool, redundant with respect to minimizing the number of steps. However, as explained in the next subsection, the preliminary simulation remains useful with respect to minimizing the number of errors.

The Number of Occurred Errors

Figure 5.b presents the average number of execution errors for each of the four interfaces, with or without the preliminary simulation tool. One can see that when using the preliminary simulation (marked by triangles), the difference between the three remote interfaces is negligible (less than 0.5 errors on average). Moreover, when the preliminary simulation is not used (marked by squares), the remote RRC interface led to fewer errors than the other remote interfaces, obtaining an error rate close to the one obtained by the local LRC benchmark interface.

Learning and Improvement Measures

Next, the improvement in the learning rate of the users is presented. We show the learning rate for each of the legs as a function of the interface type and the use of a preliminary simulation. Figures 5.c and 5.d present the number of steps required for the first leg (marked “H-to-1”) and for the later legs (marked “1-to-3”), respectively. The results in each graph separately represent the use (squares) or the lack of use (triangles) of the preliminary simulation. As seen from the total number of required steps (Figure 5.a) and from the learning graphs (Figure 6), the use of a preliminary simulation results in fewer steps required to complete the first leg for all interfaces except the VRC interface. Figure 5.d shows that the number of steps required to complete the later legs is indifferent to the use of a preliminary simulation, i.e., roughly similar performance is measured both with and without the use of the preliminary simulation tool prior to the execution.



Figure 5. Selected graphs from phase 2 of the experiments

These results suggest that the ability to simulate movements in the cell before an actual execution provides the operator with an early stage of learning. This results in improved performance at the beginning of the execution. The effect of such learning diminishes in later stages, when the operator gains experience in remote operation. Another interesting result evident from these two graphs is that when a VRTP module is available, the contribution of the preliminary simulation is limited. Therefore, the use of VRTP can save the time otherwise spent learning the task.


Learning Curves for Different Designs

The graphs shown in Figure 6 depict the learning curves of the task execution for four combinations of the examined factors. The horizontal axis represents the three legs of the experiments, starting from the homing point on the left-hand side through to the end of the execution on the right. The vertical axis shows the number of steps required for the specific leg of the execution. The right-hand-side graphs show three different curves differentiated by the control method used and by the chosen interface (Internet interface excluded). The left-hand-side graphs show four different curves differentiated by the use or absence of preliminary simulation and by the chosen interface (all four types of interfaces are included).

Figure 6. Learning curves for the experiments

Comparison within the Control-Method Factor

The two graphs on the right side of Figure 6 present the results of Phase 1 of the experiments. In each examined interface, both the Joints and the axis-XYZ control methods result in very similar slopes of the learning curve, differing only in height. These differences result from the starting points, which are lower for the (more intuitive) axis-XYZ control method. The improvement throughout the steps seems to be almost unaffected by the actual control method in use. However, we see that while the end points of all curves are bounded within a narrow margin of four to ten steps required to complete the last leg, the curves have a greater margin at their starting points. This result leads to the conclusion that the actual improvement in execution achieved during the task is greatest in the RRC interface and lowest in the local LRC benchmark interface.

Analyzing the Preliminary-Simulation Factor

The two graphs on the left side of Figure 6 present the results of Phase 2 of the experiments. Unlike the right-side graphs, the learning-curve slopes in each interface differ when considered with or without a preliminary simulation. For both the RRC and the local LRC interfaces, we observe steeper curves for experiments without preliminary simulation, indicating that the actual improvement in execution achieved during the task is greater when operating the system without a preliminary simulation. Nevertheless, we note that the curves of each interface end close to each other, as the users reach the same average number of steps in the last leg of the execution regardless of the use of a preliminary simulation. This observation leads to the conclusion that preliminary simulation supports learning prior to the tasks' execution, and thus it is recommended for use.



These conclusions are also supported by the learning curves of the Internet interface. In this more difficult working interface, the difference between the curves is noticeable, suggesting that when a preliminary simulation was used, most of the learning occurred during the simulation part, leaving little room for improvement and providing the operators with a better starting point at the online stage. The curves for the VRC interface support our assumption that the VRTP tool and the preliminary simulation are mutually redundant. This is supported by the almost identical curves (both in slope and in height), indicating indifference to the presence or absence of the preliminary simulation in the process. Although the Internet interface is the most complex of the considered interfaces, when combined with a preliminary simulation tool it provides results almost as good as those of the other remote interfaces (in terms of the number of steps required). The same conclusion is drawn with respect to the number of errors. We believe that the ability to reconsider a movement once it has been chosen, yet before it is executed, has a significant influence in explaining the success of this interface.

GUIDELINES AND FINAL DESIGN

When required to design a remote teleoperation interface, we need to choose the appropriate combination of components to meet our learning/teaching goals. If the goal is, as in our study, accurate operation, then the suggested control method is the axis-XYZ control, as it leads to a lower number of errors. However, since parts of robot-operation teaching may benefit from covering alternative control methods, and since selecting either control method does not seem to affect the achieved learning rate of the user, it is suggested to have both control methods available whenever possible.


A preliminary simulation module is highly recommended in the design of a remote telerobotic interface. The only module found to have the same impact as the preliminary simulation tool in terms of improving the operators' learning and performance was the VRTP module. The VRTP module provides the user with the same learning qualities as the preliminary simulation tool, but during the actual online work. If the designed system has to serve a large number of users by relying on short online time windows for each user, then a preliminary simulation is the most effective tool for learning. However, if one can provide each user with enough online access time to the robotic cell, then it is recommended to integrate a VRTP module into the interface. A main feature distinguishing the Internet interface from the other considered interfaces was the ability to reconsider a movement prior to its execution. We believe that this feature contributed to the higher learning rate found for this interface, and we recommend incorporating such a mechanism into future designs of remote telerobotic interfaces. The revised TOI, designed based on the presented research, is presented next.

Proposed Implementation

The web-based interface, shown in Figure 7, was programmed using the HTML and PHP programming languages and was hosted on an Apache server on the local computer (similar to Yang et al., 2004). The designed interface utilizes the Robocell simulation software as a platform. Users were administered through an SQL database, and the sessions were limited in time to avoid blockage of the system. The code for a single movement was based on measuring the time a specific push-button was pressed in the control module, identifying the joint and the direction represented by this button, and then sending an operation command through the server to the robot's controller. The response from the controller, composed of the new encoder values, was presented in return on the website interface. A detailed description of the TOI modules is given next.

Figure 7. The web interface (© 2007, Taylor & Francis. Used with permission)
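Although the original implementation used HTML and PHP, the single-movement flow just described can be sketched in Python for illustration. All names here (RobotController, send_movement) are hypothetical, and the controller response is a dummy stand-in for the real encoder feedback:

```python
import time

class RobotController:
    """Stand-in for the wired connection to the robot's controller."""
    def move(self, joint, direction, duration_s):
        # A real implementation would issue the command over the USB link
        # and read the new encoder values back from the controller.
        return {"joint": joint, "direction": direction,
                "duration_s": duration_s, "encoders": [0, 0, 0, 0, 0]}

def send_movement(controller, joint, direction, press_time, release_time):
    # The movement duration is the time the push-button was held down;
    # the encoder feedback is returned for display on the website.
    return controller.move(joint, direction, release_time - press_time)

controller = RobotController()
t0 = time.time()
# e.g., the "base / clockwise" button held for ~1.5 seconds:
print(send_movement(controller, "base", "clockwise", t0, t0 + 1.5))
```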

The Joint Control and Data Feedback Module

The Joint control and data feedback module occupies the upper left-hand side of the interface shown in Figure 7. This module directly controls each motor of the robot, enabling Joints control of the arm. Although the XYZ-axis control method was also recommended for implementation, it does not appear in this design, as the implementation was too complex at the time of the actual design. Later work on the design should include the implementation of a control-method choice for the users. There is a major difference between these controllers and the Robocell interface controller described in previous sections. When using the Robocell controller, clicking on a direction of movement results in an immediate response from the robotic arm, and the robot keeps moving in the desired direction until the push button is released. In the Internet interface presented here, pushing a controller button starts a timer (shown in Figure 7 under the Java application) that runs until the button is released. After releasing the button, the operator can choose whether to send a movement signal to the robot for the selected axis and for that amount of time, or to change the time/axis before sending the actual movement signal. Such a feature potentially provides operators with greater control over their actions. Once a movement is completed, data feedback from the robot's encoders is presented to the operators (in the middle of Figure 7), enabling them to compare the actual positioning of the robotic arm to the one predicted by the virtual simulation.
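This deferred-execution behaviour can be outlined as follows; the sketch below is an illustrative Python outline under the same hypothetical names as above, not the chapter's actual code:

```python
class PendingMove:
    """Holds a timed move until the operator explicitly confirms it."""
    def __init__(self, joint, direction, duration_s):
        self.joint, self.direction, self.duration_s = joint, direction, duration_s

    def edit(self, joint=None, direction=None, duration_s=None):
        # The operator may still change the axis or the timed duration.
        if joint is not None:
            self.joint = joint
        if direction is not None:
            self.direction = direction
        if duration_s is not None:
            self.duration_s = duration_s

    def confirm(self, controller):
        # Only now is the movement signal actually sent to the robot.
        return controller.move(self.joint, self.direction, self.duration_s)

# Button released after 2.0 s -> the move is recorded, reconsidered, and
# shortened to 1.2 s before being sent for execution:
pending = PendingMove("shoulder", "up", 2.0)
pending.edit(duration_s=1.2)
# pending.confirm(controller)  # with a controller object as sketched above
```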

The Visual Feedback Module

Two video feeds, shown on the right-hand side of Figure 7, are available to the users. These feeds provide the users with two different viewing angles of the workstation: an isometric overall view (the lower feed) and a zoomed top view (the upper feed). The top view also enables the users to snap a picture of the work area and to save it in their folder on the server. This option was introduced to support the maintenance of lab reports by future users of the system.

The Data Interaction Module

The bottom-left side of Figure 7 shows the data interaction module, which serves three purposes: 1) uploading files to the server, 2) viewing the personal folder of the logged-in user, and 3) running simulation files stored in this folder. This module was not used during this research and is extensively described in Goldstain et al. (2007).

FURTHER RESEARCH

This chapter addresses the learning aspects related to the usage of different interface settings for remote control and manipulation of a robotic cell. Various interface components are evaluated and recommendations are provided. Further research in this field can address the effect of integrating a VRTP tool and an axis-XYZ control method into an Internet interface. Results drawn from this research suggest that such integration can yield the best remote learning performance. Other research could focus on visual aspects associated with remote telerobotic learning, examining the positioning and orientation of cameras and their effect on the users' comprehension of the three-dimensional work area as well as on their learning performance. In relation to teaching laboratories, useful work can be done in designing remote-compatible tasks for learning robotics, as not all available routines for teaching robotics are applicable to remote learning without the presence of an instructor on site.


REFERENCES

Adams, J. A. (2002). Critical considerations for human–robot interface development. In Proc. Human–Robot Interaction, 2002 AAAI Fall Symposium, Menlo Park, CA (pp. 1-8).

Aktan, B., Bohus, C. A., Crowl, L. A., & Shor, M. H. (1996). Distance learning applied to control engineering laboratories. IEEE Transactions on Education, 39(3), 320–326. doi:10.1109/13.538754

Ammari, A. C., & Ben Hadj Slama, J. (2006). The development of a remote laboratory for Internet-based engineering education. Journal of Asynchronous Learning Networks, 10(4), 12.

Belousov, I., Chellali, R., & Clapworthy, J. (2001). Virtual reality tools for Internet robotics. In Proc. of the 2001 IEEE International Conference on Robotics & Automation, Seoul, Korea.

Belousov, I., Tan, J., & Clapworthy, G. (1999). Teleoperation and Java3D visualization of a robot manipulator over the World Wide Web. In Proc. Information Visualisation IV'99, July 1999 (pp. 543-548).

Bochicchio, M. A., & Longo, A. (2009). Hands-on remote labs: Collaborative Web laboratories as a case study for IT engineering classes. IEEE Transactions on Learning Technologies, 2(4).

Bukchin, J., Luquer, R., & Stubs, A. (2002). Learning in tele-operations. IIE Transactions, 34, 245–252. doi:10.1080/07408170208928866

Calkin, D. W., Parkin, R. M., Šafaric, R., & Czarnecki, C. A. (1998). Visualization, simulation and control of a robotic system using Internet technology. In Proc. 5th IEEE Int. Advanced Motion Control Workshop, Coimbra, Portugal, 1998 (pp. 339-404).

Evaluation of Remote Interface Component Alternatives for Teaching Tele-Robotic Operation

Candelas, F. A., Puente, S. T., Torres, F., Segarra, V., & Navarrete, J. (2005). Flexible system for simulating and tele-operating robots through the Internet. Journal of Robotic Systems, 22(3), 157–166. doi:10.1002/rob.20056

Chen, S. H., Chen, R., Ramakrishnan, V., Hu, S. Y., Zhuang, Y., Ko, C. C., & Chen, B. M. (1999). Development of remote laboratory experimentation through Internet. In Proc. of the 1999 Hong Kong Symposium on Robotics and Control, Vol. 2 (pp. 756-760).

Chiculita, C., & Frangu, L. (2002). A Web based remote control laboratory. In Proc. of the 6th Multiconference on Systemics, Cybernetics and Informatics, Orlando, FL, 2002.

Cooper, M., Donnelly, A., & Ferreira, J. (2002). Remote controlled experiments for teaching over the Internet: A comparison of approaches developed in the PEARL project. In Proc. of the 19th Annual Conference of the Australasian Society for Computers in Learning in Tertiary Education (pp. 119-128).

Doulgeri, Z., & Matiakis, T. (2006). A web telerobotic system to teach industrial robot path planning and control. IEEE Transactions on Education, 49(2). doi:10.1109/TE.2006.873975

Eliav, A., Lavie, T., Parmet, Y., Stern, H., Waches, J., & Edan, Y. (2005). KISS human-robot interfaces. In Proc. of the 18th International Conference on Production Research (ICPR).

Enrique Sucar, L., Noguez, J., & Huesca, G. (2005). Project oriented learning for basic robotics using virtual laboratories and intelligent tutors. In Proc. of the 35th ASEE/IEEE Frontiers in Education Conference, Indianapolis, IN, October 2005.

Fernandes, A. S. C., & Martins, M. J. (2001). Self-training through the Internet. European Journal of Engineering Education, 26(2), 169–177. doi:10.1080/03043790110034429

Goldstain, O., Ben-Gal, I., & Bukchin, Y. (2007). Remote learning for the manipulation and control of robotic cells. European Journal of Engineering Education, 32(4), 481–494. doi:10.1080/03043790701337213

Goldstain, O., Ben-Gal, I., & Bukchin, Y. (2010). Evaluation of tele-robotic interface components for teaching robot operation. Unpublished manuscript.

Gravier, C., Fayolle, J., Bayard, B., Ates, M., & Lardon, J. (2008). State of the art about remote laboratories paradigms: Foundations of ongoing mutations. International Journal of Online Engineering (iJOE), 4(1), 19–25.

Henry, J., & Schaedel, H. M. (2005). International co-operation in control engineering education using online experiments. European Journal of Engineering Education, 30(2), 265–274. doi:10.1080/03043790410001065678

Hu, H., Yu, L. P., Tsui, W., & Zhou, Q. (2001). Internet-based robotic systems for teleoperation. Assembly Automation, 21(2), 143–151. doi:10.1108/01445150110388513

Khan, W. A., Al-Doussari, S. M. R., & Al-Kahtani, A. H. M. (2002). Establishment of engineering laboratories for undergraduate and postgraduate studies. European Journal of Engineering Education, 27(4), 425–435. doi:10.1080/03043790210166684

Kofman, J., Xianghai, W., Luu, T., & Verma, S. (2005). Teleoperation of a robot manipulator using a vision-based human-robot interface. IEEE Transactions on Industrial Electronics, 52, 1206. doi:10.1109/TIE.2005.855696

Marin, R., Sanz, P. J., Nebot, P., & Wirz, R. (2005). A multimodal interface to control a robot arm via the Web: A case study on remote programming. IEEE Transactions on Industrial Electronics, 52, 1506. doi:10.1109/TIE.2005.858733



Michau, F., Gentil, S., & Barrault, M. (2001). Expected benefits of web-based learning for engineering education: Examples in control engineering. European Journal of Engineering Education, 26(2), 151–168. doi:10.1080/03043790110034410

NASA. (n.d.). Space telerobotics program. Retrieved from http://rainer.oact.hq.nasa.gov/telerobotics_page/telerobotics.shtml

Puente, S. T., Torres, F., Ortiz, F. G., & Candelas, F. A. (2000). Remote robot execution through WWW simulation. In Proc. 15th International Conference on Pattern Recognition, ICPR 2000.

Saliah, H., Saad, M., Villardier, L., Assogba, B., Kedowide, C., & Wong, T. (2000). Resource management strategies for remote virtual laboratory experimentation. In Proc. of the 2000 Frontiers in Education Conference, Kansas City, MO.

Siegwart, R., & Saucy, P. (1999). Interacting mobile robots on the web. In Proc. of the IEEE International Conference on Robotics and Automation (ICRA), 1999.

Tel-Aviv University. (n.d.). CIM-Lab remote learning for robotic cells website. Retrieved from http://www.eng.tau.ac.il/remote

Tzafestas, C. S., Palaiologou, N., & Alifragis, M. (2006). Virtual and remote robotic laboratory: Comparative experimental evaluation. IEEE Transactions on Education, 49(3). doi:10.1109/TE.2006.879255

Wang, M., & Liu, J. N. K. (2004). A novel teleoperation paradigm for human-robot interaction. In Proc. of the IEEE International Conference on Robotics, Automation and Mechatronics (pp. 13–18).

Yang, X., Chen, Q., Petriu, D. C., & Petriu, E. M. (2004). Internet-based teleoperation of a robot manipulator for education. In Proc. of the 3rd IEEE International Workshop on Haptic, Audio and Visual Environments and Their Applications, 2004 (pp. 7-11).

KEY TERMS AND DEFINITIONS

Home-Based: A design scheme for remote labs based on installing simulation and programming tools on the user's computer, sending batches of pre-planned code to be implemented on laboratory equipment.
INTERNET: A Test Oriented Interface operated remotely (based on our methodology).
Lab-Based: A design scheme for remote labs based on enabling direct remote access to laboratory equipment and computers.
LRC: A Robocell interface operated locally.
RoboCell: Robotic control software providing a user-friendly tool for robot programming and operation.
RRC: A Robocell interface operated remotely, without VRTP.
Scorbase: The language used by RoboCell's interface to program a robotic cell.
VRC: A Robocell interface operated remotely, with VRTP.
Web-Based: An integrated design scheme for remote labs based on a designated server running a pre-programmed website, detaching the user from the physical lab computer while providing full control over laboratory equipment.

This work was previously published in Internet Accessible Remote Laboratories: Scalable E-Learning Tools for Engineering and Science Disciplines, edited by Abul K.M. Azad, Michael E. Auer and V. Judson Harward, pp. 163-184, copyright 2012 by Engineering Science Reference (an imprint of IGI Global).



Chapter 64

Cell Loading and Family Scheduling for Jobs with Individual Due Dates

Gürsel A. Süer
Ohio University, USA

Emre M. Mese
D.E. Foxx & Associates, Inc., USA

ABSTRACT

In this chapter, cell loading and family scheduling in a cellular manufacturing environment is studied. What separates this study from others is the presence of individual due dates for every job in a family. The performance measure is to minimize the number of tardy jobs. Family splitting among cells is allowed, but job splitting is not. Even though family splitting increases the number of setups, it also increases the possibility of meeting individual job due dates. Two methods are employed to solve this problem, namely Mathematical Modeling and Genetic Algorithms. The results showed that the Genetic Algorithm found the optimal solution for all problems tested. Furthermore, the GA is more efficient than the Mathematical Model in terms of execution time, especially for larger problems. The results of the experimentation showed that family splitting was observed in all multi-cell solutions; therefore, it can be concluded that family splitting is a good strategy.

INTRODUCTION

Cell Loading is a decision-making activity for planning production in a Cellular Manufacturing System (CMS) that includes more than one manufacturing cell. The products are assigned to the manufacturing cells where they can be processed. This assignment is based on the demand, processing times, and due dates of the products and on the production capacity and capability of the manufacturing cells (Süer, Saiz, Dagli & Gonzalez, 1995; Süer, Saiz, & Gonzalez, 1999). Family Sequencing is the task of deciding the order in which product families are processed in a particular cell, as determined by the Cell Loading process. In this chapter, a product



family can be split, and the parts can be sequenced in the same cell or in different cells. Obviously, each time a new family starts in a cell, a new setup is required. Finally, Family Scheduling consists of determining the start and completion times of the product families and of the individual products, based on the family sequence established. Typically, in a complex cellular system, the Cell Loading, Family Sequencing, and Family Scheduling tasks must all be addressed in a satisfactory manner to obtain the desired results in terms of the selected performance measure. In this study, we consider minimizing the number of tardy jobs as the performance measure. Even though the problem was observed in a shoe manufacturing company, it is applicable to many cellular systems. The products are grouped into families based on their processing similarity. On the other hand, products in a family might have different due dates. The overall objective of this chapter is to solve the cell loading and product sequencing problem in such a multi-cell environment. To accomplish this, we propose two different approaches to tackle this complex problem, namely mathematical modeling and genetic algorithms. An experiment is carried out using both approaches; the results are then compared, and a sensitivity analysis is performed with respect to due dates and setup times.

BACKGROUND

Group Technology (GT) is a general philosophy whereby similar things are grouped together and handled together. GT is established upon the common principle that most problems can be grouped based on their similarities, and then a single solution can be found for the entire group of problems to save time and effort. This general concept has also been applied to the manufacturing world. This approach increases productivity by reducing work-in-process inventory and improves delivery performance by reducing lead times, thus helping manufacturing companies to be more competitive. Thus, a Cellular Manufacturing System can be described as an application of GT to manufacturing system design (Askin & Standridge, 1993). A Cellular Manufacturing System aims to obtain the flexibility to produce a high or moderate variety of low- or moderate-demand products with high productivity. A CMS is a type of manufacturing system that consists of manufacturing cell(s) with the dissimilar machines needed to produce part family/families. Generally, the products grouped together form a product family. The benefits of a CMS are lower setup times, smaller lot sizes, lower work-in-process inventory, less space and material handling, shorter throughput times, and simpler work flow (Suresh & Kay, 1998).

In this chapter, the performance measure used is minimizing the number of tardy products (nT). If a product is completed after its due date, then it is considered a tardy product. If a product is completed before its due date, then its tardiness is zero (an early or on-time product). Therefore, the tardiness of a product takes a value of zero or positive: Ti = max{0, ci − di}, where Ti is the tardiness of product i, ci is the completion time of product i, and di is the due date of product i. The number of tardy jobs is computed as nT = Σi g(Ti), where g(x) = 1 if x > 0, and zero otherwise.

The problem has been observed in a shoe manufacturing company where twelve product families have already been defined. There are multiple cells, and the most critical component of each cell is the rotary injection molding machine. Even though the Rotary Molding Machine is a single machine, scheduling shoes on it resembles a parallel machine scheduling problem, as the machine can hold multiple pairs of molds/shoes at any time. The Rotary Molding Machine is described in detail in Section 3.

Several researchers focused on the cell loading problem. Süer, Saiz, Dagli and Gonzalez (1995)
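For illustration, the two formulas above can be evaluated with a small sketch; the helper function and the sample data are invented, not taken from the chapter:

```python
# n_T counts the jobs with positive tardiness T_i = max(0, c_i - d_i).
def num_tardy(completion_times, due_dates):
    return sum(1 for c, d in zip(completion_times, due_dates) if max(0, c - d) > 0)

# Three jobs: only the second finishes after its due date.
print(num_tardy([5, 9, 14], [6, 8, 14]))  # 1
```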


and Süer, Saiz and Gonzalez (1999a) developed simple cell loading rules to minimize the number of tardy jobs. Süer, Vazquez, and Cortes (2005) developed a hybrid approach combining Genetic Algorithms and a local optimizer to minimize nT in a multi-cell environment. Süer, Arikan, and Babayigit (2008, 2009) focused on cell loading subject to manpower restrictions and developed fuzzy mathematical models to minimize nT and total manpower levels. A few works have also been reported in which both the cell loading and product sequencing tasks are carried out. Süer and Dagli (2005) and Süer, Cosner and Patten (2009) discussed models to minimize makespan, machine requirements, and manpower transfers. Yarimoglu (2009) developed a mathematical model and genetic algorithms to minimize manpower shortages in cells with synchronized material flow. However, these works ignored setup times between products and families. Some other researchers focused on the group scheduling problem with a single machine or a single manufacturing cell. Nakamura, Yoshida and Hitomi (1978) focused on minimizing total tardiness and considered sequence-independent group setups. Hitomi and Ham (1978) also considered sequence-independent setup times for a single machine. Ham, Hitomi, Nakamura and Yoshida (1979) developed a branch-and-bound algorithm for the optimal group and job sequence to minimize total flow time with the minimum number of tardy jobs. Pan and Wu (1998) considered a single machine scheduling problem to minimize the mean flow time of all jobs subject to due date satisfaction. They categorized the jobs into groups without family splitting. Gupta and Chantaravarapan (2008) studied the single machine scheduling problem to minimize total tardiness considering group technology. Individual due dates and independent family setup times were used in their problem, with no family splitting.

The following summarizes past work focused on scheduling jobs on Rotary Injection Molding Machines. Süer, Santos, and Vazquez (1999b) developed a three-phase Heuristic Procedure to minimize makespan in the Rotary Molding Machine scheduling problem. Subramanian (2004) attempted to solve this problem as a part of the cell loading and scheduling process. The objective was to minimize makespan, and unlimited availability of the molds was assumed. Later, Urs (2005) introduced limited mold availability into the problem for the same objective. The most recent research, by Süer, Subramanian, and Huang (2009), includes heuristic procedures and mathematical models for the cell loading and scheduling problem. The most important feature of the scheduling problem studied in this chapter is the presence of individual due dates for every job even within the same family (no common due dates), with family splitting allowed to minimize the number of tardy jobs. To the best of the authors' knowledge, this real problem observed in a cellular environment has not been addressed in the literature before. As a result, we decided to tackle this complex problem here and propose multiple solution approaches to deal with it.

THE PROBLEM STUDIED

This section discusses the problem studied in detail.

Family Splitting vs. Setup Times

Typically, in cellular manufacturing, similar products are grouped together and processed together as a family to reduce the number of setups and thus the total setup time. However, the literature does not address the possibility of having different due dates for the jobs in the same family. Even though it is important to reduce setup times, it is also important to meet customer due dates. There is a natural conflict between meeting the due dates of jobs and reducing the total setup time between families. This point is illustrated in Figure 1, where there are two families with two jobs in each family. If we do not split (preempt) the families, we get two tardy jobs in the second family. On the other hand, if family splitting is allowed, the number of tardy jobs is reduced to one, even though the number of setups increases from two to four. We can observe that when all of the jobs of a family are scheduled together, setup times are reduced. However, this may also force several jobs in the following families to be delayed, increasing the possibility of their becoming tardy. On the other hand, when a family is split several times, the number of setups increases, reducing the productive time, which may adversely affect the number of tardy products. This study attempts to find a balance between family splitting and meeting due dates such that the total number of tardy products is minimized. As mentioned before, among the published papers in the literature, no work has been reported on Cell Loading and Family Scheduling subject to individual due dates with group splitting allowed in a setting with more than one manufacturing cell.

Figure 1. Family splitting not allowed vs. allowed
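For illustration, the trade-off in Figure 1 can be reproduced with a small scheduling sketch; the jobs, processing times, due dates, and setup time below are invented for the illustration and are not the chapter's data:

```python
SETUP = 2  # hypothetical setup time incurred whenever the family changes

def evaluate(sequence):
    """sequence: list of (family, processing_time, due_date). Returns (tardy, setups)."""
    t, prev_family, tardy, setups = 0, None, 0, 0
    for family, proc, due in sequence:
        if family != prev_family:
            t += SETUP
            setups += 1
            prev_family = family
        t += proc
        tardy += t > due
    return tardy, setups

jobs = {"A1": ("A", 3, 6), "A2": ("A", 3, 20), "B1": ("B", 3, 11), "B2": ("B", 3, 13)}
no_split = [jobs[j] for j in ("A1", "A2", "B1", "B2")]  # families kept whole
split    = [jobs[j] for j in ("A1", "B1", "A2", "B2")]  # family A is split
print(evaluate(no_split))  # (2, 2): two tardy jobs, two setups
print(evaluate(split))     # (1, 4): one tardy job, four setups
```

As in Figure 1, splitting a family doubles the number of setups but reduces the number of tardy jobs.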


Case Study

This section describes the problem in depth. The following subsections explain its important features.

Products

This problem was observed in a shoe manufacturing plant. Products have five attributes: Gender, Size, Sole Type, Color, and Material. For men's shoes (Male (M)), there are 18 different sizes, 2 sole types (Full Shot (FS), Mid Sole (MS)), 4 colors (Black (B), Dark Green (G), Honey (H), Nicotine (N)), and 3 materials (Polyurethane (PU), Polyvinyl Chloride (PVC), Thermo Plastic Rubber (TPR)). For women's shoes (Female (F)), there are 13 different sizes, and the remaining attributes (sole types, colors, and materials) are the same as for men. Besides these product attributes, there are also different upper designs, referred to as models from now on. Each model has its own identification designation (Model ID).


Cells/Minicells

There are six manufacturing cells in the plant, and they are independent from each other (machine sharing, and thus inter-cell transfers, are not allowed). Every manufacturing cell includes a Lasting Minicell, a Rotary Molding Machine Minicell (RMMM), and a Finishing/Packing Minicell, as shown in Figure 2. Lasting Minicells prepare the shoes for the injection molding process. Rotary Molding Machine Minicells inject the materials into the molds. Finishing/Packing Minicells remove excess material from the injected shoes, finish the shoes, and pack them.

Rotary Molding Machine

This study focuses only on scheduling the Rotary Molding Machine Minicells (the bottleneck of the manufacturing cell). The Rotary Molding Machine has a capacity of six pair molds, as shown in Figure 3. In Figure 3, P1 is the injection station, where the material is injected into the mold. P2, P3, P4, and P5 are cooling-off stations, which allow the shoes to cool so that the worker can handle them. P6 is the loading/unloading station, where the worker removes the injected and cooled-off pair and then loads the new pair to be injected at the injection station. The Rotary Molding Machine rotates anti-clockwise, exactly one position at the end of every cycle.

Figure 2. Manufacturing cells in the shoe manufacturing plant



Figure 3. The rotary molding machine minicell

Injection time is defined as the time required to inject the material into the mold. It is affected by the size of the shoe: larger shoe sizes need longer injection times, because the machine injects material at a constant rate. Depending on the schedule of products in a specific cell, different sizes can run on the Rotary Molding Machine at the same time. When this happens, the cycle time is set to the injection time of the largest size (the maximum injection time).
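A one-line illustration of this rule, with assumed injection times:

```python
# Assumed injection times (minutes per pair) for three sizes; the chapter
# does not give actual values, so these numbers are hypothetical.
injection_time = {5: 0.8, 9: 1.0, 15: 1.3}
sizes_on_machine = [5, 9, 15]
# The cycle time follows the largest size currently loaded.
cycle_time = max(injection_time[s] for s in sizes_on_machine)
print(cycle_time)  # 1.3
```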

Product Families

A representation code of the form "MC" is used to form and identify the product families. In the MC code, M denotes the Material (PU: U, PVC: P, TPR: T) and C denotes the Color (Black: B, Dark Green: G, Honey: H, Nicotine: N). There are 12 product families (4 colors × 3 material types), as shown in Figure 4. In this study, all of the sizes of a specific order (with the same Model ID, Gender, Sole Type, Material, Color, and Due Date) are collectively called a job. Different sizes of a job can have different demands. All of the sizes included in a job are assumed to have the same due date, because the entire job has to be shipped to the customer together.

Example: Family Formation

An example set of customer orders consisting of 32 jobs is presented in Table 1. From these orders, the families can be obtained as shown in Table 2.
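As a small illustration, the following sketch derives the MC family code and groups the first few orders of Table 1 into families; the letter maps come directly from the text above.

```python
# Sketch: derive the two-letter MC family code from material and color,
# then group jobs into families (letter maps taken from the text).
MATERIAL = {"PU": "U", "PVC": "P", "TPR": "T"}
COLOR = {"Black": "B", "Dark Green": "G", "Honey": "H", "Nicotine": "N"}

def family_code(material, color):
    return MATERIAL[material] + COLOR[color]

# First rows of Table 1: (job no, material, color, due date)
orders = [(1, "TPR", "Black", 10), (2, "PVC", "Dark Green", 12),
          (3, "TPR", "Black", 22), (4, "TPR", "Black", 11)]

families = {}
for job, mat, col, due in orders:
    families.setdefault(family_code(mat, col), []).append((job, due))

print(families)  # {'TB': [(1, 10), (3, 22), (4, 11)], 'PG': [(2, 12)]}
```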


Figure 4. Structure of families

Example: Possible Cases

This study focuses on assigning products to Rotary Molding Machine Minicells considering their families, i.e., cell loading. Family splitting among minicells is allowed, but job splitting is not. As an illustration, examples of the possible minicell loading cases are shown in Figure 5. The processing times of the jobs in families F1, F5, and F12 are given in Table 3. While loading minicells, the families may not be divided, as shown in Figure 5a; families may be divided within the same minicell (like preemption), as shown in Figure 5b; or a family may be assigned to multiple minicells, as shown in Figure 5c.

Other Issues

The molds used in the Rotary Molding Machine for injection molding vary by size, gender, and sole type. It is assumed that there is no restriction on the availability of molds; therefore, the same size pairs of a job can be run on all locations of the Rotary Molding Machine simultaneously. In this study, setup times between jobs in the same family are assumed negligible, whereas setup times between families (for material and/or color changes) are assumed to take 20 minutes. The jobs can be back-scheduled in the Lasting Minicells and forward-scheduled in the Finishing/Packing Minicells based on the schedule of the Rotary Molding Machine Minicells. However, scheduling the Lasting Minicells and Finishing/Packing Minicells is not within the scope of this work.

PROPOSED SOLUTION TECHNIQUES

The cell loading and family scheduling problems introduced in this chapter involve constraints on the number of product families, individual due dates, machine capacity, and sequence-independent setup times. This version of the problem is more difficult than the classical cell loading and group scheduling problem. Both a mathematical model and a genetic algorithm approach are proposed. The mathematical model guarantees the optimal solution but can take a very long time to find it; the genetic algorithm is much faster but cannot guarantee optimality. The performance of the two procedures is compared with respect to execution time and the frequency with which the optimal solution is found. The Genetic Algorithms Software Application (GASA) was coded in the C# object-oriented programming language, and the mathematical model was solved using ILOG OPL 5.5. The methodology introduced in this chapter is not restricted to the case study discussed here and can be used directly in similar cellular environments.



Table 1. An example of customer orders

Job No.  Model ID  Gender  Sole Type  Material  Color       Size            Code  Total Demand  Due Date
1        K         F       FS         TPR       Black       5, 6, 7, …, 12  TB    269           10
2        U         M       FS         PVC       Dark Green  5, 6, 7, …, 15  PG    688           12
3        C         M       FS         TPR       Black       5, 6, 7, …, 15  TB    1045          22
4        C         M       FS         TPR       Black       5, 6, 7, …, 15  TB    208           11
5        L         F       MS         PU        Black       5, 6, 7, …, 12  UB    881           20
6        T         M       MS         PU        Black       5, 6, 7, …, 15  UB    831           17
7        T         M       MS         PU        Black       5, 6, 7, …, 15  UB    277           13
8        O         F       FS         PVC       Dark Green  5, 6, 7, …, 12  PG    250           15
9        E         M       FS         PVC       Dark Green  5, 6, 7, …, 15  PG    636           11
10       W         M       FS         PVC       Black       5, 6, 7, …, 15  PB    384           14
11       W         M       FS         PVC       Black       5, 6, 7, …, 15  PB    329           16
12       F         F       MS         TPR       Dark Green  5, 6, 7, …, 12  TG    440           17
13       F         F       MS         TPR       Dark Green  5, 6, 7, …, 12  TG    321           11
14       N         F       MS         PU        Black       5, 6, 7, …, 12  UB    355           14
15       N         F       MS         PU        Black       5, 6, 7, …, 12  UB    255           10
16       X         M       MS         PVC       Black       5, 6, 7, …, 15  PB    788           20
17       E         F       FS         PVC       Nicotine    5, 6, 7, …, 12  PN    574           16
18       E         F       FS         PVC       Nicotine    5, 6, 7, …, 12  PN    245           12
19       Y         F       FS         PVC       Honey       5, 6, 7, …, 12  PH    456           14
20       G         M       FS         PVC       Honey       5, 6, 7, …, 15  PH    345           13
21       G         M       FS         PVC       Honey       5, 6, 7, …, 15  PH    657           16
22       O         M       FS         TPR       Honey       5, 6, 7, …, 15  TH    234           11
23       M         M       FS         PU        Nicotine    5, 6, 7, …, 15  UN    621           16
24       W         M       FS         TPR       Nicotine    5, 6, 7, …, 15  TN    206           12
25       P         F       MS         TPR       Nicotine    5, 6, 7, …, 12  TN    657           17
26       P         F       MS         TPR       Nicotine    5, 6, 7, …, 12  TN    234           13
27       H         F       MS         PU        Dark Green  5, 6, 7, …, 12  UG    329           13
28       Z         F       MS         PU        Dark Green  5, 6, 7, …, 12  UG    574           15
29       L         F       MS         PU        Honey       5, 6, 7, …, 12  UH    116           10
30       J         F       MS         PU        Nicotine    5, 6, 7, …, 12  UH    432           14
31       V         F       MS         TPR       Honey       5, 6, 7, …, 12  TH    354           13
32       R         F       MS         PU        Nicotine    5, 6, 7, …, 12  UN    230           11

Mathematical Model

The objective function is to minimize nT, as given in Equation (1). Each job can be processed only once, as shown in Equation (2). Equation (3) shows that each position in each cell can be assigned to at most one job. Equation (4) forces jobs to be assigned to consecutive positions in each cell. Equation (5) controls setup requirements: if consecutive jobs are from different families, this constraint adds a setup between them. In Equations (6), (7a), and (7b), If-Then constraints are used to eliminate the nonlinearity in the model. Equation (6) checks whether a position is occupied by a job; if so, Equations (7a) and (7b) calculate the completion time of the job in that position. Equation (8) determines the tardiness value of a job, and Equation (9) identifies whether a job is tardy.


Table 2. Families for orders given in Table 1

Family No.  Jobs (due dates)
1           1 (10), 3 (22), 4 (11)
2           2 (12), 8 (15), 9 (11)
3           5 (20), 6 (17), 7 (13), 14 (14), 15 (10)
4           10 (14), 11 (16), 16 (20)
5           12 (17), 13 (11)
6           17 (16), 18 (12)
7           19 (14), 20 (13), 21 (16)
8           22 (11), 31 (13)
9           23 (16), 32 (11)
10          24 (12), 25 (17), 26 (13)
11          27 (13), 28 (15)
12          29 (10), 30 (14)

Table 3. Processing times for the examples of possible minicell loading cases

Family No.  Job No.  Processing Time (hrs)
F1          J1       3
F1          J3       9
F1          J4       2
F5          J12      5
F5          J13      3
F12         J29      2
F12         J30      4

Figure 5. Examples of possible minicell loading cases




Notation

Indices:
i  family index
j  job index
k  position index
m  cell index

Parameters:
n     number of jobs
n_i   number of jobs in family i
f     number of families
M     number of cells
P_ij  process time of job j from family i
D_ij  due date of job j from family i
S     setup time
R     a very big number (larger than the maximum possible tardiness value)

Decision Variables:
x_ijmk  1 if job j from family i is assigned to the kth position in cell m, 0 otherwise
Y_mk    0 if the kth position in cell m is occupied, 1 otherwise
W_mk    1 if a setup is needed before the job in the kth position in cell m, 0 otherwise
C_mk    completion time of the job in the kth position in cell m
T_mk    tardiness value of the job in the kth position in cell m
nT_mk   1 if the job assigned to the kth position in cell m is tardy, 0 otherwise

Objective Function:

$$\min Z = \sum_{m=1}^{M}\sum_{k=1}^{n} nT_{mk} \qquad (1)$$

Subject to:

$$\sum_{m=1}^{M}\sum_{k=1}^{n} x_{ijmk} = 1 \quad \text{for } i=1,\ldots,f,\; j=1,\ldots,n_i \qquad (2)$$

$$\sum_{i=1}^{f}\sum_{j=1}^{n_i} x_{ijmk} \le 1 \quad \text{for } m=1,\ldots,M,\; k=1,\ldots,n \qquad (3)$$

$$\sum_{i=1}^{f}\sum_{j=1}^{n_i} x_{ijmk} \ge \sum_{i=1}^{f}\sum_{j=1}^{n_i} x_{ijm(k+1)} \quad \text{for } m=1,\ldots,M,\; k=1,\ldots,n-1 \qquad (4)$$

$$1 + W_{mk} \ge \sum_{j=1}^{n_i} x_{ijmk} + \sum_{q \in (f \setminus i)}\sum_{j=1}^{n_q} x_{qjm(k-1)} \quad \text{for } i=1,\ldots,f,\; m=1,\ldots,M,\; k=2,\ldots,n \qquad (5)$$

$$\sum_{i=1}^{f}\sum_{j=1}^{n_i} x_{ijmk} \le R\,(1 - Y_{mk}) \quad \text{for } m=1,\ldots,M,\; k=1,\ldots,n \qquad (6)$$

$$-C_{m1} + \sum_{i=1}^{f}\sum_{j=1}^{n_i} x_{ijm1}\,P_{ij} \le R\,Y_{m1} \quad \text{for } m=1,\ldots,M \qquad (7a)$$

$$C_{m(k-1)} - C_{mk} + S\,W_{mk} + \sum_{i=1}^{f}\sum_{j=1}^{n_i} x_{ijmk}\,P_{ij} \le R\,Y_{mk} \quad \text{for } m=1,\ldots,M,\; k=2,\ldots,n \qquad (7b)$$

$$C_{mk} - \sum_{i=1}^{f}\sum_{j=1}^{n_i} x_{ijmk}\,D_{ij} \le T_{mk} \quad \text{for } m=1,\ldots,M,\; k=1,\ldots,n \qquad (8)$$

$$T_{mk} \le R \cdot nT_{mk} \quad \text{for } m=1,\ldots,M,\; k=1,\ldots,n \qquad (9)$$

Definition of Variables:

$$x_{ijmk},\, W_{mk},\, nT_{mk},\, Y_{mk} \in \{0,1\}; \quad C_{mk} \ge 0,\; T_{mk} \ge 0 \quad \text{for all } i, j, m, k$$
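The study solved this model with ILOG OPL 5.5. For readers who want an executable reference, the sketch below restates Equations (1) through (9) using the open-source PuLP library on a tiny invented instance (three jobs in two families, one cell). It is an assumed re-statement for illustration, not the authors' implementation, and all data values are hypothetical.

```python
# Assumed PuLP re-statement of Equations (1)-(9); tiny hypothetical instance.
from pulp import (LpProblem, LpMinimize, LpVariable, lpSum, LpBinary,
                  PULP_CBC_CMD)

P = {(1, 1): 3, (1, 2): 2, (2, 1): 4}   # P[i,j]: process times (hypothetical)
D = {(1, 1): 5, (1, 2): 9, (2, 1): 8}   # D[i,j]: due dates (hypothetical)
S, R, M = 1, 1000, 1                    # setup time, big number, cells
jobs = list(P)                          # (family, job) index pairs
n = len(jobs)
K = range(1, n + 1)                     # positions
cells = range(1, M + 1)

prob = LpProblem("cell_loading", LpMinimize)
idx = [(i, j, m, k) for (i, j) in jobs for m in cells for k in K]
x = LpVariable.dicts("x", idx, cat=LpBinary)
mk = [(m, k) for m in cells for k in K]
Y = LpVariable.dicts("Y", mk, cat=LpBinary)
W = LpVariable.dicts("W", mk, cat=LpBinary)
nT = LpVariable.dicts("nT", mk, cat=LpBinary)
C = LpVariable.dicts("C", mk, lowBound=0)
T = LpVariable.dicts("T", mk, lowBound=0)

prob += lpSum(nT[m, k] for m in cells for k in K)                       # (1)
for (i, j) in jobs:                                                     # (2)
    prob += lpSum(x[i, j, m, k] for m in cells for k in K) == 1
for m in cells:
    for k in K:
        occ = lpSum(x[i, j, m, k] for (i, j) in jobs)
        prob += occ <= 1                                                # (3)
        prob += occ <= R * (1 - Y[m, k])                                # (6)
        prob += (C[m, k]
                 - lpSum(x[i, j, m, k] * D[i, j] for (i, j) in jobs)
                 <= T[m, k])                                            # (8)
        prob += T[m, k] <= R * nT[m, k]                                 # (9)
    for k in list(K)[:-1]:                                              # (4)
        prob += (lpSum(x[i, j, m, k] for (i, j) in jobs)
                 >= lpSum(x[i, j, m, k + 1] for (i, j) in jobs))
    prob += (-C[m, 1]
             + lpSum(x[i, j, m, 1] * P[i, j] for (i, j) in jobs)
             <= R * Y[m, 1])                                            # (7a)
    for k in list(K)[1:]:
        for fam in {i for (i, _) in jobs}:                              # (5)
            prob += 1 + W[m, k] >= (
                lpSum(x[i, j, m, k] for (i, j) in jobs if i == fam)
                + lpSum(x[i, j, m, k - 1] for (i, j) in jobs if i != fam))
        prob += (C[m, k - 1] - C[m, k] + S * W[m, k]                    # (7b)
                 + lpSum(x[i, j, m, k] * P[i, j] for (i, j) in jobs)
                 <= R * Y[m, k])

prob.solve(PULP_CBC_CMD(msg=False))
print("tardy jobs:", int(prob.objective.value()))
```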

Genetic Algorithm

First, an initial population of n chromosomes is formed randomly. Then, mating partners are determined using mating strategies, and the crossover and mutation operators are applied to generate offspring. To select the next generation, the parents are added to the selection pool along with the offspring, and the next generation is selected from this pool based on fitness function values. These steps are repeated until the user-specified number of generations is reached. Finally, the best chromosome obtained across all generations is returned as the final solution.
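The following Python skeleton sketches this generational loop. All helper functions are hypothetical placeholders for the operators described in the subsections below (the study's GASA tool itself was written in C#), and fitness is assumed to be the number of tardy jobs, so lower values are better.

```python
# Hypothetical skeleton of the generational loop described above.
def genetic_algorithm(init_population, evaluate, mate, crossover, mutate,
                      select, generations):
    population = init_population()
    best = min(population, key=evaluate)
    for _ in range(generations):
        pairs = mate(population)                        # mating strategy
        offspring = [mutate(crossover(p1, p2)) for p1, p2 in pairs]
        population = select(population, offspring)      # next generation
        best = min(population + [best], key=evaluate)   # track overall best
    return best
```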

Chromosome Representation

A chromosome represents an individual solution and consists of genes corresponding to jobs. Each gene uses the representation code (X, Y), where X denotes the job number and Y denotes the cell to which job X is assigned. The sequence of genes in a chromosome also determines the sequence of jobs in the cells. As an illustration, Figure 6 shows an example for a 4-job, 2-cell problem, where Jobs 1 & 3 are assigned to the first cell and Jobs 4 & 2 are assigned to the second cell, in the order stated.

Figure 6. A Chromosome representation for a 4-job and 2-cell problem

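A minimal sketch of this decoding; the gene order used here is one plausible arrangement consistent with the Figure 6 example.

```python
# (X, Y) genes: X = job number, Y = cell; gene order = processing sequence.
chromosome = [(1, 1), (4, 2), (3, 1), (2, 2)]

def decode(chromosome):
    """Return the job sequence implied for each cell by the gene order."""
    cells = {}
    for job, cell in chromosome:
        cells.setdefault(cell, []).append(job)
    return cells

print(decode(chromosome))  # {1: [1, 3], 2: [4, 2]}
```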

Mating

Three different mating strategies are used to determine mating pairs: Random (R), Best-Best (B-B), and Best-Worst (B-W). The reproduction probabilities of the chromosomes are calculated according to their fitness function values; the next step depends on the selected mating strategy. If the Random mating strategy is selected, the mating pairs are determined randomly with respect to their reproduction probabilities using the roulette wheel approach, and each chromosome and its randomly determined partner produce one offspring. If the Best-Best mating strategy is selected, all chromosomes are ranked by reproduction probability in descending order; the best chromosome is then paired with the second best, the third with the fourth, and so on. In addition, the first X% of the pairs produce 3 offspring, the next Y% produce 2 offspring, and the last Z% produce 1 offspring. If the Best-Worst mating strategy is selected, all chromosomes are again ranked by reproduction probability in descending order; the best chromosome is paired with the worst, the second best with the second worst, and so on.
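A sketch of the three pairing rules under stated assumptions: reproduction probabilities are taken to be inversely related to nT (the text does not give the exact formula), and the X%/Y%/Z% offspring counts used with Best-Best mating are omitted.

```python
import random

def reproduction_probs(population, fitness):
    # Assumed fitness-proportional probabilities; nT is minimized, so
    # lower fitness gets higher probability (simple inversion).
    weights = [1.0 / (1 + fitness(c)) for c in population]
    total = sum(weights)
    return [w / total for w in weights]

def mate_random(population, probs):
    # Each chromosome mates with a roulette-wheel-selected partner.
    return [(c, random.choices(population, weights=probs, k=1)[0])
            for c in population]

def rank_desc(population, probs):
    return [c for _, c in sorted(zip(probs, population), key=lambda t: -t[0])]

def mate_best_best(population, probs):
    ranked = rank_desc(population, probs)
    return list(zip(ranked[0::2], ranked[1::2]))   # best with 2nd best, etc.

def mate_best_worst(population, probs):
    ranked = rank_desc(population, probs)
    return list(zip(ranked, reversed(ranked)))[: len(ranked) // 2]
```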

Crossover

Two different strategies are used to perform the crossover operation: Position-Based Crossover (P-B) (Syswerda, 1999) and Order Crossover (OX) (Davis, 1985). The crossover operation is applied to the identified pairs with probability PC; if crossover is not performed, the first parent is copied as the offspring. The crossover operator affects the sequence of jobs but not their cell assignment.



In other words, the crossover is applied to the genes' X element, not to the Y element.
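A sketch of Order Crossover under one plausible reading of this rule: the job permutation is recombined (a slice of parent 1 is kept in place, and the remaining jobs follow in parent 2's order), while each position keeps parent 1's cell gene. Position-Based Crossover is omitted, and the PC probability check is left to the caller.

```python
import random

def order_crossover(parent1, parent2):
    """OX on the X (job) elements; Y (cell) genes stay as in parent 1."""
    n = len(parent1)
    a, b = sorted(random.sample(range(n), 2))      # random slice bounds
    kept = {parent1[i][0] for i in range(a, b + 1)}
    filler = [job for job, _ in parent2 if job not in kept]
    xs, fi = [], 0
    for pos in range(n):
        if a <= pos <= b:
            xs.append(parent1[pos][0])             # slice copied from parent 1
        else:
            xs.append(filler[fi])                  # rest in parent 2's order
            fi += 1
    return [(job, parent1[pos][1]) for pos, job in enumerate(xs)]

p1 = [(1, 1), (2, 1), (3, 2), (4, 2)]
p2 = [(4, 2), (3, 2), (2, 1), (1, 1)]
print(order_crossover(p1, p2))
```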

Mutation

The mutation operator works in two steps. The first step mutates the job sequence, using only the Reciprocal Exchange (R-E) mutation strategy (see Gen and Cheng, 1997); it is performed with probability PMJ. The second step mutates cell assignments, using one of two strategies: Random (R) or Reciprocal Exchange mutation; it is performed with probability PMC.
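A sketch of the two-step operator under stated assumptions: reciprocal exchange swaps two whole genes (so each job keeps its cell), only the random variant of the cell-assignment step is shown, and the PMJ/PMC values are placeholders.

```python
import random

def mutate(chromosome, cells, pmj=0.1, pmc=0.1):
    """Two-step mutation: job-sequence swap, then cell reassignment."""
    genes = list(chromosome)
    if random.random() < pmj:          # reciprocal exchange on job sequence
        i, j = random.sample(range(len(genes)), 2)
        genes[i], genes[j] = genes[j], genes[i]
    if random.random() < pmc:          # random cell reassignment
        i = random.randrange(len(genes))
        job, _ = genes[i]
        genes[i] = (job, random.choice(cells))
    return genes

print(mutate([(1, 1), (4, 2), (3, 1), (2, 2)], cells=[1, 2], pmj=1.0, pmc=1.0))
```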

Selection

In this study, the selection pool consists of all offspring and some of the parents, and the next generation is selected from this pool. The selection from the parents is a two-step process. First, the parents are ranked by reproduction probability in descending order, and the best PE% of them advance directly to the selection pool. Then, the remaining (100-PE)% are selected from the parents randomly, based on their reproduction probabilities, using roulette wheel selection. Once the selection pool is identified, the chromosomes are ranked by reproduction probability and a final selection is made from this pool to form the next generation. In some experiments, a certain percentage of the lowest performers (PW%) is also allowed to advance automatically to the next generation, to avoid premature convergence of the population.
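A sketch of this two-stage selection under simplifying assumptions: the roulette step here samples with replacement, reproduction probabilities are again approximated by inverted nT, and the optional PW% worst-performer pass-through is omitted.

```python
import random

def next_generation(parents, offspring, fitness, pop_size, pe=0.2):
    """Build the next generation from all offspring plus selected parents."""
    ranked = sorted(parents, key=fitness)          # lower nT is better
    n_elite = int(pe * len(ranked))
    pool = list(offspring) + ranked[:n_elite]      # elites advance directly
    rest = ranked[n_elite:]
    if rest:
        weights = [1.0 / (1 + fitness(c)) for c in rest]  # crude roulette
        pool += random.choices(rest, weights=weights, k=len(rest))
    return sorted(pool, key=fitness)[:pop_size]    # final selection from pool
```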

ANALYSIS OF RESULTS

The results are grouped into four sections: 1) Genetic Algorithm Application, 2) Comparison of Mathematical Models with Genetic Algorithm, 3) Due Date Sensitivity Analysis, and 4) Setup Time Sensitivity Analysis. The experimental conditions are described first, and then the results obtained are discussed.

Data Sets Used

Nine data sets are used in the experimentation; their details are listed in Table 4. Data sets 1, 2, and 3 are realistic data sets obtained directly from the shoe manufacturing company. Data sets 4, 5, and 6 are relatively smaller and are generated from data sets 1, 2, and 3, respectively, for only one cell. Similarly, for multiple cells, smaller data sets 7, 8, and 9 are generated from data sets 1, 2, and 3, respectively, by reducing batch sizes and due dates.

Genetic Algorithm Application

In this experiment, data sets 1, 2, and 3 are used. Initially, the default parameters given in Table 5 are used; then the values of the GA parameters are changed one at a time in order to obtain better combinations. These parameters are listed in Table 6. Ten replications are performed. The best solution is 3 tardy jobs for all three data sets. The frequencies of the best solution, as well as the average values for the better combinations, are given in Tables 7, 8, and 9. For data set 1, four combinations stood out as the best; combination 1 has the highest frequency of the best solution, but there is no significant difference among them according to the ANOVA test results (P=0.292 > α=0.05). For data set 2, the top six combinations are given in Table 8; combination 3 has the best frequency among all combinations. Since there is a significant difference among the six combinations (P=0.008