
ProMAX Developer’s Programming Guide Contents ➲ Introduction 1 ➲ Quick Start 3 ➲ Overview of a ProMAX Process 4 ➲ Creating your own Directory Hierarchy 5 ➲ Writing a Menu 6 ➲ Installing the Menu 7 ➲ Overriding the Default ProMAX Files 8 ➲ Writing a ProMAX Program 10 ➲ Viewing Online Documentation 11 ➲ Writing Helpfiles 12 ➲ Self-guided Tutorial 13 ➲ Support Documentation 15 ➲ System Overview 17 ➲ Your Development Directory 18 ➲ Tool Anatomy 19 ➲ Programming Exercises: Simple Tools (amp_ratio) 20 ➲ amp_ratio Exercise 1: Adding Trace Headers 21 ➲ Menus 22 ➲ Global Parameters 22 ➲ amp_ratio Exercise 2: ordered parameter files 22 ➲ amp_ratio Exercise 3: time gates and tables 23 ➲ Debugging 24 ➲ C Programming Environment 25 ➲ Tool Types 26 ➲ Programming Exercise: Ensemble Tools (AVO) 27 ➲ Programming Exercise: Panel Tools 29 ➲ Programming Exercise: Input Tools 30 ➲ Programming Exercise: IPC Tools 31 ➲ System Overview 33 ➲ ProMAX Organization: Areas, Lines, and Flows 34 ➲ The User Interface: The Flow Builder 36 ➲ Menu Files 37 ➲ Flow Execution 38 ➲ Super Executive 38 ➲ Executive 39 ➲ Executive 51

➲ System Architecture 52 ➲ Headers and Global Variables 53 ➲ Input Tools 54 ➲ Re-entrancy 55 ➲ Common Blocks and Parms Structures 56 ➲ Executive Functions 57 ➲ Communication between Tools 58 ➲ OPF Database 59

➲ Make System 61 ➲ Working with ProMAX Systems 62 ➲ Getting Started 63 ➲ System Administrator Setup 63 ➲ User Setup 65 ➲ Converting to the New System 69 ➲ Understanding the Directory Structure 70 ➲ $PROMAX_HOME/port/include/make/ 70 ➲ $PROMAX_HOME/port/bin/ 73 ➲ $PROMAX_HOME/sys/bin/ 74 ➲ $PROMAX_HOME/port/src/exe/exec/ 76 ➲ Customizing the System 77 ➲ Toggling Products: .promax 78 ➲ Adding a New Tool 81 ➲ Making Your New Executable 84 ➲ Incorporating New Functionality 87 ➲ Creating Menus 88 ➲ Adding a ProMAX Menu 89 ➲ Changing Files 91 ➲ Understanding the Makefile System 94 ➲ C++ Template Instantiation 94 ➲ Terms and Variable Descriptions 95 ➲ Makefile Techniques 117 ➲ Directory Structure 121 ➲ Directory Hierarchy 122 ➲ Machine-dependent Directories 125 ➲ Directory Naming Conventions 126 ➲ Product-dependent Subdirectories 127 ➲ Third-party Software 128 ➲ Recompilation - GNU Make 129 ➲ Makefile Rules 130 ➲ Makefile Options 131 ➲ User and Master Versions 132 ➲ Master Versions for Landmark Clients 134 ➲ C Environment 135 Other Docs


➲ C Process Components 136 ➲ C and FORTRAN Links 137 ➲ Calling a FORTRAN Routine from a C Routine 137 ➲ Calling a C Routine from a FORTRAN Routine 137 ➲ Global Parameters 139 ➲ Trace Header Index Values 139 ➲ Re-Entrancy 140 ➲ Tool Types 141 ➲ Executive Tools 142 ➲ Simple Tools 145 ➲ Ensemble Tools 147 ➲ Panel Tools 148 ➲ Single Buffer Tools 153 ➲ Double Buffer Tools 156 ➲ Complex Tools 157 ➲ Stand-alone Tools 161 ➲ IPC Tools 163 ➲ IPC Tool Details 164 ➲ IPC Tool Debugging 165 ➲ Global Parameters 167 ➲ Overview of Global Parameters 168 ➲ Common Blocks and C Structure Descriptions 169 ➲ Ordered Parameter Files 171 ➲ Overview of the ProMAX Database 172 ➲ Standard Orders 176 ➲ Trace Headers 179 ➲ Overview of Trace Headers 180 ➲ Definition and Usage of Standard Header Entries 184 ➲ Alphabetical Reference of Trace Header Entries 184 ➲ Sequential Reference of Trace Header Entries 185 ➲ Parameter Tables 197 ➲ Overview of Parameter Tables 198 ➲ Structure of ProMAX Tables 198 ➲ Table Rules 200 ➲ Table Interpolation 200 ➲ X Values in Tables 204 ➲ Table Extrapolation 205 ➲ Table Subroutine Categories 206 ➲ Examples of Table Routines 208 ➲ FORTRAN Code Examples 208 ➲ C Code Examples 213 Other Docs


➲ Memory Management 219 ➲ Overview of Memory Management 220 ➲ C Memory Management 221 ➲ Multi-dimensional Arrays 221 ➲ Multi-dimensional Routine Names 223 ➲ References 225 ➲ FORTRAN Memory Management 226 ➲ mem.inc 226 ➲ RSPACEz and ISPACEz 227 ➲ Big Vector Routines 228 ➲ Debugging with dbx 229 ➲ Overview of dbx 230 ➲ System Review 231 ➲ Debugging 233 ➲ Writing to the packet.job File 233 ➲ Creating an Executable for dbx 233 ➲ Running dbx 234 ➲ Menus 237 ➲ Overview of ProMAX Menus and Landmark Lisp 238 ➲ Parts of a ProMAX menu 239 ➲ Menu Heading 240 ➲ Parameter Specifications 240 ➲ exec_data 242 ➲ Rules 243 ➲ Tips on Writing Menus 246 ➲ Use Examples 246 ➲ Keep it Simple 246 ➲ Usable Lisp Functions 247 ➲ Lisp Primitives 247 ➲ Access & Assignment Functions 250 ➲ ProMAX Lisp Extentions 252 ➲ Parameter Menu System 254 ➲ Parameter Keywords 255 ➲ Parameter Types and Attributes 255 ➲ Including Other Menus 266 ➲ Rules and Context Sensitivity 266 ➲ pwin 276 ➲ Example Macro: Display Shots with AGC 278 ➲ Helpfiles 283 ➲ FrameMaker-formatted Helpfiles 284 ➲ Starting FrameMaker 284 ➲ Creating a New Helpfile 284 ➲ Editing a Helpfile 284 Other Docs


➲ Working with FrameMaker Files 285

➲ Helpfile Organization 288 ➲ Example Helpfile 289 ➲ Theory 289 ➲ Usage 290 ➲ References 290 ➲ Parameters 291 ➲ Interactive Display 292 ➲ Common Error Messages 292 ➲ Customizing the User Interface 293 ➲ Hypertext 294

➲ Code Standards 295 ➲ C Coding Standards 296 ➲ ProMAX C Routine Documentation 297 ➲ FORTRAN Coding Standards 299 ➲ General Text File Format 300 ➲ Variable Naming and Declarations 301 ➲ C preprocessor 302 ➲ Comments 303 ➲ White Space 304 ➲ Code Structure 304 ➲ Miscellaneous 305 ➲ Purify 305 ➲ ProMAX Fortran Routine Documentation 305 ➲ Portable Code 307 ➲ Man Page Reference 309 ➲ Using the Man Pages 310 ➲ Finding what you want 311 ➲ Appendix: Expanded Directory Structure 313 ➲ Expanded Directory Structure 314 ➲ Appendix: C Programming Examples 325 ➲ Example Include Files 326 ➲ cglobal.h 327 ➲ cpromax.h 336 ➲ Example Simple Processes 345 ➲ simple.menu 346 ➲ simple.c 347 ➲ ampRatio.c 349 ➲ Appendix: Simple Tool Examples 357 ➲ amp_ratio.menu 358 ➲ amp_ratio.inc 359 Other Docs


➲ amp_ratio.f 360 ➲ ampRatio.c 367 ➲ Appendix: Ensemble Tool Examples 375 ➲ AVO Ensemble Tools 376 ➲ avo.menu 377 ➲ avo.inc 378 ➲ avo.f 379 ➲ avoC.c 383 ➲ Trace Interpolation Tools 388 ➲ prestk_interp.menu 389 ➲ prestk_interp.inc 390 ➲ prestk_interp.f 391 ➲ prestk_interp.c 393 ➲ Appendix: Panel Tool Examples 397 ➲ panel_test.menu 398 ➲ panel_test.inc 399 ➲ panel_test.f 400 ➲ panelTest.c 402 ➲ Appendix: Single Buffer Tool Examples 405 ➲ ens_define.menu 406 ➲ ens_define.inc 407 ➲ ens_define.f 408 ➲ interp_sb.menu 412 ➲ interp_sb.c 413 ➲ Appendix: Double Buffer Tool Examples 417 ➲ semblance.menu 418 ➲ semblance.inc 420 ➲ semblance.f 421 ➲ interp_db.menu 427 ➲ interp_db.c 428 ➲ Appendix: Complex Tool Examples 433 ➲ transform.menu 434 ➲ transform.inc 435 ➲ transform.f 436 ➲ transform.c 439 ➲ Appendix: Input Tool Examples 445 ➲ sine_wave.menu 446 ➲ sine_wave.inc 448 ➲ sine_wave.f 449 ➲ sineWave.c 455 Other Docs


➲ Appendix: Disk Iteration Examples 463 ➲ sc_amp.menu 464 ➲ sc_amp.inc 465 ➲ sc_amp.f 466 ➲ disk_iter.menu 469 ➲ disk_iter.c 470 ➲ Appendix: Stand-alone Tool Examples 473 ➲ prestack.menu 474 ➲ prestack.f 475 ➲ Makefile_prestack 480 ➲ poststack.f 481 ➲ Makefile_poststack 484 ➲ poststack.c 485 ➲ vel_io.f 488 ➲ Appendix: IPC Tool Examples 495 ➲ IPC Menu Code 496 ➲ IPC C Code 497 ➲ IPC FORTRAN Code 499 ➲ amp_ratio Menu Code 503 ➲ ampRatio C Code 505 ➲ amp_ratio FORTRAN Code 511 ➲ Appendix: Global Include File Examples 519 ➲ global.inc 520 ➲ cglobal.h 526 ➲ Appendix: Ordered Parameter File Examples 537 ➲ db_disp.f 538 ➲ comp_opf.menu 545 ➲ comp_opf.c 548 ➲ Appendix: Lisp Menu Example 551 ➲ EXAMPLE.menu 552 ➲ Appendix: C Library Summary 557 ➲ Error Routines 559 ➲ Control Functions 560 ➲ Parameter Input 561 ➲ Configuration 562 ➲ Database 563 ➲ Memory Allocation/Management 567 ➲ Trace Headers 569 ➲ Trace Muting 570 ➲ Velocity (Geophysical Routines) 571 Other Docs

➲ Vector Routines 572 ➲ Parameter Lists 573 ➲ Packet Files 574 ➲ Parameter Interpolation 575 ➲ Unix Interface 576 ➲ IPC Tools 577 ➲ Parameter Tables 580 ➲ PVM 585 ➲ Matrix Functions 588 ➲ Interpolation Routines 590 ➲ Math Functions 594 ➲ Signal Processing 596 ➲ Plotting 598 ➲ Data Structures 599 ➲ Sorting and Searching 602

➲ Appendix: FORTRAN Library Summary 603 ➲ Area/Line(Survey)/Flow 605 ➲ Configuration 608 ➲ Database Orders 609 ➲ Domain Mapping 611 ➲ Miscellaneous 1 612 ➲ Trace I/O 613 ➲ Trace Executive 614 ➲ Trace Headers 615 ➲ Memory Management 616 ➲ Mute/Kill 617 ➲ Statics 618 ➲ Summing 619 ➲ Error Routines 620 ➲ Parameter Tables 622 ➲ Tables Obsolete 624 ➲ Parameter Interpolation 625 ➲ Parameter Lists 626 ➲ String Decoding 628 ➲ Miscellaneous 2 629 ➲ Parameter Input 631 ➲ Packet Files 632 ➲ Character Routines 633 ➲ Seg-Y Disk 634 ➲ Geophysical Routines 635 ➲ Signal Processing 637 ➲ Disk I/O 640 ➲ SEG Vector Routines 641 ➲ Resource Reporting 647 ➲ UNIX Interface 648 Other Docs


➲ Index 651


Introduction

This Programmer’s Guide is designed to help you create ProMAX software modules. It assumes that you are familiar with large software systems and with tasks such as setting up a make system and accessing parameter tables. It provides instructional text for newer ProMAX programmers and serves as a reference book for experienced ProMAX developers.

Organization

We divided this manual into chapters and sections that discuss the key processes of the ProMAX system. In general, this consists of:

•  environment setup and testing

•  tools and tool building

We added extensive Appendices to include:

•  an expanded directory structure

•  source code examples

•  C and FORTRAN library summaries

Documentation Conventions

We use the following documentation conventions in this manual:

•  Boldface represents menu commands, push-button options, and keystrokes.

•  Courier refers to text that you should type into a command line and represents program code listings. When a line of text or code is longer than we can print on one line in this manual, we use the UNIX standard of a backslash (\) at the end of the line to indicate that it continues on the next line.

•  Hypertext represents hyperlink-active text; click on this red text to go to related information.

•  Italics emphasizes key terms and concepts and refers you to other documents, chapters, or sections.

•  [Italics inside brackets] represent dummy parameters; replace the brackets and their contents with your user-specific information.

•  Text enclosed in a box signifies a warning, caution, or note. A warning or caution warns you when you could lose data or crash your system. A note emphasizes an important issue.

   Example
   This is an example of the box we use for warnings, cautions, and notes.

We suggest that you read:

•  the Quick Start chapter first for a brief overview of ProMAX IPC processes if you want to implement new code without investing time in learning the other parts of the ProMAX processing system

•  the Self-Guided Tutorial chapter first if you are interested in a step-by-step course in writing ProMAX software processes

•  the System Overview chapter first if you want an overall understanding of the ProMAX system. You should then be able to determine how this manual can further help you.

Quick Start

This chapter is designed to give a programmer a quick overview of a particular type of ProMAX process, the IPC (Inter-Process Communication) tool. It should allow you to implement some new code without investing much time in learning other parts of the ProMAX programming system. If you have previous experience with ProMAX programming, this chapter may be all that you need to get a program running on the ProMAX system. If this is not enough to get you going, the Self-Guided Tutorial chapter takes you step-by-step through a ProMAX programming course.

Topics Covered in this Chapter:

➲ Overview of a ProMAX Process
➲ Creating your own Directory Hierarchy
➲ Writing a Menu
➲ Installing the Menu
➲ Overriding the Default ProMAX Files
➲ Writing a ProMAX Program
➲ Viewing Online Documentation
➲ Writing Helpfiles


Overview of a ProMAX Process

There are three parts to writing a ProMAX process:

•  the menu

•  the program

•  the helpfile

The menu is the window that the User Interface pops up to let a user enter parameters for your program. The program is the code that actually runs. The helpfile hopefully tells people how to use your process without them having to call you too much.

There are two classes of ProMAX programs: inline flow tools and stand-alones. Inline flow tools process data as it streams by in a flow; examples are AGC, FK filter, and migrations. Inline flow tools do not contain an asterisk in the Processes list. Stand-alone tools either read their own data, as occurs with random data access, or do not need data, as occurs with velocity manipulation. Stand-alone tools include such processes as the interactive velocity editor, interactive velocity analysis, and some of the statics programs. These tools do contain an asterisk in the Processes list.

There are two types of inline flow tools: IPC tools and executive tools. IPC tools used to be called socket tools. This Quick Start chapter will only show you how to write an IPC tool. This is a relatively new approach which we recommend for many applications, especially if you are just getting started. Developing IPC tools is significantly quicker and simpler than developing non-IPC tools, at little performance cost. If you are an expert—for example, if you have taken a ProMAX programming course—you may decide to write inline flow tools using executive tools.


Creating your own Directory Hierarchy

To help give some order to your ProMAX programming, we have a recommended directory hierarchy for you to create in your home directory. Use the Makeadvance program, a shell script located in $PROMAX_HOME/port/bin, to create this directory hierarchy in your home directory. The environmental variable $PROMAX_HOME is the path to where your ProMAX software is installed; the default setting during installation for $PROMAX_HOME is /advance. The Makeadvance command produces a mirror of the $PROMAX_HOME directory hierarchy, but without any of the files. This lets you work on ProMAX modules under your home directory.


Writing a Menu

The menu file is an ASCII file that is interpreted on the fly by the User Interface. Writing a menu is simple enough that you should be able to do this using the ProMAX menu files delivered with your system as examples. The menus are in $PROMAX_HOME/port/menu/promax. A good example to start with is vdatum.menu; see also EXAMPLE.menu. vdatum.menu is an example of a menu for an IPC tool. Most menus in the system, such as agc.menu, are for inline flow tools that are non-IPC tools. The only difference between these two menus occurs at the beginning of the exec_data section of the menu. You should be able to tell the minor differences between an IPC tool and a non-IPC tool, so you can still use all the menus as examples. An example of a menu for a stand-alone module is autostat.menu. Again, the only difference between this type of menu and others occurs at the beginning of the exec_data portion of the menu.

We recommend that you write your menu in your mirror of $PROMAX_HOME/port/menu/promax under your home directory; that is, under ~/$PROMAX_HOME/port/menu/promax. Menus are actually written in Lisp. If you write menus often, you will notice that you are starting to learn Lisp. For more information on writing menus, refer to the Menus chapter.

To test the menus you write, use the pwin program, which is in $PROMAX_HOME/sys/bin. If you are an emacs user, type pwin my.menu and you should be in familiar surroundings. If you are not an emacs user, type pwin my.menu t and bring up your favorite editor in another window; you can create, debug, and edit the menu using your editor. After you write your edits to disk, click with the mouse in the pwin window and you will see the sample menu updated to reflect your changes. If you have syntax errors in your menu, pwin will generally print the line number containing the syntax error.


Installing the Menu

Once you have written a menu that you are happy with, you have to install it in the ProMAX list of menus that appears in the User Interface. We recommend that you copy the file $PROMAX_HOME/port/menu/promax/Processes to ~/$PROMAX_HOME/port/menu/promax/Processes, which is the 2D list of menus. The 3D list lives in $PROMAX_HOME/port/menu/promax3d. Now add your menu to the appropriate category; how to do this should be straightforward from looking at the file. Be sure you do not use the .menu suffix.


Overriding the Default ProMAX Files

You need to tell the User Interface where to find your personal ProMAX files: your new Processes file, menu, executable, and helpfile. Edit or create a .promax file in your home directory to look like this (substitute your home directory for /mnt/stof):

(quote ((:product                    ; This is a comment
   ("P" "ProMAX"
    ".:/mnt/stof/promax/1998.6/rs6000/exe"                ; executables
    "/mnt/stof/promax/1998.6/port/menu/promax:promax"     ; menus
    "/mnt/stof/promax/1998.6/port/menu/promax/Processes"  ; Processes file
    "promax:/mnt/stof/promax/1998.6/port/help/promax"     ; helpfiles
    ""                                                    ; miscellaneous files (use default)
    ""                                                    ; data area (use default)
    t)
   ("p" "Prospector" "" "prospector" "prospector/Processes"
    "prospector" "" "" t)
   ("3" "3D Promax" ".:/mnt/stof/promax/1998.6/rs6000/exe"
    "/mnt/stof/promax/1998.6/port/menu/promax3d:promax3d"
    "/mnt/stof/promax/1998.6/port/menu/promax3d/Processes"
    "promax3d" "" "" t)
  ))
)

Remember that a backslash (\) at the end of a line indicates that the line of code is longer than we can print on one line of this manual and, therefore, continues on the next line; you should type it all as one line without the backslash. Also, you can use a “~” rather than typing out your home directory.


Note that specifying /mnt/stof/promax/1998.6/menu:promax creates a search path, separated by a “:”, for menus. Also note that just specifying promax or “.” refers to the default master directory tree, such as /advance/port/menu/promax. You can use a “.” only for the executables master directory; for the others, you must use promax. Thus, the search path above looks first in your home directory, and then in the master location.

When you now start the user interface by typing promax or $PROMAX_HOME/sys/bin/promax, the User Interface should read your personal Processes file, and you should see your new tool in the list of Processes. You should be able to bring your menu into a flow and parameterize it.

You can supply any number of product stanzas in your personal .promax file, as long as the initial single-character strings, e.g. “P” and “3”, are unique. The User Interface will then display your list under the “Products” menu item in the Flow window, and you can select any of them with the mouse prior to building or editing a flow. This avoids having to exit the User Interface in order to switch between development and/or production environments.

Finally, it is important to remember that the User Interface looks at the setting of the environment variable new_menu to determine the menu initialization behavior. If new_menu=t, then the User Interface will re-initialize the menu every time it is displayed. If new_menu=f, then the flow builder will only read files once. This means that if you modify the Processes file or a menu file after starting the flow builder, it will not see your changes. Furthermore, if you add a tool to a flow, the corresponding menu file will be saved with the flow, and the menu file will not be read again when the flow is accessed again. Setting new_menu=t is a big help in testing menus and should be part of the ProMAX programmer’s environment. This setting slows down the User Interface because the menu files are reread; therefore, it is not recommended for the typical production processing environment.


Writing a ProMAX Program

To examine a sample IPC tool written in C, look in $PROMAX_HOME/port/src/exe/c_socket and $PROMAX_HOME/port/src/exe/ampRatio. A sample FORTRAN IPC tool is in $PROMAX_HOME/port/src/exe/f_socket and $PROMAX_HOME/port/src/exe/amp_ratio. Each of these directories contains sample menus, although you should copy the menus to a more proper location, such as your own menu directory. Sample FORTRAN stand-alone programs are in $PROMAX_HOME/port/src/exe/stand_alone. A good portion of this manual deals with creating inline tools that are executive tools, not IPC tools.

To compile and link these programs, go to ~/$PROMAX_HOME/port/src/c_socket. Then type

    $PROMAX_HOME/sys/bin/gmake

which should compile and link the sample code. The executable produced by this make will reside in ~/$PROMAX_HOME/rs6000/exe. For more information on the Make System, refer to the Make System chapter. For more description of IPC tools or stand-alone programs, see the Tool Types chapter.

If you copy this directory and start making modifications to the code, you will need to change the name in the Makefile to match the name of your directory. You can change your product file to include your home directory in the search path for ProMAX executables. See the description in the product file.

If you install your working source code under the $PROMAX_HOME tree, you can do a gmake there and your executable will be placed under the standard ProMAX executable path, $PROMAX_HOME/sys/exe. You could do your development under the $PROMAX_HOME directory tree rather than in your home directory, but we discourage this. It will become a maintenance hassle when many developers work in that area.


Viewing Online Documentation

All ProMAX subroutines are accessible from either FORTRAN or C. We have online manual pages for these subroutines, which you can access by typing

    $PROMAX_HOME/port/bin/aman tblToMatrix

The aman command supports the -k option; to find all C subroutines for dealing with tables, type

    aman -k table | fgrep "C routine"

For a quick summary of the ProMAX subroutines, type

    aman c_promax

or

    aman f_promax


Writing Helpfiles

Documentation is an important part of the ProMAX system. You should write a helpfile for your new process if it is going to be used in a production processing environment. At Landmark, we use Adobe FrameMaker to generate and edit most of our online and printed software documentation. For more information, see the Helpfiles chapter, or contact any member of the Documentation Department for assistance.


Self-guided Tutorial

This chapter is designed to be used as a guide for programmers who want to learn to write ProMAX software modules. The tutorial directs you through a sequence of chapters to read and specific exercises to work on in order to learn the parts of the system.

Topics Covered in this Chapter:

➲ Support Documentation
➲ System Overview
➲ Your Development Directory
➲ Tool Anatomy
➲ Programming Exercises: Simple Tools (amp_ratio)
➲ Debugging
➲ C Programming Environment
➲ Tool Types
➲ Programming Exercise: Ensemble Tools (AVO_exer)
➲ Programming Exercise: Panel Tools
➲ Programming Exercise: Input Tools
➲ Programming Exercise: IPC Tools

Materials Needed for this Tutorial:

•  This Programmer’s Reference Manual

•  The ProMAX 2D Reference Manual

•  The FORTRAN Programmer’s Reference Manual or the C Programmer’s Reference Manual, depending on your programming language preference. These manuals are summarized in the C Library Summary and FORTRAN Library Summary appendices.

•  Access to a computer on which ProMAX and the ProMAX Development Environment have been installed. In addition, the computer must have an editor program, such as vi or emacs, to allow you to create new files, as well as a C or FORTRAN compiler.

To use this tutorial, simply read through each section and follow the reading and programming exercise assignments. The specific reading and exercise assignments are highlighted as follows:

➱ Now do this.

Before you get started, please note that this chapter refers to the environmental variable $PROMAX_HOME, which is the path name of the directory under which ProMAX is installed. Ask your system administrator for help in setting up your ProMAX environment if he or she has not already done so.


Support Documentation

The online C or FORTRAN Programmer’s Reference Manual contains documentation on hundreds of subroutines that are available in the ProMAX system. We have summarized these subroutines in the C Library Summary and FORTRAN Library Summary appendices.

➱ Take a few minutes now to skim through the C or FORTRAN library summaries to become familiar with the subroutine categories, such as Trace Headers and Parameter Input. To get an even better overview of the manual, read through the individual subroutine names and descriptions. We discuss only a small percentage of the available subroutines in this tutorial, so it is up to you to study the library to get the maximum use from it.

You will also want to refer to the ProMAX Reference Manual. This manual provides basic information about the ProMAX system from the user’s point of view, along with additional information about Parameter tables and other parts of the database. We will give you specific reading assignments in these chapters at the appropriate times.

Finally, the ProMAX routines are also documented in the online manual pages. You can access these manual pages by using the aman command. For example, to view the documentation for the ProMAX routine tblCopy, type:

    aman tblCopy

from the command line of the computer. Your path must include $PROMAX_HOME/port/bin. To see a name and short description of all table routines, type

    aman -k tbl

➱ Try typing aman tblCopy to see if your environment is set up properly for using the aman online documentation. If a message is returned saying that there is no manual entry for tblCopy, contact your system administrator for help in setting environment variables.


Another aman command allows you to see all ProMAX subroutine names by category. Type

    aman c_promax

to see all ProMAX C routines by category, or

    aman fortran_promax

to see all ProMAX FORTRAN routines by category.


System Overview

The primary function of ProMAX is to create, modify, and execute processing flows. A flow is a sequence of processes which are used to manipulate seismic data. Flows are built by selecting processes from a Processes List. A typical flow contains an input process, one or more data manipulation processes, and a display and/or output process.

Some of the chapters in this manual provide basic information on how the system components work together and introduce ideas that are present throughout the rest of this tutorial. Others describe the specific processes of the ProMAX system. Because you will be writing new menus which appear as part of the User Interface, you should have a basic familiarity with the selection of menu parameters at the User Interface level.

➱ Turn to the chapter entitled Working with ProMAX in the ProMAX 2D Reference Manual. Read the sections entitled Getting Started and Building a Flow. These sections provide a brief overview of Areas, Lines, and Flows.

➱ Read the System Overview, Executive, and Directory Structure chapters in this manual. If you are already familiar with the User Interface, the ProMAX data directory structure, tables, and datasets, you may skip these chapters.


Your Development Directory

In order to develop ProMAX software, a particular directory structure must exist under the programmer’s home directory. The following exercises will help you create this directory structure.

➱ Read the Make System chapter. Complete the Makeadvance exercise in the User Setup section; be sure you can complete the Makeexec exercise. Contact your system administrator if the computer refuses to allow you to complete everything discussed in the chapter.

➱ Be sure to read the Adding A New Tool section of the Tool Types chapter. This section provides a recap of the critical components of a new processing tool and shows how they work together.


Tool Anatomy

Before you actually get started on the programming exercises, you first need to understand the structure of the ProMAX tools.

➱ Read the Executive Tools section of the Tool Types chapter. You do not need to read beyond this section, since the remainder of the chapter will be assigned later in the tutorial. This assigned reading section provides an overview of the structure of most ProMAX processes, including the processes in the programming exercises in this tutorial.


Programming Exercises: Simple Tools (amp_ratio)

The exercises in this tutorial consist primarily of example programs to which you add code as a way of learning the important parts of ProMAX programming. The first such program is called amp_ratio.f for FORTRAN programmers, and ampRatio.c for C programmers. The functionality of the C and FORTRAN versions is the same, so the program will in general be referred to as amp_ratio.

amp_ratio is an example of a simple tool. A simple tool operates on a single trace at a time. amp_ratio operates on a single trace by sliding two windows down the trace, taking the ratio of powers in the windows, and outputting the ratio as a sample at the beginning of the upper window. This is a crude first break picker, since the peak value of the output trace is normally at approximately the first break time. While this routine is not high tech by any means, the programming exercises involving amp_ratio demonstrate four important features of ProMAX programming:

•  the general mechanics of adding a tool

•  adding and manipulating trace headers

•  the use of the ordered parameter files

•  the use of tables to store time gates

The FORTRAN programming exercises that demonstrate these last three features are amp_ratio1.f, amp_ratio2.f, and amp_ratio3.f, respectively. Working code which contains solutions to all three of these programming exercises can be found in amp_ratio.f. The corresponding C programming exercises and code are ampRatio1.c, ampRatio2.c, ampRatio3.c, and ampRatio.c.
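As a point of reference for these exercises, the following is a minimal sketch, in plain C, of the sliding-window power-ratio computation described above. It makes no ProMAX library calls; the window length, the trace array, and the output placement convention are simplified stand-ins for what the real amp_ratio tool obtains from its menu parameters and from the Executive.

/* Crude first-break attribute: slide two adjacent windows down the trace
 * and output, at the first sample of the upper (earlier) window, the
 * ratio of power in the lower (later) window to power in the upper one.
 * Peaks of the output are normally near the first-break time.            */
void ampRatioSketch(const float *trace, int nsamp, int winlen, float *ratio)
{
    for (int i = 0; i < nsamp; i++) {
        double upper = 0.0, lower = 0.0;
        for (int j = 0; j < winlen; j++) {
            int iu = i + j;           /* sample in the upper window */
            int il = i + winlen + j;  /* sample in the lower window */
            if (iu < nsamp) upper += (double)trace[iu] * trace[iu];
            if (il < nsamp) lower += (double)trace[il] * trace[il];
        }
        /* The small constant keeps the ratio finite where the trace is dead. */
        ratio[i] = (float)(lower / (upper + 1.0e-30));
    }
}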

➱ To begin, move to your own maxtool/amp_ratio directory by typing the following command on the command line:

    cd ~/$PROMAX_HOME/port/src/lib/maxtool/amp_ratio

➱ If you are programming in C, copy the .c files from the system amp_ratio directory to your own directory by typing:

    cp $PROMAX_HOME/port/src/lib/maxtool/amp_ratio/*.c .


If you are programming in FORTRAN, copy the .f and .inc files from the system amp_ratio directory to your own directory by typing a command similar to the one above.

➱ Set the access permissions to the files by typing:

    chmod 644 *.*

➱ Next you will need to copy the menu files into your personal menu directory. To do this, go to your own menu directory by typing:

    cd ~/$PROMAX_HOME/port/menu/promax

then type the following command:

    cp $PROMAX_HOME/port/src/lib/maxtool/amp_ratio/*.menu .

➱ Set the permissions so that you can edit the files by typing:

    chmod 644 *.menu

amp_ratio Exercise 1: Adding Trace Headers

The first amp_ratio exercise involves getting a first break pick time from the amp_ratio program, getting a quality estimate of the first break pick time, and placing those values into a trace header.
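If you want a concrete picture of the two values this exercise asks for, here is an illustrative sketch, in plain C, of deriving a pick time and one possible quality estimate from the amp_ratio output trace. It is not part of the supplied exercise code, and the quality measure shown (peak ratio divided by mean ratio) is only one reasonable choice; writing the values into the trace header uses the routines covered in the Trace Headers chapter and is not shown here.

/* Derive a first-break pick time and a simple quality estimate from the
 * amp_ratio output trace.  All names and the quality definition are
 * illustrative only.                                                     */
void pickFirstBreak(const float *ratio, int nsamp, float dt_ms,
                    float *pick_time_ms, float *quality)
{
    if (nsamp <= 0) { *pick_time_ms = 0.0f; *quality = 0.0f; return; }

    int    imax = 0;
    double sum  = 0.0;
    for (int i = 0; i < nsamp; i++) {
        if (ratio[i] > ratio[imax]) imax = i;   /* location of the peak ratio */
        sum += ratio[i];
    }
    double mean   = sum / nsamp;
    *pick_time_ms = imax * dt_ms;               /* pick time in milliseconds  */
    *quality      = (mean > 0.0) ? (float)(ratio[imax] / mean) : 0.0f;
}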

➱ Read the Trace Headers chapter.

➱ Next, edit amp_ratio1.f (or ampRatio1.c) so that the first break time and first break quality information can be written into the trace header for each trace. You can find solutions in the file amp_ratio.f or ampRatio.c.

➱ If you do not know how to compile and link the code into a new exec.exe, review the Make System chapter. Try your new program! Remember to add the path to amp_ratio1.menu to your Processes list. Review the Make System chapter if necessary. Also remember that you need to have a .promax file set up in your directory in order to tell ProMAX where to look for your Processes file and where to find the executable code that you have created. The .promax file is also discussed in the Make System chapter.


Menus

A ProMAX menu is a file which controls how menu parameters appear to the user.

➱ Read the Menus chapter.

➱ Copy the EXAMPLE.menu file to your own menu directory by typing the following on the command line:

    cd ~/$PROMAX_HOME/port/menu/promax
    cp $PROMAX_HOME/port/menu/EXAMPLE.menu .

➱ Use the pwin program described in the Menus chapter to experiment with menu parameters in EXAMPLE.menu.

Global Parameters

There are many numerical variables that are related to a dataset, such as the number of samples per trace and the number of trace headers that exist for each trace. ProMAX has a global include file that you can access to easily get this information.

➱ Read the Global Parameters chapter. C programmers can then look at:

    $PROMAX_HOME/port/include/cglobal.h

and/or read the C Environment chapter for more information.

amp_ratio Exercise 2: ordered parameter files

The next amp_ratio exercise involves putting the first break information from the previous exercise into the ordered parameter files in the database.

➱ Read the Ordered Parameter Files chapter.

➱ Edit amp_ratio2.f or ampRatio2.c to add parameters to the TRC (trace) ordered parameter file; fill those database locations with the appropriate information. Add the necessary code to the menu file to allow the option of writing to the database. You might notice that there is already code in amp_ratio2.f and ampRatio2.c to take care of the trace headers.


Remember to change the Processes list to point to the right menu since there is a different menu for amp_ratio1 and amp_ratio2.

➱ Run Makeexec on your new program and try it out.

➱ Run the database display program and see the first break pick times.

amp_ratio Exercise 3: time gates and tables

The final amp_ratio exercise is to limit the time window for which the first break is searched. In the previous example, the entire trace was searched for the first break. The start and end times of the analysis window can be controlled through use of ProMAX parameter tables which are stored in the database.

➱ Read the Parameter Tables chapter. Additional information can be found in a chapter with the same name in the ProMAX Reference Manual.

➱ Edit amp_ratio3.f or ampRatio3.c; use ProMAX parameter tables to limit the time window for which the first break is searched. Edit the menu to allow the option to use the time gates and to get the time gates from the database.


Debugging

Most full-time programmers find that use of a debugger speeds up their code development. Use of the program dbx with ProMAX requires a few special considerations, primarily related to the packet.job file.

➱ Read the Debugging with dbx chapter. Practice using the debugger on one of the amp_ratio or ampRatio routines.


C Programming Environment

The system level code of ProMAX is largely written in FORTRAN, although most new ProMAX module development is now in C and C++. The C programming environment actually lies on top of the FORTRAN environment; therefore, there are a few things that the C programmer should know about, including some conveniences that have been developed for programming in C.

➱ Read the C Environment chapter.


Tool Types

Before you continue on with other programming examples, you need to know about other kinds of processing tools. As you learned earlier in this chapter, the program amp_ratio is an example of a simple tool, which is a processing tool that only processes one data trace at a time. There are several other types of tools that make handling multiple traces very convenient.

➱ Read the Tool Types chapter. Pay particular attention to the responsibilities that each tool has in both the init and exec subroutines, such as setting global variables and trace header values that the trace executive uses.


Programming Exercise: Ensemble Tools (AVO)

This section involves programming with ensemble tools. Ensemble tools operate on bundles of traces.

➱ Go to your personal maxtool directory:

    cd ~/$PROMAX_HOME/port/src/lib/maxtool

➱ Copy the avo_exer directory from the $PROMAX_HOME directory tree by typing the following on the command line:

    cp -r $PROMAX_HOME/port/src/lib/maxtool/avo_exer .

This routine is an AVO (Amplitude Variation with Offset) module. The intent of this routine is that a CDP ensemble, sorted by the absolute value of offset, is input into the exec subroutine as a 2D array. The routine fits a least squares straight line to the amplitude versus offset graph for each time sample in the input array (see the following figure).

(Illustration: Least squares fit line through amplitude versus offset data, showing the sample amplitudes at each offset, the fitted line, and its zero-offset intercept.)

A single data trace is output for each CDP gather that is input to the routine. The sample value at time T on an output trace is either the slope or the zero-offset intercept of the least squares curve shown in the figure above. A menu parameter allows the user to select either slope or intercept as the output.
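For reference, the least squares fit of amplitude against offset for a single time sample can be written in a few lines of plain C, as sketched below. The argument names and array layout are hypothetical stand-ins for the 2D ensemble the Executive passes to the exec subroutine; an ensemble tool would apply something like this once per time sample and write either the slope or the intercept into its single output trace.

/* Fit amplitude = intercept + slope * offset by least squares over the
 * traces of one CDP ensemble, for one time sample.  Purely illustrative;
 * no ProMAX library calls are used.                                      */
void lsfitAmpVsOffset(const float *amp, const float *offset, int ntraces,
                      float *slope, float *intercept)
{
    double sx = 0.0, sy = 0.0, sxx = 0.0, sxy = 0.0;
    for (int i = 0; i < ntraces; i++) {
        sx  += offset[i];
        sy  += amp[i];
        sxx += (double)offset[i] * offset[i];
        sxy += (double)offset[i] * amp[i];
    }
    double denom = ntraces * sxx - sx * sx;
    if (denom != 0.0) {
        *slope     = (float)((ntraces * sxy - sx * sy) / denom);
        *intercept = (float)((sy - *slope * sx) / ntraces);
    } else {
        /* All offsets equal (or no traces): no meaningful slope. */
        *slope     = 0.0f;
        *intercept = (ntraces > 0) ? (float)(sy / ntraces) : 0.0f;
    }
}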


➱ Edit amp_ratio1.f or ampRatio1.c; input code where the comment lines direct you to do so. A working solution to the exercise is in amp_ratio.f and ampRatio.c. Remember to change your tools_to_add file and Makefile before running Makeexec. The entry in tools_to_add will look like this:

    AVO_EXER same panel

Note that the tool type is panel. This is because ensemble tools have the same calling arguments as panel tools and are inserted in the same part of the toolcall.f file. A further example of an ensemble tool is in $PROMAX_HOME/port/src/lib/maxtool/prestk_interp, which outputs more traces in an ensemble than are input.


Programming Exercise: Panel Tools

Recall from the Tool Types chapter that panel tools are used to process 2D arrays of traces that are too large to fit into memory at one time. The 2D array is processed in pieces and then blended back together by the trace executive.

The next programming example is a panel tool in which a 2D panel of traces is given to the exec subroutine. The exec subroutine sets the value of each sample in the input array to a constant value; for example, all sample values are set to 2.0. The sample values of the next panel input are set to a value that is 1.0 more than the previous panel. The input from the menu controls how large a panel will be and how many edge traces will be blended with adjacent panels. This lets you experiment with panel parameters and see the result on the screen. When using Screen Display or Trace Display to see the data, use the Entire Screen option for scaling the data traces so that the differences in trace amplitudes can be seen.
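The exec-time behavior just described amounts to a constant fill that increases by 1.0 from one panel to the next; a plain-C sketch of that behavior is shown below. The panel layout (an array of trace pointers) and the static counter are hypothetical stand-ins for what the Executive actually hands to a panel tool, and none of the panel blending or trace-handling bookkeeping is shown.

/* Overwrite every sample in the current panel with a constant that grows
 * by 1.0 with each new panel.  Purely illustrative; no ProMAX calls.     */
void fillPanelWithConstant(float **panel, int ntraces, int nsamp)
{
    static float value = 2.0f;              /* first panel gets 2.0        */
    for (int itrc = 0; itrc < ntraces; itrc++)
        for (int isamp = 0; isamp < nsamp; isamp++)
            panel[itrc][isamp] = value;
    value += 1.0f;                          /* next panel gets value + 1.0 */
}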

➱ Go to your maxtool directory and copy the panel directory from the main system maxtool directory via the following commands:

    cd ~/$PROMAX_HOME/port/src/lib/maxtool
    cp -r $PROMAX_HOME/port/src/lib/maxtool/panel .

➱ Edit the file panel_test.f or panelTest.c where the comments direct you to do so. A working solution to this exercise is in panel_test.f or panelTest.c.

➱ Remember to change your tools_to_add file and Makefile before running Makeexec. The entry in tools_to_add will look like this:

    PANEL_TEST same ensemble

➱ Also remember to change your Processes file before starting ProMAX so that the new menu will be read.


Programming Exercise: Input Tools

An input tool is, by default, the first processing tool in a flow. It is the tool that feeds trace data to the rest of the flow. The exercise for an input tool can be found in

    $PROMAX_HOME/port/src/lib/maxtool/sine_wave

The sine_wave.f and sineWave.c programs generate traces and give them to the trace executive to be passed along to other modules. The traces output from sine_wave consist of a set of summed sine waves of user-specified frequency.
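The trace generation itself is ordinary signal arithmetic; the sketch below shows it in plain C. The sample interval, trace length, and frequency list are hypothetical stand-ins for the menu parameters the sine_wave tool reads, and none of the input-tool bookkeeping (headers, global variables, handing traces to the Executive) is shown.

#include <math.h>

/* Fill one trace with a sum of sine waves of the given frequencies.
 * Purely illustrative; no ProMAX library calls are used.                */
void makeSummedSineTrace(float *trace, int nsamp, float dt_sec,
                         const float *freq_hz, int nfreq)
{
    const double PI = 3.14159265358979323846;
    for (int i = 0; i < nsamp; i++) {
        double t   = i * (double)dt_sec;
        double sum = 0.0;
        for (int k = 0; k < nfreq; k++)
            sum += sin(2.0 * PI * freq_hz[k] * t);
        trace[i] = (float)sum;
    }
}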

➱ Copy the sine_wave directory to your own maxtool directory (see the previous examples on how to copy a directory). Edit sine_wave1.f or sineWave1.c and fill in the missing code where the comments indicate code needs to be added. The entry in the tools_to_add file looks like this:

    SINE_WAVE same complex

A working solution to this problem is in sine_wave.f and sineWave.c. If you have problems with this module, review the Tool Types chapter in this manual, paying special attention to the responsibilities of an input tool, such as global variables and headers. Then review the section on complex tools in that same chapter, paying attention to the modes, in particular EX_FLUSHMODE and EX_QUITMODE.


Programming Exercise: IPC Tools

An IPC (Inter-Process Communication) tool is a special type of tool that allows the trace executive to pass data to and from a separate UNIX process or separate piece of executable code. IPC tools used to be called socket tools.

➱ Read the IPC Tools section of the Tool Types chapter.

➱ Copy the directory $PROMAX_HOME/port/src/exe/f_socket or $PROMAX_HOME/port/src/exe/c_socket to your own directory tree; that is, to ~/$PROMAX_HOME/port/src/exe/f_socket or c_socket. Notice that you are not in the maxtool directory where all work has been done until now. The src/exe directory is for programs that are not linked to the Trace Executive, or said another way, tools that are not called from toolcall.f. This exercise is just a demonstration of how to make an IPC tool executable.

There is a file in the c_socket and f_socket directories called Makefile. The Makefile contains, among other things, the name of the output executable. The name is specified where the Makefile states:

    name = c_socket

The name of the executable must match the name of the directory in which the Makefile resides. The Makefile also contains a list of the object files that need to be used.

➱ Make the IPC tool by simply typing gmake in the directory where Makefile resides. Run the program by putting an entry in the Processes file for the path to the menu. You can see another example of an IPC tool in $PROMAX_HOME/port/src/exe/ampRatio for C programmers, or in $PROMAX_HOME/port/src/exe/amp_ratio for FORTRAN programmers. These examples show the amp_ratio program in the form of IPC tools. The Makefiles and menus are included and can be compiled and used in a processing flow.


System Overview

This chapter provides an overview of ProMAX. It describes the directory structure of the system and the User Interface.

Topics covered in this chapter:

➲ ProMAX Organization: Areas, Lines, and Flows
➲ The User Interface: The Flow Builder
➲ Menu Files
➲ Flow Execution
➲ Super Executive
➲ Executive


ProMAX Organization: Areas, Lines, and Flows

The primary function of ProMAX is to create, modify, and execute processing flows. The program is built upon three levels of organization: areas, lines, and flows. An area can represent any grouping of seismic lines, but it is usually used as a prospect-level collection of seismic lines. A line (in 2D) or survey (in 3D) usually represents a single seismic line or survey in the traditional sense. A flow is a sequence of processes which are used to manipulate seismic data. Flows are built by selecting processes from a Processes list. A typical flow contains an input process, one or more data manipulation processes, and a display and/or output process.

Before we describe the system software any further, it is useful if we first describe the directory structure of the system, particularly the data files (see the ProMAX Directory Structure illustration at the end of this chapter). The ProMAX home directory defaults to /promax/1998.6, but you can re-specify this via the environmental variable PROMAX_HOME. Beneath the ProMAX home directory is a subdirectory named data/, which you can also re-specify with the environmental variable PROMAX_DATA_HOME (see the ProMAX Data Directory Structure illustration at the end of this chapter). Any directory or symbolic link within the data/ subdirectory is, by definition, an area. Any subdirectory or symbolic link beneath an area subdirectory is, by definition, a line or a survey. ProMAX expects a line to be a collection of data that can be described by a single geometry.

Most of the actual data files reside at the line or survey directory level (see the Dataset Components, Ordered Database Files, and Parameter Tables illustrations at the end of this chapter). Traces, trace headers, Ordered Parameter Files, and Parameter Tables, such as velocities, first break picks, mutes, gates, and trace kills, also reside at this level. Any subdirectory beneath a line or survey subdirectory represents a further subgrouping, namely a flow. The flow subdirectory contains all of the files related to a particular processing flow, such as the saved binary job flow and the runtime job output (see the Flow Components illustration at the end of this chapter).


There is an exception to the rules that govern the existence of areas, lines, and flows in the database: any subdirectory with a period in the name is ignored to facilitate the customization of the database to contain non-ProMAX files.


The User Interface: The Flow Builder

What people have come to know as ProMAX is sometimes called the User Interface, but is more correctly called the flow builder. This is the very top level of the system (see the System Overview illustration at the end of this chapter). If you are familiar with the previous generations of the processing system, the flow builder is analogous to the editor that was used to build jobs in those previous generations. In fact, the flow builder is just a very specialized editor; it actually has no notion of geophysics built into it. Its primary purpose is to present menus, accumulate a collection of parameters, write those parameters to disk, and then launch programs that will use the parameters. The ProMAX Flow Builder Window Map illustration (at the end of this chapter) sketches the various flow builder menus, and the Flow Builder illustration (at the end of this chapter) shows a screen image of the Flow Menu where jobs are actually built and executed.


Menu Files

In order for the flow builder to know how to present parameters to the user, it must read menu specification files, which are also called menu files. Menu files are closely integrated with the underlying processing tools; both are usually written by the same author. Within the menu files, programmers specify what parameters should exist, what their properties are—such as parameter type, default value, and description—and how they interrelate. Experience has shown that good menu parameter specification evolves to the point of approaching a language; at Landmark, we use Lisp, a language with suitable features. The resulting menu specification is extremely well-suited to handling traditional problems, such as context-sensitive hiding of unused parameters, and is clearly one of the cornerstones of ProMAX’s success. Please refer to the Menus chapter for further discussion of ProMAX menu specifications.


Flow Execution

When a user clicks on the Execute button on the Flow Menu, the first step that the flow builder performs is to write all of the parameters to disk in the form of a packet file. A packet file is analogous to the ASCII job deck that existed in older systems, except that it is binary in form and cannot be modified by any editor except the flow builder. This protects the integrity of the contents. The flow builder writes the packet file into the flow directory that was created when the user constructed a new flow or copied an existing flow. After the flow builder writes the packet file, it executes the following shell command:

    super_exec.exe packet_file host_name

Super Executive

The preceding shell command launches the Super Executive, or super exec for short. The flow builder executes this Super Executive shell immediately, unless the job was submitted to a queue. The command line supplies the Super Executive with the path to the packet file and the name of the host on which the flow builder is running.

The Super Executive is, in a sense, similar to the flow builder—it is restricted in purpose and has no notion of geophysics built into it. It is responsible for seeing that the intended processing actually happens so the flow builder can go off and do something else, like build another flow.

The Super Executive first checks for the presence of multiple tasks within the packet file. For example, if there are multiple serial input steps in a flow, the Super Executive subdivides the packet file and executes the separate jobs sequentially. In addition, the Super Executive checks to see if a job should be run on another host. It also performs other tasks, such as expanding certain macros—for example, the Parameter Test—in what we could loosely describe as filtering the packet file. The Super Executive knows how to do most of these things because it receives instructions from programmers via special parameters that are included in the menu files, and are therefore included in the packet file.

Some of the functionality within the Super Executive could be included in the flow builder without changing the behavior of the system. The exception is that the Super Executive waits on the completion of the tasks that it spawns, rather than spawning them in the background. This enables it to spawn processes sequentially if required, and makes the spawning of one subprocess contingent on completion of the previous one. It also allows the Super Executive to report the completion status of any process back to the flow builder—even processes that are not strictly part of the ProMAX product.

After the Super Executive modifies the packet file, it writes a new packet file in a scratch directory and launches the process that does the actual work. In a normal processing flow, the form of this command is:

    exec.exe packet_file host_name PID

Executive

The preceding command launches the Executive, or exec for short. The command line supplies the Executive with the path to the packet file, the name of the host on which the flow builder is running, and the process ID of the Super Executive. If the processing flow contains stand-alone or other special tools, the correct path to the process, as specified in the menu file, is substituted for exec.exe. The full path to the process can vary with release and may also be controlled by the user via the Alternate Executive directive.

The Executive includes the actual processing tools, such as gain, decon, filter, etc. Ordinary processing tools are not separate programs, but are rather a set of routines that are linked together in the Executive. Tools can be single trace input and output types or one of a number of multi-trace types (see the Processing Pipeline illustration at the end of this chapter). The primary function of this document is to provide you with information on how to add functionality to the Executive.

You will see the terms exec and Executive used frequently in this document. They are often used interchangeably, but we will restrict the term exec to refer literally to the program named exec.exe, and the term Executive to refer to the body of software that surrounds the actual processing tools and handles the flow of data within the pipeline.


Each processing tool that is linked into the Executive must have two parts: an initialization routine and an execution routine. The initialization routine checks the input parameters for validity, sets global parameters (if appropriate), creates header entries, allocates memory, and performs the other miscellaneous tasks that must be completed before traces can be processed. The execution routine is ideally a narrow-minded routine that simply processes the traces and headers. Both routines are surrounded by the Executive, which does everything it can to make life easy for the initialization and execution routines.
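As a purely illustrative sketch of this division of labor, the skeleton below shows one hypothetical init/exec pair written in C. Every name in it (myToolInit, myToolExec, the arguments, and the steps described in the comments) is a placeholder rather than the real ProMAX interface; the actual entry points, argument conventions, and library calls are covered in the Tool Types and C Environment chapters.

#include <stdlib.h>

/* Hypothetical skeleton of a tool's two entry points.                     */

static int    window_len = 0;    /* a parameter read at init time          */
static float *work       = NULL; /* scratch space allocated at init time   */

/* Initialization routine: runs once, before any traces arrive.            */
void myToolInit(void)
{
    /* 1. Read and validate the menu parameters (using the parameter-input
     *    routines summarized in the C Library Summary appendix).          */
    window_len = 50;                      /* stand-in for a menu value     */

    /* 2. Set global parameters and create any new trace header entries.   */

    /* 3. Allocate whatever memory the execution routine will need.        */
    work = (float *)malloc((size_t)window_len * sizeof(float));
}

/* Execution routine: for a simple tool, runs once per trace and should do
 * nothing but process the samples and headers it is handed.               */
void myToolExec(float *trace, int nsamp)
{
    for (int i = 0; i < nsamp; i++)
        trace[i] *= 2.0f;                 /* trivial per-sample processing  */
}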


Flow Execution41

$PROMAX_HOME

Developer’s Programming Guide

/sys

/bin promax promax3d promaxvsp /frame /exe /sdi exec.exe super_exec.exe /(3rd party softwar *.exe /plot /lib lib*.a /help promax2d.ps promax3d.ps promaxvsp.ps (Manuals)

/port

/promax *.lok (Frame helps *.help (ASCII help /promax3d /promaxvsp

/menu promax promax3d promaxvsp /misc *.rgb (colormaps) ProMAX defaults /etc config_file product pvmhosts qconfig /scratch /queues /area1 /data (or $PROMAX_DATA_HOME)

/line1

ProMAX Directory Structure

Other Docs

Known Problems

Flow Execution42

ProMAX Data Directory Structure (figure: the $PROMAX_DATA_HOME area/line/flow hierarchy, including dataset files, flow files DescName, TypeName, packet.job, and job.output, parameter tables, trace headers, and the Ordered Parameter Files database OPF.CDP, OPF.CHN, OPF.ILN, OPF.LIN, OPF.OFB, OPF.PAT, OPF.SIN, OPF.SRF, OPF.TRC, and OPF.XLN)

Dataset Components (figure: a dataset consists of traces, trace headers, and a miscellaneous map; typical database attributes are shown by order for Line, Source, Trace, Surface Location, Channel, CDP, and Offset Bin)

Flow Components (figure: a flow directory contains the packet file packet.job, the job output job.output, and the DescName and TypeName files; parameter tables and ordered database files hold mutes, gates, surgical mutes, kills, reversals, horizons, velocities, etc.)

System Overview (figure: the Job Builder promax.exe submits a packet file through the batch queue to the Super Executive super_exec.exe, which launches stand-alone programs or the Executive exec.exe; the Executive's tool caller runs tools such as Diskread, AGC, NMO, Mute, and Filter, drawing on the line database information: datasets, parameter tables, ordered database files, etc.)

ProMAX Flow Builder Window Map (figure)

Flow Builder (figure: overlapping windows for the current Area/Line/Flow, Edit Flow, Seismic Datasets, Parameter Tables, Processes, Global Options, and Processing Flow)

Processing Pipeline (figure: input data passes through a chain of single-trace and multi-trace processes, such as FK Filter, Gain, Running Mix, Mute, Stack, and NMO, to output data)

Executive

This chapter describes the architecture and functions of the Executive system.

Topics covered in this chapter:

➲ System Architecture
➲ Headers and Global Variables
➲ Input Tools
➲ Re-entrancy
➲ Common Blocks and Parms Structures
➲ Executive Functions
➲ Communication between Tools
➲ OPF Database


System Architecture

The ProMAX Executive is a pipeline system. In this system, traces are dropped into the flow one at a time by the input tool (see the Processing Pipeline illustration at the end of the System Overview chapter). Individual traces continue to drop through the flow until they encounter a multi-trace processing tool. The trace executive collects all of the necessary traces for the multi-trace processing tool and returns the traces to the flow when the tool is finished.

There are a variety of tool types; each type is designed to minimize the coding required to implement the necessary trace handling. These tool types range from simple, single-trace input/output tools, to multi-trace input/output tools for a variety of natural trace groupings, and finally to complex tools that give the programmer complete control of trace buffering. Please see the Tool Types chapter for a description of each tool type.

The Executive creates an essentially isolated environment for each tool. It manages most of the tedious duties of trace and trace header I/O, data buffering, initialization of global variables, and other miscellaneous tasks. The following sections describe these duties.


Headers and Global Variables

At run time, each tool is presented with a list of active headers and a set of initialized global variables. The header list contains the description, format, and size of all headers that are valid at that step in the flow. The global parameters can be categorized as static or volatile. The static global parameters are typically geometry-related: the number of CDPs, minimum and maximum station number, maximum offset in the data, etc. The volatile globals, called global run time (or run time) variables, are variables that can change from step to step within the flow: sample rate, trace length, sort order, maximum ensemble size, etc.

Headers can be added or deleted, and run time variables changed, at any step in the flow; for example, resampling changes the sample rate and trace length. Because of this, the Executive actually keeps a separate copy of the headers and run time globals for every step in the flow. In a sense, then, the environment changes dynamically as traces pass from tool to tool. Please see the Global Parameters and Trace Headers chapters for details on these subjects.

It is important to note, however, that the global run time attributes cannot change within a process or tool. In other words, while the sample rate, sort, trace length, and maximum ensemble size can all change from tool to tool, they cannot change from trace to trace (for example, by shot position) within a tool. Please review the list of run times in the Global Parameters chapter.


Input Tools

The primary responsibility for setting the globals and establishing the current header list is assigned to the first tool in the flow, which by definition is an input tool. All input tools must retrieve the current state of headers and run time variables from the input ProMAX dataset and must make these available to subsequent tools. The input tool must also retrieve the remaining static global variables from the Ordered Parameter Files (geometry).

If your input tool is reading a non-ProMAX (foreign) dataset or creating data, you are responsible for retrieving the necessary data from whatever source and creating all possible header and global values. See the Input Tool Examples in the Appendix for code examples.


Re-entrancy

Another related issue that the Executive manages is re-entrancy. As mentioned above, the Executive attempts to isolate each tool in such a way that the programmer codes the tool as if it were the only one in the flow. There is no need to worry about a ProMAX user putting your tool in the flow multiple times, which could otherwise create confusion over which set of parameters goes with which instance of the tool. Nor is there any need to worry about two tools coincidentally sharing identical menu parameters or internal local variables. The Executive ensures that each tool can access only the menu parameters from its corresponding menu in the flow, and it protects against clashes between duplicate variables.


Common Blocks and Parms Structures

The Executive manages a SAVED_PARMS COMMON block for each tool for FORTRAN code, or a parms structure for C code. The FORTRAN COMMON block or C parms structure stores variables that need to be passed from the initialization routine to the execution routine, or local variables that must retain their values between each call of the Executive routine. Clearly this must be accomplished by swapping a separate COMMON block in and out of memory for each step in the flow. Again, the application programmer can develop a routine as if it were the only tool, with a single COMMON block.

The FORTRAN SAVED_PARMS COMMON block must be fixed in size to accommodate memory swapping. Currently the size is limited to 1000 4-byte words. If your application requires more than a small fraction of this, you are probably misusing the space by passing arrays instead of pointers or indexes into the FORTRAN RSPACE memory array.

As described in the Executive section of the System Overview chapter, each tool consists of two FORTRAN or C routines: the initialization (or init) routine, and the execution (or exec) routine. The Executive runs each of the init routines in the flow once. The exec routines are cycled through many times until all of the traces are processed through the pipe. The init routine is responsible for reading menu parameters, making any one-time calculations, resetting run time parameters (for example, changing the maximum ensemble size to one if stacking), and creating or deleting header entries. Remember, changing the status of the headers or global run time parameters must be done in the init routine in order for other init routines to be aware of the change. This is one of the few situations where you must consider what happens before or after your application.
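As an illustration only (the structure and field names below are hypothetical and do not come from any ProMAX include file), the kind of state a C tool typically carries in its parms structure from the init routine to the exec routine might look like this:

   /* Hypothetical parms structure: values computed once in the init routine
      and reused on every call of the exec routine. */
   struct my_tool_parms {
       int   gate_len_samps;   /* menu gate length, converted to samples in init */
       int   ratio_hdr_index;  /* index of a trace header entry created in init  */
       float scale_factor;     /* one-time scale factor computed in init         */
   };

A FORTRAN tool would hold the equivalent variables in its SAVED_PARMS COMMON block, declared identically in the init and exec routines through the tool's include file.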


Executive Functions

The exec routine does the actual trace processing. The Executive passes traces and headers to the exec routine in buffers and expects the processed traces to be in the same buffers when exiting the routine. If the tool is a multi-trace type, the Executive will allocate the memory necessary to accumulate the proper trace grouping, then free the memory after that group of traces is processed and passed to the next tool. This memory management is done automatically, without any coding at the tool level.

Often you will need to dynamically allocate additional memory within the tool to hold temporary results. This is supported both in FORTRAN and C, but it is assumed that such memory will be de-allocated upon exiting the exec routine. Remember that many tools could comprise a flow: your tool could be repeated many times. After exiting your multi-trace exec routine, it might be some time before the remaining flow is executed and traces are again accumulated and passed to your exec routine. This is another situation where you must consider the entire flow instead of simply your application.

It is assumed system-wide that two ensembles will fit comfortably in memory. However, tools that require many ensembles, multiple copies of ensembles, or large collections of stacked traces should buffer traces on disk or process panels of traces.


Communication between Tools

As previously discussed, tool-to-tool isolation makes the programming environment simple, and the Executive does this task well. However, inter-tool or inter-process communication is necessary. Tools that change the sample rate or trace length, kill a trace, statically shift a trace, etc., need to communicate this change to processes later in the flow or in a subsequent flow.

Trace headers move with the data and are an excellent way to tag traces with information that may be required by a future process. Headers move through the flow and are stored on disk or tape upon output. Run time global parameters are also an excellent vehicle for communicating global changes in the data. As with headers, they move through the flow and are stored with the traces upon output. However, these global parameters cannot change from one trace to the next within a tool.


OPF Database

The Ordered Parameter Files database is another form of inter-tool communication. Refer to the Ordered Parameter Files chapter for a definition of this database. It is typically used to communicate information that is global to all datasets within the line; field geometry data is a good example. Convenience is a governing factor in choosing the form of communication that is used. For example, geometry information that is placed in the Ordered Parameter Files database can be conveniently accessed by stand-alone programs that do not wish to read trace data.


Make System

This chapter provides an overview of the ProMAX make system.

Topics covered in this chapter:

➲ Working with ProMAX Systems
➲ Getting Started
➲ System Administrator Setup
➲ User Setup: Aliases, Makeadvance, Makeexec
➲ Converting to the New System
➲ Understanding the Directory Structure
➲ Customizing the System
➲ Toggling Products: .promax
➲ Adding a New Tool
➲ Making Your New Executable
➲ Incorporating New Functionality
➲ Creating Menus
➲ Adding a ProMAX menu
➲ Changing Files
➲ Understanding the Makefile System


Working with ProMAX Systems

The ProMAX file and recompilation systems allow large groups of programmers to develop and maintain software for multiple machine types from a single master copy of source code. Some important features of these systems are:

• All Landmark software lies beneath a single directory, e.g. /promax/1998.6/, in subdirectories with standard UNIX names like bin/, include/, lib/, and src/. We will henceforth refer to this directory as the environment variable $PROMAX_HOME.

• To support concurrent development, each programmer has a subset of $PROMAX_HOME (containing the files they are working on) in their home directory.

• Machine-dependent subdirectories of $PROMAX_HOME enable a single file server or programmer's home directory to contain executable programs and libraries for all machine types supported by Landmark.

• Software is organized to facilitate the evolution of Landmark products, such as ProMAX, ProMAX VSP, and ProMAX 3D.

• Third-party software is organized to facilitate distribution (if permitted) to Landmark clients and to avoid conflicts in file names among different third-party vendors.

• GNU make (gmake) is used to build executable programs and libraries and is provided with the ProMAX developer kit.

• Programmers write Makefiles, one for each program or library, with compilation, link, and installation commands that may be customized without affecting other programs.

• Makefiles used at Landmark are designed to be used by Landmark clients.

• Source code can be compiled and linked from one location for all machine types supported by Landmark.


Getting Started

The following sections describe the System Administrator and User setup.

System Administrator Setup

The following steps must be performed after installing the ProMAX release to configure the Development Environment. These steps help ensure that you will have little trouble when attempting that first compile and link.

Modify the $PROMAX_HOME/port/include/make/master.make file

1. Change the line atopdir := /promax/1998.6 to be the absolute path to your $PROMAX_HOME directory. An example might be:

      atopdir := /Landmark/ProMAX/1998.6

2. Change the line utopdir := $(HOME)/promax/1998.6 to be the path where each developer will build and use his own copy of the ProMAX development tree. An example might be:

      utopdir := $(HOME)/Landmark/ProMAX/1998.6

   Generally, Landmark recommends the above convention of making each user's development path parallel the master installation path.

3. Verify that the line ctopdir := is blank. Typically you will leave this field blank. It is used when clients have in-house software which they want to be able to link with Landmark libraries, or when they have their own special versions of the Landmark libraries which should take precedence over the standard Landmark libraries.
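Although no example is supplied for ctopdir, it presumably takes an absolute path in the same way as atopdir and utopdir; a purely hypothetical entry pointing at a client tree laid out in parallel with the master installation might be:

      ctopdir := /ourcompany/promax-local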

Additional Setup Step

For each platform on which your company develops, review the corresponding [machine type].make file located in $PROMAX_HOME/port/include/make/. Such files are supplied for the following machine types:

• crayymp
• rs6000
• sgimips
• sgimips4
• solaris

The [machine type].make file should point to the default locations of such things as compilers, include files, and libraries; however, if your system has these located in nonstandard places, you will have to edit the appropriate [machine type].make file to reflect this.

Solaris Users

If your target development machine type is solaris, each development machine should have a symbolic link from /bin/cc to the location of your company's C compiler. This is used by the gmake system to figure out what platform you are on; it determines this via the extra DEFINES that each platform's C compiler sends in by default. To get the C compiler to be recognized on Solaris, you need to create the following symbolic link:

      ln -s /opt/SUNWspro/bin/cc /bin/cc

Automounter Users

If your development directories tend to be automounted, the makefile system sometimes has difficulty in determining what atopdir really is. It figures this out via the lines:

      atopdir := /promax/1998.6
      atopdir := $(shell cd $(atopdir); /bin/pwd)

If you suspect that you have a discrepancy here, cd to atopdir—in this case /promax/1998.6—and type /bin/pwd. If it is an automounted directory, it should return the actual path, something like /tmp_mnt/promax/1998.6. If it returns /promax/1998.6, you may have a problem which might be solved in a number of ways:

• Hard mount the ProMAX master directory.

• Change /bin/pwd to a pwd program which returns the proper path.

• Change the definition of atopdir to reflect the automounted version, as shown in the sketch below.
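For the last option, the change in master.make would simply hard-code the automounted path; a minimal sketch using the example path mentioned above:

      atopdir := /tmp_mnt/promax/1998.6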

User Setup

The following sections describe the User setup.

Creating Aliases, Makeadvance, Makeexec

To use the make system, complete the following steps if you have not already done so (this assumes that you have already installed the release distribution). Substitute /promax/1998.6 with whatever directory your system administrator used to install the release.

Because most ProMAX development and testing is done using the UNIX ksh command interpreter, including all Makefile and shell script commands, Landmark recommends you employ ksh for developing your own ProMAX modules as well. It is normally straightforward to develop under other command interpreters, and the following instructions include settings for several popular alternatives to the ksh.

1. Place the release toplevel directory into the environment variable PROMAX_HOME. For csh/tcsh users this line might look like:

      setenv PROMAX_HOME "/promax/1998.6"

   For ksh/zsh/sh users this line might look like:

      export PROMAX_HOME="/promax/1998.6"

2. Add $PROMAX_HOME/sys/bin and $PROMAX_HOME/port/bin to your path (or PATH) in your ~/.cshrc (or ~/.profile, depending on your shell and what files it looks for upon login). csh looks for a .cshrc; tcsh looks for a .tcshrc (if it cannot find one of these, it uses a .cshrc); zsh looks for a .zshrc; sh and ksh use .profile. If you are unsure of which shell you are using, the command env | grep SHELL should tell you.

   For csh/tcsh users this line might look like:

      set path = ($path /promax/1998.6/sys/bin /promax/1998.6/port/bin)

   For ksh/zsh/sh users this line might look like:

      export PATH="$PATH:/promax/1998.6/sys/bin:/promax/1998.6/port/bin"

3. Add the following lines to your ~/.cshrc, ~/.tcshrc, ~/.zshrc, or ~/.profile file.

   Csh/tcsh users:

      alias gmake '$PROMAX_HOME/sys/bin/gmake -I$PROMAX_HOME/port/include/make'
      alias Makeexec '$PROMAX_HOME/port/bin/Makeexec -I$PROMAX_HOME/port/include/make'

   Ksh/sh/zsh users:

      alias gmake='$PROMAX_HOME/sys/bin/gmake -I$PROMAX_HOME/port/include/make'
      alias Makeexec='$PROMAX_HOME/port/bin/Makeexec -I$PROMAX_HOME/port/include/make'

   This tells gmake and Makeexec where to look for included files.

4. Csh users should type source .cshrc to reset your aliases and paths. Ksh users should type the command . .profile to reset their aliases and paths.

5. Type Makeadvance. This will make a User development tree under your home directory. In addition to making lots of directories, Makeadvance will also put a Makefile into your ~/advance/port/src/exe/exec/ as well as your ~/advance/port/src/lib/maxtool/ directory. You should study the two Makefiles—they are short because they include other files that do most of the work. (The other files are in $PROMAX_HOME/port/include/make/.)

6. From somewhere in your ~/$PROMAX_HOME/port/src/ directory, type Makeexec. This will take a while.


Makeexec invokes a script that makes any or all variants of the ProMAX exec.exe files. It simply cd's to ~/$PROMAX_HOME/port/src/exe/exec, where the Makefile for exec.exe resides, and does a gmake with the arguments that you provide to Makeexec.

FORTRAN or C toolcall: By default, Makeexec builds and compiles a FORTRAN version of toolcall, toolcall.f. By setting the option language=C for the Makeexec command, that is, Makeexec language=C ..., Makeexec builds and compiles a C version of toolcall, toolcall.c. This allows the building of exec.exe without a FORTRAN compiler.

Understanding What Happens

Because exec.exe depends on libraries, such as libmaxtool*.a, in addition to files like toolcall.f or toolcall.c, the first thing the Makefile does is check the libraries to see if it should try to update them. To help the Makefile make this decision:

• if a Makefile exists in the src directory for the library, then it ensures that the library is up-to-date by changing to that directory and running gmake; or,

• it uses the master version of the library and assumes that it is up-to-date.

When you run Makeexec the first time, the Makefile in your ~/$PROMAX_HOME/port/src/exe/exec/ directory finds the Makefile in your ~/$PROMAX_HOME/port/src/lib/maxtool/, cd's to that directory, and does a gmake. Because you do not have any of the libmaxtool*.a or libmaxtool*.so libraries in your User version of ~/$PROMAX_HOME/, the Makefile in ~/$PROMAX_HOME/port/src/lib/maxtool/ will create these libraries for you. This is the time-consuming part of running Makeexec the first time: the libmaxtool* libraries are big and you do not have them yet.

First-time compilation

Unlike the Master version of $PROMAX_HOME/port/src/lib/maxtool/, your User version does not (yet) contain any source code. Therefore, the Makefile simply copies the libraries $PROMAX_HOME/[machtype]/lib/libmaxtool* over into your ~/$PROMAX_HOME/[machtype]/lib/ directory. Note that "machtype" varies, depending on which platform you are making on. It is rs6000 on the IBM, sgimips or sgimips4 on the Silicon Graphics, crayymp on the Cray, and solaris on systems running Solaris 2.x.

After your own libmaxtool* libraries have been built, your toolcall.f or toolcall.c will be created and compiled and your exec.exe will be linked. The toolcall is created by the script $PROMAX_HOME/port/bin/buildtoolcall. This script looks at the file $PROMAX_HOME/port/src/exe/exec/tools.db, then deletes any entries found in ~/$PROMAX_HOME/port/src/exe/exec/tools_to_delete, then adds any tools found in ~/$PROMAX_HOME/port/src/exe/exec/tools_to_add. Based on the arguments to Makeexec or gmake, the buildtoolcall script will then build a ~/$PROMAX_HOME/port/src/exe/exec/toolcall.f FORTRAN file (default), or a ~/$PROMAX_HOME/port/src/exe/exec/toolcall.c C file if buildtoolcall is invoked from gmake or Makeexec with language=C. The new toolcall source file will then be compiled and linked into a working ~/$PROMAX_HOME/[machtype]/exe/exec.exe executable.

Nth-time compilation

You can run Makeexec from anywhere within your ~/$PROMAX_HOME/port/src/ hierarchy. In particular, you can run it inside your ~/$PROMAX_HOME/port/src/lib/maxtool/xxx/ directory, where you have just finished modifying the source code for your latest and greatest new tool. For new exec tools, you must specify the source files for each new tool in the newsrcs variable in your ~/$PROMAX_HOME/port/src/lib/maxtool/Makefile.


Converting to the New System

Normally, it is sufficient to copy source files and Makefiles from your previous user development tree into your new user tree to build them under the new system. You will probably want to re-examine your Makefile(s), if any, for local dependencies. Landmark has created several Makefile examples in $PROMAX_HOME/port/include/make to assist in creating Makefiles for new tools or reworking older Makefiles for improved compatibility.


Understanding the Directory Structure

Before dealing with the details of the development system, it is essential to have a basic understanding of the directory structure in which it functions. A few of the files and directories of interest in the development system include:

• $PROMAX_HOME/port/include/make/
• $PROMAX_HOME/port/bin/
• $PROMAX_HOME/sys/bin/
• $PROMAX_HOME/port/src/exe/exec/

$PROMAX_HOME/port/include/make/

This directory contains the makefile header files which are used to set up for the various possible configurations. It also contains sample makefiles for a library or for stand-alone executables. The following list summarizes these files.

master.make: The file responsible for setting up the basic variable structures representing the client's directory structure. It can be included instead of advance.make if all you need from your makefile include are variables containing paths to advance things; in this respect, it serves as a lightweight header include.

advance.make: This header file includes master.make and sets up anything which is constant among all platforms, such as canned command sequences. It also includes $machtype.make to import its platform-dependent information.

$machtype.make: The header file containing the platform-dependent information, such as preprocessing, compiling, and link commands. As this varies dramatically from system to system, Landmark provides some reasonable defaults. Even so, clients installing ProMAX will probably need to modify this file to reflect their particular system setup.

simple.make: A simple makefile used to compile one or more executables into the current directory.

compile.test: A simple makefile used in conjunction with $PROMAX_HOME/port/bin/ctest which allows you to compile source files into executables.

libmaxtool.example: An example user makefile for the maxtool[1-3] libraries. This file is used by Makeadvance to create a user environment.

exec.example: An example makefile for the executive. This file is used by Makeadvance to create a user environment.

stand_alone1.example: An example makefile for a stand-alone module. This file is used by Makeadvance to create a user environment.

stand_alone2.example: An example makefile for a stand-alone module utilizing the maxprog.make template.

maxprog.make: A template or recipe for building an ordinary stand-alone executable. maxprog.make is merely a series of includes of other, smaller-granularity actions. Thus, if maxprog.make does not work well for a large number of applications, it is possible to create a new template without having to rewrite each part.

maxprog++.make: A template, similar to maxprog.make, which links via the C++ compiler.

Makefile_example_*: Sample makefiles for creating new modules under the bin, exe, and lib user port/src development subdirectories.

The following examples use files from this directory.

Example: simple.make

This makefile example uses simple.make to compile into the current directory.

1. cd $HOME

2. mkdir simpletest

3. cd simpletest

4. Edit Makefile to look like:

      srcs1 := a.c b.c
      srcs2 := c.c d.f
      exec1 := abtest
      exec2 := cdtest
      include simple.make

5. Edit a.c to look like:

      #include <stdio.h>
      a() {printf("a()\n");}

6. Edit b.c to look like:

      #include <stdio.h>
      b() {a(); printf("b()\n");}
      main() {b();}

7. Edit c.c to look like:

      #include <stdio.h>
      c_() {printf("c()\n");}

8. Edit d.f to look like:

      SUBROUTINE D
      CALL C
      WRITE(*,*) 'D()'
      RETURN
      END
      PROGRAM D
      CALL D
      END

9. Type gmake. The executables abtest and cdtest should be compiled in the current directory.

Example: compile.test

This makefile example uses compile.test.

1. cd $HOME

2. mkdir compiletest

3. cd compiletest

4. Edit a.c to contain:

      #include <stdio.h>
      main() {printf("Hello World From a.\n");}

5. Edit b.f to contain:

      PROGRAM MAIN
      WRITE(*,*) 'HELLO WORLD FROM MAIN()'
      END

6. ctest a b should compile executables a and b corresponding to your source code a.c and b.f.


Example: maxprog.make

For this makefile example, maxprog.make might look like:

      include advance.make
      include exe_setup.make
      include exe_top.make
      include depend.make
      include exe_link.make
      include exe_f77_compile.make
      include exe_ansic_compile.make
      include exe_cxx_compile.make
      include exe_clean.make
      include phony.make

You might decide that you need another variant of maxprog.make, just as maxprog++.make is a variant; for example, a version that operates just as maxprog.make does, except that the C source code is compiled via the K&R C compiler by default. In this case, all you have to do is copy maxprog.make to krc_maxprog.make and substitute "include exe_krc_compile.make" for "include exe_ansic_compile.make".
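As a sketch only, following the substitution just described (exe_krc_compile.make replacing exe_ansic_compile.make; everything else unchanged from the maxprog.make listing above), krc_maxprog.make would then read:

      include advance.make
      include exe_setup.make
      include exe_top.make
      include depend.make
      include exe_link.make
      include exe_f77_compile.make
      include exe_krc_compile.make
      include exe_cxx_compile.make
      include exe_clean.make
      include phony.make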

$PROMAX_HOME/port/bin/

This directory contains portable scripts which can be run from the command line. The following list summarizes these scripts.

Auto_test: A script which automates quality control testing on various platforms.

Makeadvance: A script which sets up a user development environment.

Makeexec: A script which creates a user version of the executive.

Setqueues: A script which allows users to toggle between NQS and LPD queues.

adoc: A script which allows users to read documentation.

aman: A script which allows users to get online man pages.

Promax: A script which is used to bring up the user interface.

makewhatis: A script which is used to make the whatis/windex for man pages. It must be executed from the man directory.

checkin2: A script which is used to check in a previously checked out Landmark sccs file.

checkout2: A script which is used to check out a Landmark sccs file.

testFlows: Another script which is used for automated testing.

buildtoolcall: A script which is used to build toolcall source in FORTRAN or C. It is normally called from the Makefile but may also be invoked from the command line. buildtoolcall accepts -C as an optional first command line argument; this switch instructs buildtoolcall to build C code instead of the default FORTRAN.

app: A small script which preprocesses, adding $PROMAX_HOME/port/include and $PROMAX_HOME/port/include/private to your include paths.

pvmcleanup: A script which kills and cleans up after stray pvm daemons.

pvmdbx.sh: A script which is invoked in an xterm by the generic debugger script. It starts the debugger and waits when it exits to prevent the window from closing.

Makelink: A script which is used to create a link from a file in a user's directory to the corresponding file in the $PROMAX_HOME directory.

Nmgrep: A utility used for extracting information from object files.

Fid: A simple utility used to print out include dependencies in a file.

$PROMAX_HOME/sys/bin/

This directory contains non-portable scripts and programs which can be run from the command line. The following list describes them.

amakedepend: Landmark's version of makedepend. Generates dependencies for use with the Makefile system.

aviewer: Landmark's command-line interface to FrameViewer.

copycat: A program which runs every half hour, taking the tape catalog files and copying them to another directory. It ensures that there is always an uncorrupted version of the tape catalog database handy.

ctags: A program used to generate tag files for vi.

ctar: A program similar to UNIX tar, except that it understands the notion of secondary storage. It optionally uses the advance tape catalog.

emacs: An editor provided by the Free Software Foundation.

etags: A program used to generate tag files for emacs.

gmake: AKA GNU Make. GNU's version of the make utility.

ll: AKA lisp listener. A lisp interpreter implementing Landmark's lisp. Useful for rule writers wanting to check syntax.

lmdown: A flexlm program for taking an lmgrd daemon down.

lmgrd: The flexlm licensing daemon.

lmreread: A program causing the lmgrd daemon to reread a license.dat file.

lmstat: A program which outputs statistics pertinent to licensing.

nmapmgr: An NQS tool for building and maintaining the machine ID database.

nqs_promax: The ProMAX accounting daemon, which may also be used as an alternative to rsh (if rsh is not available).

perl: Perl version 5.000, the Practical Extraction and Reporting Language; a program used to interpret Perl scripts. Many of the scripts, including the installation script, are written in Perl. Perl provides powerful text manipulation mechanisms and is also strong in the area of process management.

pixmap: A program used to build pixmaps. Useful if you are using the agX libraries.

pkt_dump: A program used to display a packet file to standard out.

promax: Landmark's 2D GUI.

promax3d: Landmark's 3D GUI.

promaxvsp: Landmark's VSP GUI.

ptt: A program used to generate an ascii version of packet.job, which can then be read into an editor and modified. ttp can then be called to translate the ascii file back into a packet.job.

pwin: A menu interpreter/editor allowing menus to be created interactively.

qdel, qdev, qjob, qlimit, qmgr, qpr, qstat, qsup: NQS routines (see the man pages).

tamsh: A tcl/motif interpreter/shell. This allows the creation of tcl scripts which contain motif widgets. It is a powerful homegrown version based upon Jan Newmarch's TclMotif.

tcat: A program used to manage the tape catalog system.

tcatd: A daemon which performs transactions on a tape catalog.

tclsh: A tcl interpreter/shell.

tkperl: Perl 5.000 with the Tk and ReadLine extensions. This version of perl allows users to create scripts which have graphical user interfaces.

ttp: A program used to translate a ptt ascii file back into a packet.job.

$PROMAX_HOME/port/src/exe/exec/

This directory contains the executive's makefile as well as listings of base tools, tools to add, and tools to delete. The following list summarizes this directory's contents.

Makefile: The makefile for the executive.

tools.db: A base listing of the inline tools which will be incorporated into the executive.

tools_to_add: A listing of inline tools to add to the executive. Users may, and should, have a personalized version of this file.

tools_to_delete: A listing of inline tools to delete from the executive. Users may streamline their personal version of the executive with this file.
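For example, a personalized tools_to_add would contain one line per added tool, in the same three-field form used for the AMP_RATIO example in the Adding a New Tool and Making Your New Executable sections later in this chapter (the tool name below is hypothetical):

      MY_TOOL same simple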


Customizing the System

Much of the system can be customized via environmental variables; that is, the path to most files and directories can be re-specified. The name of the environmental variable is always the path name, with the slashes replaced by underscores, every character promoted to upper case, and _HOME appended. Note that the rules are the same regardless of whether the file is an ordinary file or a directory (even though intuition might suggest that the environmental variable should end in _PATH for ordinary files, instead of _HOME). Some of the important ProMAX environmental variables are:

• PROMAX_HOME
• PROMAX_DATA_HOME
• PROMAX_PORT_MENU_HOME
• PROMAX_PORT_MENU_PROCESSES_HOME
• PROMAX_SYS_EXE_HOME
• PROMAX_ETC_CONFIG_FILE_HOME
• PROMAX_QUEUES_HOME
• PROMAX_ETC_PVMHOSTS_HOME
• PROMAX_SCRATCH_HOME
• PROMAX_SCRATCHX1_HOME
• PROMAX_SCRATCHX2_HOME
• ...
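As a minimal sketch of how such a variable is used (the directory below is a hypothetical site path, not a ProMAX default), redirecting the scratch area from a ksh-style login file might look like:

      export PROMAX_SCRATCH_HOME="/disk3/promax_scratch"

or, for csh/tcsh users:

      setenv PROMAX_SCRATCH_HOME "/disk3/promax_scratch"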


Toggling Products: .promax

Parts of the system can also be customized via the .promax file in the user's home directory. This method allows customization by product and facilitates toggling between products without exiting the Flow Builder. The Flow Builder will build a .promax file in your home directory the first time you exit the User Interface.

For the programmer, the .promax file is the best way to control the development environment. A search path mechanism—for executables, menus, and help files—directs the flow builder to use the files in your personal directories before searching in the standard installation or other directories. In this way, you can maintain your own parallel ProMAX system.

To override the standard paths, the ~/.promax file must contain a stanza of the following form:

      (:product
        ("P" "ProMAX 2D" "entry3" "entry4" "entry5" "entry6" "entry7" "entry8" t)
        ("p" "Prospector" "entry3" "entry4" "entry5" "entry6" "entry7" "entry8" t)
        ("3" "ProMAX 3D" "entry3" "entry4" "entry5" "entry6" "entry7" "entry8" t)
      )

Entry3 is the absolute or relative pathname to the directory containing the executable files for the product. If specified as a relative pathname, it will be appended to $PROMAX_SYS_EXE_HOME. The executable directory is common to all Landmark products; there are no product-specific subdirectories. The default entry would be blank, directing the flow builder to the standard ProMAX /exe subdirectory typically defined by the environmental variable $PROMAX_SYS_EXE_HOME. For your development environment, the following entry may be preferred:

      "~/[$PROMAX_HOME]/rs6000/exe/:[$PROMAX_HOME]/rs6000/exe"

This entry will cause a search of your exe directory for exec.exe (or other executables) first and the system exe directory next. For this specific case you could use:

This entry will cause a search of your exe directory for exec.exe (or other executables) and the system exe directory next. For this specific case you could use: "~/[$PROMAX_HOME]/rs6000/exe/:."

The “.” following or preceding the colon denotes the system default as specified by environmental variables.


The preceding entry assumes that you are working on an IBM RS6000 machine. If you are developing code across multiple hardware platforms, a more general entry is

      "~/[$PROMAX_HOME]/`[$PROMAX_HOME]/port/bin/Machtype`/exe:."

which allows the return of the machine type (solaris, rs6000, etc.) into the stanza so that you do not need to worry about which machine you are working on; therefore, the correct executable will be used.

Entry4 is the absolute or relative pathname to the directory containing the menu files. If specified as a relative pathname, it will be appended to $PROMAX_PORT_MENU_HOME. The menus for each product are divided into subdirectories under the main directory defined by $PROMAX_PORT_MENU_HOME. For example, a default entry for the ProMAX 2D product would be "promax", directing the flow builder to $PROMAX_PORT_MENU_HOME/promax for menus. For your development environment, the following entry may be preferred:

      "~/[$PROMAX_HOME]/port/menu/promax/:[$PROMAX_HOME]/port/menu/promax"

or "~/[$PROMAX_HOME]/port/menu/promax/:promax"

This entry will cause a search of your menu directory for custom menus and the system menu directory next.

Entry5 is the absolute or relative pathname to the Processes file. The Processes list is different for each product; by default the Processes file resides with the product menus. This entry generally will be the same as the menus entry with /Processes appended. A search mechanism similar to that outlined for menus (Entry4) can be employed, except that the path must include the Processes file name, as this is a single file instead of a group of menus or helps. For example:

      "~/[$PROMAX_HOME]/port/menu/promax/Processes:promax/Processes"

Entry6 is the absolute or relative pathname to the directory containing the help files. If specified as a relative pathname, it will be appended to $PROMAX_PORT_HELP_HOME. The helps for each product are divided into subdirectories under the main directory defined by $PROMAX_PORT_HELP_HOME. For example, a default entry for the ProMAX 2D product would be "promax", directing the flow builder to $PROMAX_PORT_HELP_HOME/promax for help files. For your development environment, the following entry may be preferred:

      "~/[$PROMAX_HOME]/port/help/promax/:promax"

This entry will cause a search of your help directory for custom helps and the system help directory next.

Entry7 is the absolute or relative pathname to the directory containing the misc files. If specified as a relative pathname, it will be appended to $PROMAX_PORT_MISC_HOME. As with the executable directory, there is no product-level subdivision of misc files. The misc/ level files control configuration aspects such as color tables and are rarely redirected. However, if desired, search path control is done in the same manner as that for the executables (Entry3, above).

Entry8 is the absolute or relative pathname to the directory containing the primary data storage files. If specified as a relative pathname, it will be appended to $PROMAX_DATA_HOME. The data directory is also not divided by products, nor is a search path supported in the .promax file.

The following examples demonstrate product stanzas in the .promax file:

Standard:

      (:product
        ("P" "ProMAX" "" "promax" "promax/Processes" "promax" "" "" t)
        ("v" "ProMAXVSP" "" "promaxvsp" "promaxvsp/Processes" "promaxvsp" "" "" t)
      )

Custom:

      (:product
        ("P" "ProMAX" "/disk2/exe" "promax" "promax/Process" "promax" "" "" t)
        ("p" "Prospector" "" "prospector" "/home/joe/processes" "prospector" "" "/advance/mydata" t)
      )

Please refer to the ProMAX Reference Manual for further discussion of the .promax file.


Adding a New Tool

In ProMAX, the menu file controls what parameters are presented to the user. The help file provides textual/graphical help during parameterization. The initialization routine checks the input parameters for validity and generally tries to accomplish any tasks that are performed on a one-time basis. The execution routine performs the actual trace processing. The include file provides communication between the initialization routine and the execution routines, and facilitates re-entrancy.

Nine generic steps are required to add a new processing tool to the system and see the results. In typical chronological order they are:

• Creating or copying a source file that contains an initialization phase and an execution phase.

• Creating or copying an include file (if using Fortran).

• Creating a modified version of tools_to_add for new tools.

• Compiling and linking the source code via Makeexec.

• Creating or copying a new menu.

• Adding an entry to the Processes file.

• Restarting the flow builder.

• Parameterizing a flow, including the new tool and the Alternate Executive directive.

• Executing the flow.

The following steps outline the process for adding a new tool, using the amp_ratio tool as an example.

1. Build a parallel directory tree within your home directory.

      % cd                             (go to your $HOME directory)
      % Makeadvance                    (builds entire Landmark directory tree beneath $HOME)
      % chmod -R a+rw ~/$PROMAX_HOME   (give read/write permissions to all files)

2. Copy needed source code into directories.

      % cd ~/$PROMAX_HOME/port/src/lib/maxtool
      % mkdir amp_ratio                (creates a subdirectory for new 'amp_ratio' tool)
      % cd amp_ratio
      % cp $PROMAX_HOME/port/src/lib/maxtool/amp_ratio/amp_ratio1.f .
      % cp $PROMAX_HOME/port/src/lib/maxtool/amp_ratio/amp_ratio1.inc .
      % cd ~/$PROMAX_HOME/port/menu/promax
      % cp $PROMAX_HOME/port/src/lib/maxtool/amp_ratio/amp_ratio1.menu .

3. Add new tool to Makefile.

      % cd ~/$PROMAX_HOME/port/src/lib/maxtool
      % [edit] Makefile
        newsrcs := amp_ratio/amp_ratio1.f    (path is relative to the Makefile subdirectory)
        [exit editor, saving Makefile]

4. Update toolcall source (toolcall.f or toolcall.c).

      % cd ~/$PROMAX_HOME/port/src/exe/exec
      % more tools.db                  (toolcall database; just look, don't touch)
      % [edit] tools_to_add
        AMP_RATIO same simple
        [save file]

5. Create a new executable (exec.exe).

      % cd ~/$PROMAX_HOME/port/src
      % Makeexec                       (builds toolcall.f and compiles it)

   or

      % Makeexec language=C            (builds toolcall.c and compiles it)
      % cd ~/$PROMAX_HOME/rs6000/exe
      % ls -l                          (a file called 'exec.exe' should now exist)

6. Add the new tool to your Processes list. (Be sure to configure your .promax file as shown earlier.)

      % cd ~/$PROMAX_HOME/port/menu/promax
      % cp $PROMAX_HOME/port/menu/promax/Processes .
      % [edit] Processes
        under the "Amplitude" section add the entry:
        ("Amp ratio" "amp_ratio1")
        [save file]

7. Run the new tool within ProMAX.

      % promax                         (start the ProMAX User Interface)
      Add "Amp Ratio" to a flow
      Execute the flow
      Minimize the ProMAX User Interface window.

8. Iteratively make changes to the source code.

      % [edit] ~/$PROMAX_HOME/port/src/lib/maxtool/amp_ratio/amp_ratio1.f
      % Makeexec
      [Try it out in ProMAX]


Making Your New Executable

After you add new functionality, you must be sure that you can make the exec. The following AMP_RATIO example demonstrates the steps required to accomplish this:

1. Move to the tools subdirectory in your advance/ directory tree:

      % cd ~/$PROMAX_HOME/port/src/lib/maxtool
      % mkdir amp_ratio
      % cd amp_ratio
      % cp $PROMAX_HOME/port/src/lib/maxtool/amp_ratio/amp_ratio1.f .
      % cp $PROMAX_HOME/port/src/lib/maxtool/amp_ratio/amp_ratio1.inc .
      % chmod +rw *

2. Add an entry to your ~/$PROMAX_HOME/port/src/exe/exec/tools_to_add file (this file is used to make ~/$PROMAX_HOME/port/src/exe/exec/toolcall.f or toolcall.c, which contains the glue between the Executive and the individual tools):

      % [edit] ~/$PROMAX_HOME/port/src/exe/exec/tools_to_add

   AMP_RATIO is a simple tool (see the Tool Types chapter). Search for AGC and add a similar entry for AMP_RATIO so that the file looks like this:

      AMP_RATIO same simple

   Note that the source code file name, amp_ratio1.f, and the subroutine name, amp_ratio, may be different. Be sure the name listed in the tools_to_add file truly reflects the name of the init and exec phases.

3. Add an entry to the tools Makefile:

      % [edit] ~/$PROMAX_HOME/port/src/lib/maxtool/Makefile

   Search for newsrcs and add another line listing the new .f file so that the file looks like this (do not forget the backslash continuation mark):

      newsrcs := \
          amp_ratio/amp_ratio1.f

4. Remake the exec:

      % Makeexec                  (build and compile a toolcall.f)

   or:

      % Makeexec language=C       (build and compile a toolcall.c)

   or, directly using gmake:

      % cd ~/$PROMAX_HOME/port/src/exe/exec
      % gmake [language=C]        (build and compile toolcall.f or .c)

Step 4 should produce the file exec.exe in the directory ~/$PROMAX_HOME/[machtype]/exe/, where [machtype] reflects the type of machine that you are working on (such as sgimips4 or solaris).

The Details

The following things happened when you typed Makeexec:

1. The Makeexec script cd'd to your ~/$PROMAX_HOME/port/src/exe/exec directory and spawned the Makefile there with the same arguments that you gave Makeexec.

2. ~/$PROMAX_HOME/port/src/exe/exec/Makefile included the master version of this Makefile to be executed. This ensures that your local Makefile always reflects the changes made to the master version $PROMAX_HOME/port/src/exe/exec/Makefile.

3. $PROMAX_HOME/port/src/exe/exec/Makefile decides that this is a User make. It checks the following directories for existence and whether or not a Makefile is present in each of:

   • ~/$PROMAX_HOME/port/src/lib/maxtool
   • ~/$PROMAX_HOME/port/src/lib/maxutil
   • ~/$PROMAX_HOME/port/src/lib/maxexec
   • ~/$PROMAX_HOME/port/src/lib/uiutils
   • ~/$PROMAX_HOME/port/src/lib/agfc

   If a Makefile is present, the makefile ensures that the library is up to date by cd'ing to the directory and spawning the Makefile with the same arguments that were given to Makeexec. If no Makefile is found, it is assumed that you will link with the client or master version of the library.

4. After ensuring/assuming that the libraries are up to date, $PROMAX_HOME/port/src/exe/exec/Makefile respawns itself so it can see if the exec.exe really needs to be remade. Depending on the answer, it will either report that exec.exe is up to date or re-link a new exec.exe in the ~/$PROMAX_HOME/[machtype]/exe/ directory.


Incorporating New Functionality

By following the preceding steps, you successfully made a new version of the exec and added new functionality. However, you might be disappointed to find that the ProMAX Flow Builder is still unaware of your achievement. We will return to this example after a discussion about customizing ProMAX menus.

In the Executive section of the System Overview chapter, we discussed how each tool must have an initialization and an execution routine. A menu file must also exist if you hope to do more than admire the new executable. Menu files are used to specify what parameters should exist, what their properties are (such as their type, default value, and description), and how they interrelate. Without a menu file, the flow builder has no idea how to capture and output the parameters that will make your new processing tool operate.

The flow builder reads and interprets the menu files at runtime (they are not compiled). However, before the flow builder can know which menu files to read, it must read a list of available menus in what is known as the Processes file. The Processes file is product specific and is located in $PROMAX_HOME/port/menu/[product]/. You will need a custom copy of the Processes file in order to reference your new tools without altering the production system and to avoid interfering with other developers.


Creating Menus

The User Interface looks at the setting of the environment variable new_menu to determine the menu initialization behavior. If new_menu=t, then the User Interface will re-initialize the menu every time it is displayed. If new_menu=f, then the flow builder will only read files once. This means that if you modify the Processes file or a menu file after starting the flow builder, it will not see your changes. Furthermore, if you add a tool to a flow, the corresponding menu file will be saved with the flow, and the menu file will not be read again when the flow is accessed again. Setting new_menu=t is a big help in testing menus and should be part of the ProMAX programmer's environment. This setting slows down the User Interface because the menu files are reread; therefore, it is not the default setting for the typical production processing environment.

The menu file contains the description of parameters and rules that control what the User Interface presents. Menu files are written in Lisp and are interpreted at run time (no compilation is necessary). No Lisp programming experience is necessary, however. The syntax is simple and there are abundant examples in the directory $PROMAX_HOME/port/menu (see files ending with the extension .menu). In most cases, it is easiest to find existing parameters in the system that are similar to those that you wish to use, and then cut and paste them into a new file. While you are creating the new menu file, it is a good idea to use an editor that has parentheses checking, since mismatched parentheses are the most common source of errors.

A utility program called promenu exists for debugging menu files. After the menu file is near completion, use promenu to adjust cosmetic features and to test the menu rules (if they exist in your menu file). The program promenu takes the menu file as a command line argument; therefore, if your menu file is named testtool.menu, the command would be:

      % $PROMAX_HOME/sys/bin/promenu testtool.menu

Each time you change the menu file, use File->Reread to reflect any changes made to the menu file in the editor.
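As a reminder of the new_menu setting described above, a ProMAX programmer's login file would typically enable it; following the shell conventions used earlier in this chapter, that might look like:

      setenv new_menu t        (csh/tcsh)
      export new_menu=t        (ksh/sh/zsh)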


Adding a ProMAX Menu

To use a tool, such as the simple tool AMP_RATIO (see the chapter entitled Tool Types), you must do two things: 1) make the flow builder aware of the menu file, and 2) direct execution to the new exec.exe that you made. You can accomplish this with the following steps:

1. Exit the flow builder if it is already running.

2. Copy the production Processes file into your development tree and make it writable. The commands will look similar to this:

      % cd
      % cp $PROMAX_HOME/port/menu/promax/Processes ~/$PROMAX_HOME/port/menu/promax/Processes
      % chmod +w ~/$PROMAX_HOME/port/menu/promax/Processes

3. Add an entry to your Processes file:

      % [edit] ~/advance/port/menu/promax/Processes

   Search for "agc" and add another line for AMP_RATIO:

      ("Automatic Gain Control" "agc")
      ("Amplitude Ratio" "/home/joe/prog_course/src/amp_ratio/amp_ratio")
      ("Time-Variant Scaling" "tvs")

   Note that your entry is different; it includes the full path to the menu file within your home directory. In the absence of a full path, the flow builder assumes that the file resides in the production menu directory. Also note that the .menu extension of the file name is implied.

   Alternatively, you can add your new menu without an explicit path and use the search path facility in the .promax file. For instance, you can direct the menu search to check your personal menu directory first and the standard menu directory next. In this case, the amp_ratio.menu file must be copied to your menu directory.

4. Redirect the flow builder to your custom version of the Processes file. For future benefit, it is probably best to edit the file ~/.promax and add a product stanza as discussed in the Toggling Products section of this chapter.

5. Restart the flow builder (that is, restart ProMAX).

6. Create a new flow and add Disk Data Input, followed by Amplitude Ratio and Screen Display. If Amplitude Ratio does not show up at all in the list of tools, step 4 failed. (Take a close look at your .promax file, and look at the file $PROMAX_HOME/port/misc/misc_files/sys_ad/promax_dev.)

7. Set the parameters for the individual steps in the flow. If you are using the tutorial datasets, "Raw shots with geometry" is a good input dataset. If you get an error message when you attempt to parameterize Amplitude Ratio, you have a typo in your Processes file. (Take a close look at the path that you specified. Note that you cannot use ~/prog_course/src/amp_ratio/amp_ratio as the path.) The defaults for "Amplitude Ratio" and "Screen Display" are fine.

   You must also add a directive in your flow to alert the flow builder to use your new exec.exe instead of the standard, production version. This can be done by adding the tool "Alternate Executive" anywhere in the flow. Parameterization is simply entering an explicit path to exec.exe, typically:

      ~/promax/1998.6/[rs6000,sgi,or...]/exe/exec.exe

   Again, the best approach may be to direct the flow builder to search your exe/ subdirectory first via the .promax file.

8. Run the flow. You should see fairly dead-looking traces, with a large peak where the first breaks once were. AMP_RATIO transforms a trace by taking the ratio of the amplitudes in two sliding gates (lower gate / upper gate). When the lower gate is in the first break energy and the upper gate is in the noise, the result is a large peak (this is the beginning of a crude first break picker).

Please be sure to complete this exercise before moving on, so that any idiosyncrasies in your release, installation, and make environment are resolved.


Changing Files

Several things must be done each time a new tool is added to the ProMAX system. The following illustrations show files within the ProMAX directory tree that need to be changed when a new tool is added. The files that are changed frequently during the development of a new tool are circled with a solid line; files that are changed infrequently are circled with a dotted line. Other files are shown, but if they are not circled you do not change them.

For example, the source code for an inline tool is found in the directory path ~/$PROMAX_HOME/port/src/lib/maxtool/your_tool, and the .c and/or .f files found in this directory are changed frequently during development. The .promax file found in the home directory is changed infrequently.

The first illustration shows files that need to be changed or added when an inline tool (linked to the trace executive) is added. The second illustration shows files that must be changed or added when a stand-alone or IPC (socket) tool is added to the system. These charts have proven to be very useful as a reminder of the file changes that are needed for a new tool.


[Illustration: Files to be changed for an inline tool. The labels in the original diagram include: your advance directory (~/$PROMAX_HOME), .promax, Makefile, port/src/lib/maxtool/your_tool (.c, .f), port/src/exe/exec, tools_to_add, port/menu/promax (Processes, your_menu.menu), rs6000/exe (*.exe), and rs6000/lib (system libraries, libmaxtool*.a). Frequently changed files are marked with a solid line; infrequently changed files with a dotted line.]


[Illustration: Files to be changed for a stand-alone or IPC tool. The labels in the original diagram include: ~/$PROMAX_HOME, .promax, Makefile, port/src/your_stand_alone (.c, .f), port/menu/promax (Processes, your_menu.menu), rs6000/exe (*.exe), and rs6000/lib (system libraries, libmaxutil*.a). Frequently changed files are marked with a solid line; infrequently changed files with a dotted line.]


Understanding the Makefile System

This section describes the Makefile system.

C++ Template Instantiation

One of the more difficult aspects of the Makefile system is C++ template instantiation, due to the differences in the way it is handled on the various platforms. Basically, there are two different types of template instantiation among the platforms Landmark currently supports.




Manual Template Instantiation - With the AIX xlC compiler, ProMAX employs manually instantiated templates. That is, if you want to use a template, you must instantiate it yourself. This is pretty simple from the Makefile perspective because the complexity is transferred to the code, whose responsibility is to instantiate every template it expects to use.



Automatic Template Instantiation - With the Solaris and IRIX C++ compilers, ProMAX uses automatic template instantiation. While relieving programmer burden, this complicates the Makefile. These systems work via a template repository, which is the location of the instantiated templates. The repositories are normally selected by the makefile system. You can, however, define these repositories verbatim in your makefiles by setting the variable CXXTEMPLATES to whatever repository or repositories your application needs. For further information on how templates are dealt with, read the entries on the various variables associated with the use of C++.
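For example, one plausible way to pin the repository from the gmake command line is shown below. This is only a sketch: the directory is hypothetical, and the exact form CXXTEMPLATES expects (with or without the compiler's -ptr prefix) may differ on your platform; see the CXXTEMPLATES and CXX_LINK entries later in this section.

    % gmake CXXTEMPLATES:="-ptr/home/joe/promax/templates"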


Terms and Variable Descriptions

There are a number of variables of which the Makefile writer should be aware. The following tables describe the general terms and makefile conventions.

Terms

Canned Command Sequence: A canned command sequence can be thought of as a predefined set of commands. It is actually a recursively expanded variable which can span many lines. As a result, automatic variables such as $@, $^, and so forth expand to fit the context in which they are used. For the sake of brevity, Canned Command Sequences are referred to as CCSs throughout this document.

Rule: A set of targets, dependencies, and commands which describe how to create the target(s). The make will look up and evaluate the rules for each of the dependencies before evaluating the commands on which the target(s) depend. For a more detailed explanation, refer to the GNU Make document by Richard M. Stallman and Roland McGrath. Example:

    foo : bar1.o bar2.o
            $(CC) $^ -o $@

Variable: A location in memory which contains information that may be used by rules, CCSs, or other variables. In general, the use of variables will increase the portability of your makefiles. Variables may be set from the command line via calls to the make utility, for example: gmake var1:=foo var2=var. Refer to the GNU Make document for further information.

Boolean Variable: A variable which may contain the string "yes" or "no". It is used to determine exactly how the user wants certain things to be made.

Command Variable: A variable which contains a command. An example is $(AR), which might be /bin/ar on one platform and /usr/bin/ar on another. $(AR) can thus be used without regard to the platform on which the make is being run.

Library Variable: A variable which contains the path to a library. The path may vary depending on the context of the make.

Directory Variable: A variable which contains a path to a directory.

Makefile Variable: A variable which contains a path to a makefile.

CCS Rule Variable: A canned command sequence which assumes that it is called in a rule. As a result, it may assume that the automatic variables which are set in rules (for example, $@ and $^) are set.

The following notation conventions are used when describing variables:

4.  Anytime you see something like FOO -> BAR, this means that BAR inherits FOO. Or rather, BAR contains FOO. Thus, anything which is set in FOO will also be set in BAR.

5.  Anytime you see something like (FOO1, FOO2) -> BAR, this will mean that BAR inherits from both FOO1 and FOO2.

6.  Anytime you see something like FOO1 -> FOO2 -> BAR, this will mean that BAR inherits from FOO2 and that FOO2 inherits from FOO1. This construction is similar to the statement (FOO1, FOO2) -> BAR, with the difference that we now know a little more about the hierarchy of the variables, in that we also know the relationship between FOO1 and FOO2. We may say that BAR directly inherits from FOO2, FOO2 directly inherits from FOO1, and thus BAR indirectly inherits from FOO1.

7.  Variables whose names are prefixed with either a, m, or nothing at all are denoted with something like [a|m|]. These types of variables are paths to files and directories in the Makefile system. For example, suppose we have three variables, representing three directories: afoodir, mfoodir, and foodir. Each variable is so similar in function that we document them as one variable, which would appear as [a|m|]foodir. This should look familiar to those familiar with regular expressions. In addition, the prefixes "a" and "m" have special significance. When discussing variables of this nature, we will sometimes refer to $path. For variables preceded with "a", $path has the same value as the shell environment variable $PROMAX_HOME. For variables preceded with "m", $path has the same value as $mtopdir. For variables lacking the "m" or "a" prefix, $path is either $mtopdir or $utopdir, depending on the context of the make.

    •  Variables preceded with an "a" signify that they are Landmark's version of this variable. Since these types of variables contain paths to directories or files, their contents represent Landmark's version of this directory or file. Clients of Landmark should not concern themselves with these types of variables, as they are internal to Landmark's use of the Makefile system.

    •  Variables preceded with an "m" signify the master version (or client version) of this variable. Their contents represent the master version of this directory or file.

    •  Variables not preceded with either an "m" or an "a" are evaluated at the time the make is run and are interpreted by the context of the make. In a master make context, the variable defaults to the master version; in a user make context (that is, when someone is performing a make in his home directory), these variables default to the version of the file or directory in the user's home directory version of the advance tree.

8.  Variables whose names are prefixed with KRC, C, F, CXX, AMD, Y, or LEX, or nothing at all, are denoted with something like [KRC|C|F|CXX|AMD|Y|LEX|] prefixed before the basename of the variable. The version of the variable which does not have a prefix is said to be the base for all the rest of the variables. The only exception to this rule are the [a|m|] variables described in the previous step. Thus, given [KRC|C|F|CXX|AMD|Y|LEX|]foo, we can assume:

        foo -> (KRCfoo, Cfoo, Ffoo, CXXfoo, AMDfoo, Yfoo, LEXfoo)

    or rather, KRCfoo, Cfoo, Ffoo, CXXfoo, AMDfoo, Yfoo, and LEXfoo all inherit from the base variable foo. In addition, the prefixes themselves have meaning. Anything preceded with KRC will apply to the K&R C compiler.


The following table describes each of the prefixes with their meaning.

Variable Prefixes

KRC: These variables apply to K&R C preprocessing, compiling, and linking options.

C: These variables apply to ANSI C preprocessing, compiling, and linking options.

F: These variables apply to Fortran 77 preprocessing, compiling, and linking options.

CXX: These variables apply to C++ preprocessing, compiling, and linking options.

Y: These variables apply to Yacc preprocessing and translation.

LEX: These variables apply to Lex preprocessing and translation.

AMD: These variables apply to the Amakedepend options. This is Landmark's special version of the unix makedepend utility.

9.  The inheritance notation can be combined with the regular expression notation to express several relationships more efficiently. For example, a statement such as [KRC|C|]foo -> [KRC|C|]bar can be used to express the relationships KRCfoo -> KRCbar, Cfoo -> Cbar, and foo -> bar. Also note that there are a couple of implicit relationships: bar -> KRCbar, bar -> Cbar, foo -> KRCfoo, foo -> Cfoo. The following example illustrates a more subtle issue:

        ([KRC|]foo1, [KRC|C|]foo2) -> [KRC|C|]bar

    This says:

        •  (KRCfoo1, KRCfoo2) -> KRCbar
        •  Cfoo2 -> Cbar (there is no Cfoo1)
        •  (foo1, foo2) -> bar

    We also know implicitly that:

        a)  foo1 -> KRCfoo1 (Cfoo1 does not exist)
        b)  foo2 -> (KRCfoo2, Cfoo2)
        c)  bar -> (KRCbar, Cbar)


Variables

The following table describes the important makefile variables.

Makefile Variables

debug (Boolean Variable): When set to yes, debug mode is turned on.

profile (Boolean Variable): When set to yes, profiling is activated.

quantify (Boolean Variable): When set to yes on a Solaris machine, the code will also be quantified. Quantify is a fancy profiler available from Pure Software.

purify (Boolean Variable): When set to yes on a Solaris machine, the code will be purified. Purify is another product from Pure Software, designed to detect memory leaks.

[KRC|C|CXX|F|AMD|]INCPATH (Variables): These variables hold the include path(s) for [KRC|C|CXX|F|AMD|] preprocessing. Note that we are using the regular expression format.

[KRC|C|CXX|F|]DEFINES (Variables): These variables hold the defines for [KRC|C|CXX|F|] preprocessing.

[KRC|C|CXX|F|AMD|]PPFLAGS (Variables): These variables hold the preprocessor flags for [KRC|C|CXX|F|AMD|] preprocessing.

[KRC|C|CXX|F|]OPTIONS (Variables): These variables hold options given to the [KRC|C|CXX|F|] programs.

[KRC|C|CXX|F|]OPTIMIZATION (Variables): These variables hold the optimization options given to the [KRC|C|CXX|F|] compilers. It can be argued that this should be part of the OPTIONS variable, but it was split apart for the convenience of efficiency-minded developers.


[KRC|C|CXX|F|AMD|Y|]FLAGS (Variables): These variables hold the flags given to the [KRC|C|CXX|F|AMD|Y|] programs. ([KRC|C|CXX|F|AMD|]OPTIONS, [KRC|C|CXX|F|AMD|]OPTIMIZATION) -> [KRC|C|CXX|F|AMD|]FLAGS.

[KRC|C|CXX|F|]LDFLAGS (Variables): These variables hold any flags passed down to the [KRC|C|CXX|F|] link phase, such as directories to search for libraries.

[KRC|C|CXX|F|]LIBS (Variables): These variables hold a list of libraries, specific to the compiler being used, which are to be linked when doing a [KRC|C|CXX|F|] link.

[KRC|C|CXX|F|]PRECOMPILE (Variables): These are prepended to a [KRC|C|CXX|F|] compilation. Purify and Quantify take advantage of these variables.

KRCC (Command Variable): The command which invokes the K&R C compiler.

CC (Command Variable): The command which invokes the ANSI C compiler.

CXX (Command Variable): The command which invokes the C++ compiler.

FC (Command Variable): The command which invokes the Fortran 77 compiler.

CPP (Command Variable): The command which invokes the C preprocessor. This is also used by the other compilers, with the exception of the Fortran compiler.

FPP (Command Variable): The command which invokes the Fortran preprocessor.

SYSLIBS (Command Line Augmentable Variable): A list of system libraries to be used to link executables (with the exception of C++ executables).

SYSXXLIBS (Command Line Augmentable Variable): A list of system libraries to be used to link C++ executables.


XLIBS (Variable): A list of X11 libraries with which to link.

XMLIBS (Variable): A list of Motif and X11 libraries with which to link.

cxxlink (Boolean Variable): When set to yes, indicates that a C++ executable is being created. As a result, certain things are set to their C++ defaults instead of their C defaults.

shared_libs (Boolean Variable): When set to yes, libraries and executables created will be shared.

lib_suf (Variable): Contains the library suffix. It is usually ".a".

master (Boolean Variable): A yes/no value which indicates whether or not a master make is occurring.

GET (Command Variable): The command which performs an SCCS get on a file or files.

ctopdir (Directory Variable): The client's top directory.

utopdir (Directory Variable): The user's top directory, usually $HOME/$PROMAX_HOME.

atopdir (Directory Variable): Landmark's top directory. atopdir should be the same as the $PROMAX_HOME environment variable.

topdir (Directory Variable): The effective top directory. In a master make context, it will be the same as mtopdir; in a user make context, it will be utopdir; in an advance master make context, it will be atopdir. For Landmark, mtopdir is the same as atopdir, a fact which causes a small amount of terminology confusion when dealing with clients.

loadmap (Boolean Variable): A yes/no value which determines whether or not a loadmap is generated; a loadmap traces which function came from which library.


mtopdir (Directory Variable): The master top directory. It is the same as ctopdir for clients. For Landmark, it is the same as atopdir (Landmark is its own client).

AR (Command Variable): The ar, or library archiver, command.

ARCHIVE_REPLACE_FLAGS (Command Flags Variable): The flags to be given to AR when an archive replace is to be done.

ARCHIVE_CREATE_FLAGS (Command Flags Variable): The flags to be given to AR when a library archive is to be created.

AS (Command Variable): The as command, also known as the assembler.

RANLIB (Command Variable): The ranlib command. On machines without the ranlib command, touch is used in its place.

RANLIB_TARGET (CCS Rule Variable): A CCS which ranlibs the library described in the target section of a make dependency.

CXXTEMPLATES (Variable; SunOS, Solaris, IRIX): May contain a list of template repositories. If CXXTEMPLATES is set, the makefile will not attempt to set the repositories; instead, it will rely on the user's definition of CXXTEMPLATES.


CXX_LINK (Command Variable): A command which will C++ link an executable. How this variable is handled depends on whether the compiler performs automatic template instantiation.

    Manual template instantiating compilers: On platforms which require manual instantiation of templates, this command is simply a call to the C++ compiler with CXXLDFLAGS thrown in. CXX_LINK -> (CXX, CXXLDFLAGS).

    Automatic template instantiating compilers: On auto-instantiating machines, the makefile system will use CXXTEMPLATES to set the template directory or directories. If CXXTEMPLATES is not set, it will attempt to determine what the template repositories should be. These repositories are determined via the following algorithm:

    1. Get the list of object dependencies.
    2. Create a list, uobjlist, which contains all objects in the dependency list whose base path points to the user's directory utopdir.
    3. Create a list, mobjlist, which contains all objects in the dependency list whose base path points to the master directory mtopdir.
    4. Create a list, aobjlist, which contains all objects in the dependency list whose base path points to the advance directory atopdir.
    5. If uobjlist contains one or more files, take the first object file, replace the filename.o portion with templates, and prepend -ptr to the string. Add this to the tail end of the CXX_LINK canned command sequence. Repeat this step with mobjlist and aobjlist (in that order). Thus, user directories get first crack at being the read/write template repository, then master directories, and advance directories get last crack. Also note that if you have object files coming from more than one location in your user directory, the object file listed first in the dependency list will be the one whose path is inherited by the repository. Thus, in cases such as this, list first the object files which reside in the location where you want the template repository.

    CXX_LINK -> (CXX, CXXPPFLAGS, CXXTEMPLATES, CXXLDFLAGS).

INSTANTIATE_CXX_TEMPLATES (CCS Rule Variable; SunOS, Solaris, IRIX): A CCS, valid on automatic template instantiating platforms, which ensures that all templates are instantiated. Basically, it attempts to link an executable with the library being used. In the process of attempting to create this executable (which fails due to lack of a main), the templates are instantiated.


COPY_CXX_TEMPLATES (CCS Rule Variable): A CCS which you should not have to worry about. It is called from within the make to copy all instantiated templates into a temporary repository which ARCHIVE_CXX_LIB[NS] recognize and archive into their respective libraries. This command works differently depending on the context of the make.

    Master context: In a master context, a temporary directory is created. Then, the read/write template directory's object files are copied into this temporary directory and renamed to Template1.o, Template2.o, and so forth. When the ARCHIVE_CXX_LIB[NS] CCSs are called, they will ensure that these template instantiations are archived into the library currently being built.

    User context: In a user context, a temporary directory is created. Then a list of template instantiation object files is created based on the user's read/write template repository, the master template repository, and the advance template repository. Duplicate names are thrown out and the result is saved to a new list named files. COPY_CXX_TEMPLATES then loops over each entry of this list and prepends a user, master, or advance path to the base filename. The user path is prepended if the user templates directory contains this object file. If it does not, the master template directory is scanned for it; if the master contains this template object file, the master template repository path is prepended to the base object filename. Otherwise, the advance template repository path is prepended to the base template object file. Thus we ensure that we have a singular list of template object files in which the user's directory overrides the master, which in turn overrides the advance definitions. At this point, the object files are copied into the temporary template directory under the names Template1.o, Template2.o, etc., and are archived into the user's personal library as described in the master context.

ARCHIVE_CXX_LIB (CCS Rule Variable): A CCS which will archive a shared version of a C++ library. On automatic template instantiating platforms, the template instantiations created via INSTANTIATE_CXX_TEMPLATES and COPY_CXX_TEMPLATES will also be archived into the shared library.

ARCHIVE_CXX_LIBNS (CCS Rule Variable): A CCS which will archive a nonshared version of a C++ library. On automatic template instantiating platforms, the template instantiations created via INSTANTIATE_CXX_TEMPLATES and COPY_CXX_TEMPLATES will also be archived into the nonshared library.

machtype (Variable): A string identifying the current platform.

[a|m|]sysdir (Directory Variable; $path/sys/): The top level of where the system dependent files are kept.


[a|m|]bindir (Directory Variable; $path/sys/bin/): The directory where the system dependent command line executables are kept.

[a|m|]exedir (Directory Variable; $path/sys/exe/): The directory where the system dependent executables are located. These executables are generally invoked via the ProMAX/VSP/ProMAX3D/Prospector user interfaces.

[a|m|]libdir (Directory Variable; $path/sys/lib/): The directory where the system dependent libraries are kept.

[a|m|]objdir (Directory Variable; $path/sys/obj/): The top level directory where the object files are located.

[a|m|]objbindir (Directory Variable; $path/sys/obj/bin/): The directory where the object files for the system dependent executables are located. The corresponding executables are generally invoked via the command line, interactively by the user.

[a|m|]objexedir (Directory Variable; $path/sys/obj/exe/): The directory where the system dependent object files (which correspond to the executables in [a|m|]exedir) are located.

[a|m|]objuidir (Directory Variable; $path/sys/obj/ui/): The directory where the object files corresponding to the user interface are located.

[a|m|]objlibdir (Directory Variable; $path/sys/obj/lib/): The directory where the object files corresponding to the various libraries are located.

[a|m|]portdir (Directory Variable; $path/port/): The top level directory of the port hierarchy. Files under here are considered to be portable.

[a|m|]incdir (Directory Variable; $path/port/include/): The top level directory where the include files are stored.

[a|m|]srcdir (Directory Variable; $path/port/src/): The top level directory for all the portable source code.

[a|m|]srcbindir (Directory Variable; $path/port/src/bin/): The directory where the portable source code corresponding to the executables in [a|m|]bindir is located.


[a|m|]srcexedir (Directory Variable; $path/port/src/exe/): The directory where the portable source code corresponding to the executables in [a|m|]exedir is located.

[a|m|]srclibdir (Directory Variable; $path/port/src/lib/): The directory where the portable source code corresponding to the libraries in [a|m|]libdir is located.

libmaxexec (Library Variable; $path/sys/lib/libmaxexec.a): The library containing executive ($path/sys/exe/exec.exe) support functions.

libmaxtool1 (Library Variable; $path/sys/lib/libmaxtool1.a): The library containing some of the modules composing the executive.

libmaxtool2 (Library Variable; $path/sys/lib/libmaxtool2.a): The library containing some more of the modules composing the executive.

libmaxtool3 (Library Variable; $path/sys/lib/libmaxtool3.a): The library containing the rest of the modules composing the executive.

libmaxutil (Library Variable; $path/sys/lib/libmaxutil.a): The library containing general support routines.

libuiutils (Library Variable; $path/sys/lib/libuiutils.a): The library containing general support routines used by the user interface.

libmaxui (Library Variable; $path/sys/lib/libmaxui.a): The library containing more general support routines used by the user interface.

libagfc (Library Variable; $path/sys/lib/libagfc.a): The library containing the old cwp routines used by the various modules in the system.

libpar (Library Variable; $path/sys/lib/libpar.a): A library providing a mechanism for decoding command line arguments.

libagX (Library Variable; $path/sys/lib/libagX.a): A library containing C++ graphics objects for OOP graphics design.

libagXi (Library Variable; $path/sys/lib/libagXi.a): A library containing classes and functions which are not quite ready to be in libagX yet. libagXi is internal to Landmark (denoted by the suffix "i").


libagC (Library Variable; $path/sys/lib/libagC.a): A library containing base templates as well as other objects used by the agX graphics library.

libagCi (Library Variable; $path/sys/lib/libagCi.a): A library containing routines and classes which are not quite ready to be placed in libagC.

libeispack (Library Variable; $path/sys/lib/libeispack.a): A library containing Landmark's version of Lynn Kirlin's eispack library for K-L transforms.

libGeom (Library Variable; $path/sys/lib/libGeom.a): A library containing geometry database routines.

libpagefile (Library Variable; $path/sys/lib/libpagefile.a): A library containing the file manipulation and access routines used by the database.

libgeoquest (Library Variable; $path/sys/lib/libgeoquest.a): A library containing GeoQuest routines used to interface with IES on SunOS 4.1.x platforms.

libpvm (Library Variable; $path/sys/lib/libpvm.a): A library containing Parallel Virtual Machine routines used to run certain jobs in parallel.

libXpm (Library Variable): The color pixmap support library.

libpsplot (Library Variable): The PostScript plotting library.

libpp (Library Variable): The library for creating a promax path variable.

AGCLIBS (Collection of Libraries): A list of advance libraries normally linked with stand-alone executables. AGCLIBS is a convenience to Makefile writers.

MAXTOOLLIBS (Collection of Libraries): A list of maxtool libraries. MAXTOOLLIBS is a convenience to Makefile writers.

libmaxtoolmake (Makefile Variable): The location of the maxtool library makefile.

libmaxutilmake (Makefile Variable): The location of the maxutil library makefile.

libmaxexecmake (Makefile Variable): The location of the maxexec library makefile.

libuiutilsmake (Makefile Variable): The location of the uiutils library makefile.


libmaxuimake (Makefile Variable): The location of the maxui library makefile.

libagfcmake (Makefile Variable): The location of the agfc library makefile.

libagXmake (Makefile Variable): The location of the agX graphics library makefile.

libagXimake (Makefile Variable): The location of the agXi library makefile.

libagCmake (Makefile Variable): The location of the agC library makefile.

libeispackmake (Makefile Variable): The location of the eispack library makefile.

libGeommake (Makefile Variable): The location of the Geom library makefile.

libpagefilemake (Makefile Variable): The location of the pagefile library makefile.

libpvmmake (Makefile Variable): The location of the pvm library makefile.

libXpmmake (Makefile Variable): The location of the Xpm library makefile.

libpsplotmake (Makefile Variable): The location of the psplot library makefile.

libppmake (Makefile Variable): The location of the pp library makefile.

report (Boolean Variable): A yes/no value which determines whether to report errors in the make to a log file.

BACKUP (CCS Variable): A routine which will back up the target to a .bak file. This is intended for use with BACKUP_ON_ERROR.

BACKUP_ON_ERROR (CCS Rule Variable): A routine which handles errors by moving the backup back into the original file (depends on BACKUP and ERROR_HANDLER).

ERROR_HANDLER (CCS Rule Variable): A routine which handles errors in the make. Depending on how report is set, it will log errors to a file or simply report errors to the screen.


define_usrcs (CCS Function Variable): A routine which will figure out what the user source code actually is. For a master make, it will be the same value as $(srcs). For a user make, it will look at $(srcs), figure out which of those files the user has, and return that subset. (Assumes that the user has set a string srcs to contain a list of all source code comprising the executable or library.)

define_asrcs (CCS Function Variable): A routine which will figure out what the actual source code is. It takes $(srcs) and replaces each file in the $(srcs) list with the user version, master version, or advance version. If the user does not have one of the files in his home directory, the master directory is scanned; if it does not exist there, the advance version of the filename is returned. For example, suppose srcs = a.c b.f c.C and that a.c exists in the master, advance, and user directories. Also suppose that b.f exists in the master and advance directories while c.C exists only in the advance directory. A call to $(define_asrcs) will yield a string $(path1)/a.c $(path2)/b.f $(path3)/c.C, where a.c points to the user's version, b.f to the master version, and c.C to the Landmark version. (Assumes that the user has set a string srcs to contain a list of all source code comprising the executable or library.)

define_uobjs (CCS Function Variable): A routine, similar to the define_usrcs routine, except that it translates the directory to the corresponding obj directory and replaces the .c, .f, or .C with .o. (Assumes that the user has set a string srcs to contain a list of all source code comprising the executable or library.)

define_aobjs (CCS Function Variable): A routine, similar to the define_asrcs routine, except that it translates the directory to the corresponding obj directory and replaces .c, .f, or .C with .o. (Assumes that the user has set a string srcs to contain a list of all source code comprising the executable or library.)

AMAKEDEPEND (CCS Rule Variable): A CCS which will figure out what the header file dependencies are and output them to a file defined by $(depend_include). Its include file path is defined by AMDPPFLAGS, and the options given to it can be augmented by setting AMDFLAGS.

AMAKEDEPEND_TO_STDOUT (CCS Rule Variable): A rule, similar to the AMAKEDEPEND rule, except that the output is sent to standard output. This is useful when you have multiple targets but want to append the dependency output to a single file.

ANSIC_COMPILE (CCS Rule Variable): A CCS which takes an ANSI C source file in the dependency and compiles it into an object file using the ANSI C compiler defined in the variable CC. It applies CPPFLAGS as well as CFLAGS in the compilation. If an error occurs, ERROR_HANDLER is invoked.

KRC_COMPILE (CCS Rule Variable): A CCS which takes a K&R C source file in the dependency and compiles it into an object file using the K&R C compiler defined in the variable KRCC. It applies KRCPPFLAGS as well as KRCFLAGS in the compilation. If an error occurs, ERROR_HANDLER is invoked.

define_ugincs


CXX_LINK (Command Variable): A variable which invokes the C++ compiler with CXXLDFLAGS, and with CXXPPFLAGS if your compiler needs include paths in the link phase as well. For compilers with automatic template instantiation, it will also set a template directory corresponding to what is being linked.

CXX_COMPILE (CCS Rule Variable): A CCS which takes a C++ source file in the dependency and compiles it into an object file using the C++ compiler defined in the variable CXX. It applies CXXPPFLAGS as well as CXXFLAGS in the compilation. If an error occurs, ERROR_HANDLER is invoked.

F_COMPILE (CCS Rule Variable): A rule which takes a Fortran 77 source file in the dependency and compiles it into an object file using the Fortran 77 compiler defined in the variable FC. First, it preprocesses the file into a *PP.f file and then compiles the PP.f file. It applies FPPFLAGS as well as FFLAGS in the compilation. If an error occurs, ERROR_HANDLER is invoked.

YACC_COMPILE (CCS Rule Variable): A CCS which takes a yacc source file in the dependency and translates it into C code, perhaps with a header file yy.tab.h. YACC_COMPILE applies YFLAGS when yacc is applied. If an error occurs, ERROR_HANDLER is invoked.

LEX_COMPILE (CCS Rule Variable): A CCS which takes a lex source file in the dependency and translates it into C code. If an error occurs, ERROR_HANDLER is invoked.


AS_COMPILE (CCS Rule Variable): A CCS which takes assembler code and assembles it into an object file. If an error occurs, ERROR_HANDLER is invoked.

COPYFILE (CCS Rule Variable): A CCS which takes a destination file in the target and a source file in the dependency. It then copies the source file to the destination.

LINKFILE (CCS Rule Variable): A CCS which takes a destination link in the target and a source file in the dependency. It then links the destination link to the source file.

COPYLIB (CCS Rule Variable): A CCS which takes a destination library in the target and a source library in the dependency. It then copies the source library to the destination library and ranlibs it.

ARCHIVE_REPLACE (CCS Rule Variable): A CCS which takes a destination library name in the target and a list of object files in the dependency list. It performs an archive replace of all the object files within the target library, then ranlibs the library to update the SYMDEF table. It applies the variable ARCHIVE_REPLACE_FLAGS when archive replacing.

ARCHIVE_CREATE (CCS Rule Variable): A CCS which takes a destination library name in the target and a list of object files in the dependency list. It first removes the old version of the library, if it exists. It then performs an archive create to build the target library, and then ranlibs it. It applies the variable ARCHIVE_CREATE_FLAGS when archive creating.

MAKETARGETDIR (CCS Rule Variable): A CCS which ensures that the destination directory of the target is present. If it is not present, MAKETARGETDIR will create the directory.


UPDATE_[libname] (CCS Rule Variables): These CCSs correspond to the UPDATELIB[libname] rules. The difference is that UPDATE_[libname] rules are used by other routines (UPDATE_LIBRARIES) to automatically generate updated libraries. The only difference from their UPDATELIB[libname] counterparts is the name. The UPDATELIB[libname] macros are in the process of being phased out.

UPDATE_LIBRARIES (CCS Rule Variable): A macro which will update a series of libraries (assumed to be defined in libs).

cleanlib (Rule): This rule will clean the library in the corresponding library directory.

cleanexe (Rule): This rule will clean the executable(s) in the corresponding executable directory.

cleanobjs (Rule): This rule will clean the object files in the corresponding object directory.
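To illustrate how the canned command sequences are meant to be used, a hypothetical library Makefile fragment might look like the following. This is a sketch only: the target paths, the objs bookkeeping, and the exact set of CCSs a real ProMAX library Makefile invokes are assumptions, not copied from the production makefiles.

    # Compile each ANSI C source file into the machine-dependent obj directory.
    $(objlibdir)/mytool/%.o : mytool/%.c
            $(MAKETARGETDIR)
            $(ANSIC_COMPILE)

    # Archive-replace the resulting objects into the library and ranlib it.
    $(libdir)/libmytool.a : $(objs)
            $(ARCHIVE_REPLACE)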

Makefile Techniques

The following tips illustrate how the makefile system can be used to solve common problems.

Creating the executable in a non-standard place

Sometimes you may wish to link the target executable in a non-standard place, such as your current directory. This is useful if you wish to debug an executable in your current directory instead of supplying the somewhat cumbersome pathnames common to the advance tree. To do this, simply supply the variable exe, which contains the path to the target executable name:

    gmake exe=./myexe


Compile without updating libraries

For a faster compile/link, you may supply the ul=no option. This option tells the make to assume that the libraries you are linking with are up to date, foregoing the time-consuming time/date stamp check for each library. Do not use this option if you are concerned that the libraries in question are in fact not up to date.

    gmake ul=no

Creating an exec.exe which does not contain the C++ extensions

Linking via the C++ compiler is slower than using the C compiler due to the complexity that goes into linking C++. If you do not need any of the C++ functionality in your executives, you may use the clink:=yes option in conjunction with the utc rule to create an executable which does not contain any of the C++ functionality. The first time you create such an executive, you must supply the utc rule. The utc rule tells the makefile to Update The Toolcall: a new toolcall is created which does not contain any of the C++ functionality. Having updated the toolcall, successive makes need only give the clink:=yes option. When switching back to a C++ based executive, you must again provide the utc rule to update the toolcall to once again provide the C++ functionality.

    gmake clink:=yes utc

This causes the C-only toolcall to be created and links via the C compiler.

    gmake clink:=yes

This is how you can make from here on out; the utc is necessary the first time only.

    gmake utc

This is how you switch back to a C++ based executive.

Adding a library to the link without changing the Makefile

You may wish to add a library to the link without changing the link line in the Makefile. To do this, you may supply SYSLIBS in the case of C linkage or SYSXXLIBS in the case of C++ linkage. SYSLIBS and SYSXXLIBS enter the library in question after the advance and X11 libraries and before the system libraries. If you need the libraries to be inserted in a different location, you will probably want to go into the Makefile and edit the link. If not, XLIBS, CLIBS, CXXLIBS, and libs are also available.

    gmake SYSLIBS:="libfoo.a"
    gmake SYSXXLIBS:="libC++foo.a"

Debugging a make failure

Figuring out why your make is failing can be a painful experience. These suggestions may help you debug your make problems a bit faster.

    gmake [options] -n >& file

This causes your make to output what it would have done to the file "file". You can now edit this file, adding and deleting options to help figure out just why your make is failing.

Edit the compile/linking options to include the -v option for verbosity. This option may be -show in the latest SGI compilers. This will cause the compiler/linker to go verbose, helping you isolate exactly which phase is going wrong.

You can add print statements to your makefile by doing something like this:

    foo := $(shell echo>&2 "My Variable = $(My_Var)")

Debugging a failed link

A failure to link is one of the most common make failures. To assist in figuring out just why your link is failing, Landmark has provided the Nmgrep utility. The Nmgrep tool is a fairly comprehensive library analysis tool; you should not have to worry about all its bells and whistles. Perl 5.0 must be installed for this tool to function properly. A few example usages of this tool follow.

To get a comprehensive help listing on this tool, type:

    Nmgrep -h

To show DEFs and REFs in $PROMAX_HOME/sys/lib:

    Nmgrep

To show where any routines with trace in the name are defined/referenced in $PROMAX_HOME/sys/lib:

    Nmgrep trace

Using another compiler for this make only

You may wish to substitute the compiler temporarily. CC, KRCC, CXX, and FC correspond to the default ANSI C, K&R C, C++, and Fortran compilers, respectively. Simply supply a definition for these to override the default:

    gmake CC=/usr/bin/cc
    gmake CC="/usr/bin/cc" CXX:="/usr/bin/CC"


Directory Structure

This chapter describes the organization of the ProMAX directories and files.

Topics covered in this chapter:

➲ Directory Hierarchy
➲ Machine-dependent Directories
➲ Directory Naming Conventions
➲ Product-dependent Subdirectories
➲ Third-party Software
➲ Recompilation - GNU Make
➲ Makefile Rules
➲ Makefile Options
➲ User and Master Versions
➲ Master Versions for Clients


Directory Hierarchy

Everything we need to build our products, except for standard, vendor-supplied software such as ANSI C include files, goes under the single directory $PROMAX_HOME. For example, if we need the latest X11R4 (or X11R5, ...) include files, and these files are not supplied with all machine types we support, then we put them in $PROMAX_HOME/port/include/X11/, instead of /usr/local/include/X11/. Having everything under a single directory makes it easier to find things during development and maintenance, to build distribution tapes, and to save/recover new/old releases of the software.

The next level of the hierarchy contains several directories named after machine types, such as rs6000/. These directories contain machine-dependent object code, libraries, and executable programs. Software that does not depend on machine type goes in port/. port/ contains everything necessary to port our software to a new machine type. The etc/ directory contains configuration files (such as the ProMAX config_file) that may vary from machine to machine, even when the machines are of the same type. Many sites will also put their ProMAX data/ and scratch/ directories just beneath $PROMAX_HOME, although these locations may vary.

A listing of some of the more important, higher level directories of the ProMAX hierarchy follows. You may also refer to the Expanded Directory Structure appendix.

    $PROMAX_HOME/
        rs6000/        (sys/ -> rs6000/ on IBM RS/6000 machines)
            bin/
            exe/
            lib/
            obj/
                bin/
                exe/
                lib/
            nodist/
                lib/
        decmips/       (sys/ -> decmips/ on DEC MIPS Rx000 machines)
        sgimips/       (sys/ -> sgimips/ on SGI MIPS Rx000 machines)
        sparc/         (sys/ -> sparc/ on SUN SPARC machines)
        port/
            bin/
            misc/
            menu/
                promax/
                prospector/
                promaxvsp/
                promax3d/
            help/
                promax/
                prospector/
                promaxvsp/
                promax3d/
            man/
            include/
            src/
                bin/
                exe/
                lib/
            nodist/
        etc/
        data/
        scratch/


Machine-dependent Directories

Machine-dependent directories, such as rs6000/ (for IBM's RS/6000) and decmips/ (for DEC's MIPS-based machines), are used for two reasons:

1.  They make it easy for programmers working in their own version of the $PROMAX_HOME hierarchy to build and test code on all platforms. The Makefile will put machine-dependent files in the appropriate machine-dependent subdirectories of a programmer's home directory. The Makefile rules determine machine type automatically, so programmers rarely have to specify machine-dependent directory names.

2.  They provide a standard way for groups with a single large file server to support multiple platforms. To provide a uniform look across different machines, system administrators might do the following on IBM RS/6000s that are NFS clients of a file server igor:

        mount igor:/promax/1998.6/rs6000/ /promax/1998.6/sys/
        mount igor:/promax/1998.6/port/ /promax/1998.6/port/

    If igor is a SUN SPARCstation, then a symbolic link from /promax/1998.6/sys/ to /promax/1998.6/sparc/ should be created on igor so that the /promax/1998.6/sys/ file system appears the same on all machines, including igor. Then, users can add something like /promax/1998.6/sys/bin/ to their PATH and forget about machine-dependent directory names. System administrators would not typically mount igor:/promax/1998.6/etc/ remotely, because the configuration files in /promax/1998.6/etc/ are likely to vary from machine to machine.
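For example, creating the symbolic link described above might look like this on igor (a sketch only; the release path follows the 1998.6 example used in this section):

    % ln -s /promax/1998.6/sparc /promax/1998.6/sys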

Machine-dependent directories are named after the machine type, not the vendor name, so different architectures from the same vendor (for example, DEC’s alpha machine) correspond to different directory names.


Directory Naming Conventions

Where possible, directory names are chosen to follow UNIX conventions. In particular, the names bin/, etc/, include/, lib/, and src/ are used for directories that contain files that an experienced UNIX user would expect.

bin/ directories contain programs or shell scripts that are launched by humans, such as the flow builder. Machine-dependent programs go in machine-dependent directories, such as $PROMAX_HOME/rs6000/bin/. Machine-independent shell scripts go in $PROMAX_HOME/port/bin/. Programs that are typically executed by other programs, such as a ProMAX or Prospector executive, go in the exe/ directory and have names that end in .exe.

Subdirectories beneath src/ correspond to the destination of the compiled code. Thus, src/lib/ contains source code for the libraries in lib/, src/exe/ contains the source for executables in exe/, and so on. The source code for libmaxutil.a is in src/lib/maxutil/; likewise for other libraries. Every subdirectory of src/lib/ contains a Makefile that is responsible for building the corresponding library.

All .o files are built in machine-dependent obj/ directories, so that source code can be compiled for all machine types simultaneously. The hierarchy beneath obj/ is the same as that for src/, and the default Makefile rules exploit this similarity.

Some of our subdirectories, such as misc/, menu/, and help/, have no UNIX counterparts, so their names are simply chosen to reflect the contents of these directories. Directory names contain only lower-case characters (except for SCCS/ directories), simply to make them easier to type.


Product-dependent Subdirectories

Some files with the same names are different for different products. For example, the menu and help files for ProMAX 2D, Prospector, ProMAX VSP, and ProMAX 3D differ. To distinguish among different files for different products, the last subdirectory sometimes reflects the product name. For example, agc.menu appears in menu/promax/, menu/prospector/, menu/promaxvsp/, and menu/promax3d/. The ProMAX flow builder takes advantage of this structure to find and use the appropriate files, depending on which product is being used.

The lib/ directories contain libraries (.a archives) of .o files that are used to build executables. The following table describes some of the more important libraries.

Libraries

libmaxtool*: ProMAX tools for exec.exe only (formerly known as tools*.a). Most programmers will be adding code to these libraries. Multiple libraries are used to reduce the time required to update them.

libmaxexec.*: ProMAX utility functions called by tools and exec.exe only (formerly known as exec.a). Link with this and you will get an executable as big as the exec.exe.

libmaxutil.*: ProMAX utility functions used by any ProMAX program or tool (formerly known as MAX1.a and MAX2.a). Stand-alone programs typically need this library, but DO NOT need libmaxexec.a.

libmaxui.*: ProMAX user interface functions (formerly known as routines.a).

libagfc.*: ProMAX library of sort routines, FFTs, interpolators, etc.

X11/libXaw.a: Our customized and bug-free X Athena widget library.

X11/libXhp.a: Our customized and bug-free X HP widget library.

Libraries developed at Landmark follow the UNIX naming convention of beginning with lib and ending with .a for shared versions.


Third-party Software

Third-party software requires special handling because third parties are unlikely to organize their files in the same way that Landmark does. Two or more vendors may choose the same name for different files. Some vendors permit Landmark to distribute their software and some do not. Today, no two sets of third-party software can be handled in exactly the same way.

When a third party permits Landmark to distribute its software, we simply install it in the usual place in the $PROMAX_HOME hierarchy, except that we put it inside a subdirectory that reflects the vendor's name. An example is the golden.a library from Golden Geophysical, which goes in $PROMAX_HOME/sys/lib/golden/golden.a. We distribute this library because Golden uses hooks into our licensing system that force users of Golden's software to obtain a license from Golden. Another example of software that Landmark distributes is SDI's viewer (in $PROMAX_HOME/sys/exe/sdi/sdi_view) for CGM metafiles, which has its own security/licensing scheme.

All third-party software is installed (or symbolically linked) in the $PROMAX_HOME directory tree with a unique directory name corresponding to the vendor. For example, inside $PROMAX_HOME/port/include/ are geoquest/ and sdi/ directories. These vendor-specific directories minimize file name collisions. (Note that both geoquest/ and sdi/ have a portable.h file.) Software that includes portable.h should do so with #include geoquest/portable.h or #include sdi/portable.h, depending on which portable.h is needed. In some cases both may be included, so the vendor-specific directory name is important. As always, full pathnames should not and need not be specified in these #include statements.


Recompilation - GNU Make

Every executable or library has its own Makefile, which is located in the directory containing its source code. For example, the Makefile for sys/exe/exec.exe is located in port/src/exe/exec/. Likewise, the Makefile for the library sys/lib/libmaxutil.a is located in port/src/lib/maxutil/.

Landmark Makefiles must be used with the GNU make program (which we call gmake); they are incompatible with the make program provided with most UNIX systems. We use gmake primarily because it provides a powerful and uniform set of features for all UNIX systems. The set of make features provided by all UNIX systems is inadequate for the management of software development projects as complex as ours. The Free Software Foundation supports and distributes GNU make with source code and documentation that is lucid and complete. The GNU Make Manual (Stallman and McGrath, 1991) is highly recommended. In addition, a UNIX man page is located in $PROMAX_HOME/port/man/man1/.


Makefile Rules

Each developer of a library or executable program must provide a Makefile to build it on all machine types supported by Landmark. Most programmers will never have to write a Makefile from scratch because there are several examples in the src/ directories. However, if you write or modify Makefiles, be sure the Makefile does not reference anything outside of the $PROMAX_HOME directory tree. This simple rule:

•  facilitates system administration

•  makes porting to new machines easier

•  helps ensure that we distribute everything that clients need for their own ProMAX development

Makefiles should reference only those libraries and include files that are absolutely necessary to build whatever it is that they are supposed to build. Makefiles should provide recipes for building programs and/or libraries with no extra ingredients. If programmers follow these rules, then it is easy to determine, for example, which of our programs may be affected if GeoQuest modifies its libraries.

Writing Makefiles is simplified by the standard definitions and rules contained in advance.make and other files stored in the $PROMAX_HOME/port/include/make/ directory. advance.make sets numerous variables for directory paths that are used in Makefiles, among other things. In particular, advance.make automatically determines the machine type and sets variables for machine-dependent libraries, compilers, flags, etc. Hardwired pathnames to libraries, include files, etc., should not be specified in Makefiles.

Another useful file in $PROMAX_HOME/port/include/make/ is maxprog.make. This file is included by Makefiles that build stand-alone programs and socket tools, such as autostat.exe and gbmig.exe. The core of the Makefiles for most stand-alone programs and socket tools lies in the definitions and rules in maxprog.make and advance.make.

Yet another useful file in $PROMAX_HOME/port/include/make/ is machine_type.make, where machine_type might be rs6000, sgimips, or decmips. This file sets variables which are specific to that platform.
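As a rough illustration of what such a Makefile might look like, consider the sketch below. The include order, the srcs variable (the convention noted in the variable descriptions above), and the source file name are assumptions; a real stand-alone tool Makefile from the distribution should be used as the actual starting point.

    # Hypothetical Makefile sketch for a stand-alone program.
    # Pull in the standard definitions and rules.
    include $(PROMAX_HOME)/port/include/make/advance.make

    # List the source files that make up this executable.
    srcs := my_tool.c

    # Let the stand-alone program rules do the rest.
    include $(PROMAX_HOME)/port/include/make/maxprog.make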


Makefile Options

To make a program or library, just cd to the directory containing the source code for the program or library, and type gmake.

All Makefiles that include advance.make provide two important command line options. The first is for debugging: typing gmake debug=yes turns on debugging and turns off optimization. The default is debug=no. Another useful command line option for gmake is gmake clean, which deletes libraries and objects in your personal directories, or gmake clearlibs, which only deletes libraries.

Other supported options are purify=yes (Sun only), quantify=yes, and loadmap=yes. purify=yes will try to run the commercial Purify package on compiled code in order to find memory leaks. quantify=yes will try to run the commercial Quantify package on compiled code in order to profile the code. (This assumes that Purify and/or Quantify are loaded on your system.) loadmap=yes is an IBM option which will create a loadmap file reporting the source of undefines and/or multiple defines in your code.

An executable shell script, $PROMAX_HOME/port/bin/Makeexec, is provided to make it easy to build any variant of exec.exe. This script simply cd's to the exec source directory and runs gmake with the command line arguments provided to Makeexec. For example, typing

    Makeexec debug=yes

will build the ProMAX exec with debugging enabled. Makeexec is most convenient after modifying a tool, such as in lib/maxtool/agc/, although it can be used from within any subdirectory of $PROMAX_HOME/port/src/.


User and Master Versions

The Makefiles have been designed to support simultaneous development by a large number of programmers. Each programmer should have his own version of the Master $PROMAX_HOME directory tree in his home directory. For example, /home/dave/$PROMAX_HOME would contain Dave’s User version of $PROMAX_HOME. The actual locations of the User and Master versions are set in advance.make.

The Makefiles behave differently depending on which version (User or Master) is the current working directory when running gmake. For example, if Barry types gmake in $PROMAX_HOME/port/src/lib/maxtool/ (the Master directory), then the Makefile will compile source code as necessary to ensure that the libraries $PROMAX_HOME/sys/lib/libmaxtool*.a are up-to-date. However, if Dean types gmake in /home/dean/$PROMAX_HOME/port/src/lib/maxtool/, then the Makefile will update /home/dean/$PROMAX_HOME/sys/lib/libmaxtool*.a via the following two-step algorithm:

1. Any of the Master versions of libmaxtool*.a that are newer than the corresponding User versions are copied to the User directory, effectively replacing the User versions.

2. The User versions of libmaxtool*.a are updated as necessary by compiling source code in the User subdirectories of maxtool/.

This algorithm ensures that Dean always has the latest tools that have been installed in the Master versions, while enabling him to have his own User version for development.

An important difference between User and Master versions of $PROMAX_HOME is that a User version need not contain everything that is in the Master version. Typically, /home/larry/$PROMAX_HOME might contain subdirectories for only those tools on which Larry is working, plus the subdirectories necessary for his own version of exec.exe. When Larry links his User version of exec.exe, the exec Makefile will ensure that any prerequisite libraries found in Larry’s $PROMAX_HOME directory tree are up-to-date. Any prerequisite libraries not found in Larry’s $PROMAX_HOME tree will be obtained from the Master version of $PROMAX_HOME, and the Master libraries will be assumed (not ensured) to be up-to-date.

A general rule to remember in using the $PROMAX_HOME Makefiles is that they will search for include files and libraries in the User version and the Master version of $PROMAX_HOME, in that order. If Christof has his own User version of the include file cglobal.h, then that version will be included when he makes something in /home/stof/$PROMAX_HOME/port/src/. Otherwise, the Master version of cglobal.h will be included. The search order used by the Makefiles implies that Christof does not need a copy of every include file or library in order to build his own User version of exec.exe, but that he can override the Master versions as necessary during his development and testing.


Master Versions for Landmark Clients

Makefiles were designed to support development efforts by Landmark clients, who themselves may have a large number of programmers. A programmer at a client site must be able to develop and test new ProMAX tools without changing anything in the site’s own Master version of the $PROMAX_HOME directory tree. Likewise, the ProMAX guru at a client site who is in charge of the Master version should not change the software distributed by Landmark. Therefore, to facilitate their use by clients, the Makefiles actually support three different versions of $PROMAX_HOME—User, Master, and Landmark.

By default, the Master version for clients is assumed to be the Landmark version. This default will work well for clients with no more than one or two programmers. However, clients with more programmers will likely want the Master version to be different from that provided by Landmark. These clients would make their own top directory, such as $PROMAX_HOME/client/, and then define the variable ctopdir in advance.make. If ctopdir is defined, then it is assumed to be the root directory of the client’s Master version of the software. The client’s programmers would have their own User versions, just like programmers at Landmark. A client programmer who develops a new ProMAX tool could make it available to all ProMAX users at the site by installing it in the Master directory tree and remaking the Master versions of exec.exe.

Clients with their own Master version will also need access to numerous files from Landmark’s version of the software. Therefore, when Makefiles search for include files or libraries, the precedence of the three versions goes as you might expect—User, Master, and Landmark, in that order—so that anything that is not contained in the User or Master versions will be picked up from the Landmark version.

At Landmark, the variable ctopdir is left undefined, so that the Master version is the Landmark version. You might say that Landmark is its own client. Only programmers at Landmark should modify the Landmark version of $PROMAX_HOME.

Reference: Stallman, R. M., and McGrath, R., 1991. GNU Make: a program for directing recompilation. Free Software Foundation.


C Environment

This chapter introduces the ProMAX C programming environment and highlights some of the conveniences within that environment. (See the C Programming Examples appendix for examples of C include files and simple processes.)

Topics covered in this chapter:

➲ C Process Components (non-socket tools)
➲ C and FORTRAN Links
➲ Global Parameters


C Process Components

The components of a ProMAX process written in the C programming language are as follows:

• a menu
• an init_ subroutine
• an exec_ subroutine
• included files
• an entry in the toolcall source file (toolcall.c or toolcall.f)
• an entry in the Processes list

The menu is written using Lisp commands. The initialization routine is executed once. It is used to get user input parameters, allocate any memory that is needed throughout the process, and to do anything that you will need to do just once in the course of the processing. It is also used to declare the number of parameters that need to be saved for re-entrancy. Included files usually have an .h appended to their name, but these included files do not generally contain parameters for re-entrancy. The entry of your new tool in the toolcall.c or toolcall.f file tells the trace Executive about the existence of your new processing tool. Finally, the entry in the Processes list points to your menu.


C and FORTRAN Links

The UNIX convention for linking C and FORTRAN subroutines is demonstrated in the following examples.

Calling a FORTRAN Routine from a C Routine

The following syntax is used to call a FORTRAN routine from a C routine. Suppose we have the following FORTRAN routine:

      SUBROUTINE A_FORTRAN_ROUTINE( ALPHA, CNAME, NINT, ARRAY, CNAME2 )
      REAL ALPHA, ARRAY(*)
      INTEGER NINT
      CHARACTER*(*) CNAME, CNAME2

We call it from a C routine as follows:

      float alpha, array[128];
      int nint;
      char cname[8], cname2[64];

      a_fortran_routine_( &alpha, cname, &nint, array, cname2, 8, 64 );

Note that:

• The name is all lower case with an underscore appended to the name.
• Only addresses are passed to FORTRAN (call by reference).
• The lengths of the character strings are appended to the calling arguments.

Calling a C Routine from a FORTRAN Routine

The following syntax would be used to call a C routine from a FORTRAN routine. Suppose we have the following C routine:

      void a_c_routine_( float *alpha, char *cname, int *nint, float *array, char *cname2 );

We call it from a FORTRAN routine as follows:

      CALL A_C_ROUTINE( ALPHA, CNAME, NINT, ARRAY, CNAME2 )


Note that the C subroutine has an “_” at the end of the name. This is required if a C subroutine is going to be called from a FORTRAN routine. Another requirement if a C routine is to be called from FORTRAN is that the calling arguments of the C routine must be call-by-reference; in other words, the arguments must be pointers rather than just values.

Given these rules and the fact that all ProMAX modules get called from the FORTRAN subroutines in toolcall.f or analogous C routines in the equivalent toolcall.c, the initialization routine in a C module must be named

      init_name_( int *len_sav, int *itooltype )

and the execution routine must be called

      exec_name_( float *trace, float *rthdr, int *ithdr )

In other words, the “_” must follow the tool name and all arguments must be pointers. In this example, the exec_ calling arguments are for a simple tool.


Global Parameters

The cglobal.h file (listed in the C Programming Examples appendix) provides the facilities for accessing the ProMAX external global common blocks using C structures. In FORTRAN, common blocks are stored in a format similar to structures in C. For example, using cglobal.h, the C variable globalRuntime->samprat corresponds to the FORTRAN variable SAMPRATz of the GLOBAL_RUNTIMEcz common block. Other global variables follow the same style of using the common block name as the structure name and the same variable name in lower case without the trailing z. The FORTRAN constants, such as IENGLISHpz, are identical in C, keeping the upper case but with the trailing pz dropped (IENGLISH). For a complete listing of the C external global variable names and constants, see the Global Parameters chapter.
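As a minimal sketch of this naming correspondence, a C routine might read the run-time globals as shown below. The helper name is hypothetical, and the assumption that samprat is the sample interval in milliseconds should be checked against the Global Parameters chapter.

    #include "cglobal.h"    /* declares the globalRuntime structure pointer */

    /* Hypothetical helper: the recording length of a trace in milliseconds,
     * computed from the run-time globals described above. */
    static float traceLengthMs( void )
    {
        return globalRuntime->samprat * (globalRuntime->numsmp - 1);
    }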

Trace Header Index Values

Array index values in C start with 0 while array index values in FORTRAN typically start with 1. The ProMAX trace header routines for the C programming language always use the C standard of 0 being the index of the first element in a trace header array. ProMAX trace header routines for C all start with the three letters “hdr” followed by other letters; for example, hdrIndex is a function that returns the index of a trace header, the index appropriate for the C programming language. The index for standard headers can be found in the stdHdr structure defined in $PROMAX_HOME/port/include/cglobal.h. Note that the header index values in stdHdr are for FORTRAN arrays since the stdHdr structure points to the same place in memory as the trace executive’s standard header common block STD_HDRcz (defined in $PROMAX_HOME/port/include/header.inc). C programmers can use the macro STDHDR(x) to subtract 1 from the value of the standard header, or the C programmer can simply subtract 1 from the standard header to obtain the correct C header array index.
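A small sketch of this index conversion follows; the helper name is hypothetical, and only the STDHDR macro and the 1-based indices stored in stdHdr are taken from the text above.

    #include "cglobal.h"

    /* Read one floating point header word given a FORTRAN-style (1-based)
     * standard header index such as those stored in stdHdr.  STDHDR()
     * subtracts 1 to produce the 0-based index that C arrays expect. */
    static float headerValue( const float *rthdr, int iFortranIndex )
    {
        return rthdr[ STDHDR( iFortranIndex ) ];
    }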


Re-Entrancy

ProMAX C modules handle re-entrancy by putting the permanent variables in a structure called parms, using the syntax shown below. ProMAX FORTRAN modules handle re-entrancy by keeping permanent variables in the common block SAVED_PARMS. The Executive copies the variables out of SAVED_PARMS to a private storage area after each routine is called to avoid a conflict between variables of two separate calls of the same module in a flow. The variables are then copied back before the routine is called again.

      BEGINPARMS
          int   int1, anotherInt;
          float aFloat, anotherFloat;
          Tbl  *vel_tbl;
      ENDPARMS(parms);

This block is placed outside of the init and exec subroutine blocks. The variables are then referenced in init_ and exec_ using the standard C syntax for accessing a member of a pointer to structure: parms->int1, or parms->aFloat.

The *len_sav calling argument for init_name_( ) can be conveniently set as follows:

      *len_sav = NPARMS(parms);

BEGINPARMS, ENDPARMS, and NPARMS are macros defined as follows:

      #define BEGINPARMS static struct{
      #define ENDPARMS(p) } *(p)=(void *) (&saved_parms_.buffer[1] );
      #define NPARMS(p) (2+sizeof(*p)/sizeof(float));
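For illustration, a minimal C module skeleton built around these macros might look like the following. The tool name and the scalar parameter are hypothetical, retrieval of the parameter from the packet file is omitted, and ISIMPLE is the C form of the simple tool-type constant described in the Tool Types chapter.

    #include "cglobal.h"

    BEGINPARMS                     /* permanent (re-entrant) variables        */
        float scalar;              /* hypothetical user parameter             */
    ENDPARMS(parms);

    void init_my_tool_( int *len_sav, int *itooltype )
    {
        parms->scalar = 2.0f;             /* would normally come from the menu */
        *len_sav      = NPARMS(parms);    /* words of saved parameters         */
        *itooltype    = ISIMPLE;          /* a simple, trace-in/trace-out tool */
    }

    void exec_my_tool_( float *trace, int *ithdr, float *rthdr )
    {
        int i;

        if( globalRuntime->cleanup ) return;   /* last call; nothing to free here */

        for( i = 0; i < globalRuntime->numsmp; i++ )
            trace[i] *= parms->scalar;
    }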


Tool Types

ProMAX supports three main types of tools: executive, stand-alone, and IPC. This chapter provides an overview of these types.

Topics covered in this chapter:

➲ Executive Tools
➲ Simple Tools
➲ Ensemble Tools
➲ Panel Tools
➲ Single Buffer Tools
➲ Double Buffer Tools
➲ Complex Tools
➲ Stand-alone Tools
➲ IPC Tools


Executive Tools

The Executive supports six varieties of tools: simple, ensemble, panel, single buffer, double buffer, and complex (with the input tool being a special case of the complex tool). Each of these tool types is designed to expedite the coding of typical trace handling situations that arise in seismic processing. The Executive attempts to do as much as possible to simplify data management and the development environment (see the Executive chapter).

Each of the Executive tools requires a subroutine known as the init phase and another subroutine known as the exec phase. The name of an init or exec subroutine is dictated by ProMAX. The name of the init routine is the tool name preceded by INIT_. The name of the exec phase is the tool name preceded by EXEC_. The Executive uses the tool name found in the exec_data portion of each menu in the flow (see the Menu chapter), prepended with init or exec, to call the appropriate tool.

The following sections describe the purpose and general structure of the init and exec subroutines, and the mechanism by which processing parameters are passed between the routines. The buffered tools (single and double buffer tools) also require a subroutine called the flow routine, which is discussed later in the buffered tools sections.

init Subroutine

The init routines are responsible for:

• getting menu parameters from the packet flow file and saving them in the common block (FORTRAN) or in external parameters (C) for use in the exec subroutine

• doing one time calculations, such as calculating and saving a filter for application in the exec phase. The filter’s location in memory, not the filter itself, would be saved in the common block defined in the include file

• reserving any memory that would be held for the entire job. For example, memory buffers of the size of a few traces would be considered too small to bother allocating and deallocating in the exec phase. Buffers the size of a shot gather would normally be allocated in the exec phase and de-allocated after each group of traces exited the tool.

• creating or deleting new header or database entries and resetting any global runtime variables

• checking parameters, memory and disk usage requirements, etc., and erroring out if necessary

In short, the init subroutine does things that need to be done just once during the execution of a particular processing tool.

The final and important responsibility of the init subroutine is to set the values of the init subroutine return arguments called LEN_SAV and ITOOLTYPE. The LEN_SAV parameter is the number of memory words allocated to keep re-entrant parameters and variables. ITOOLTYPE is an integer variable indicating the tool type. The tool types are defined in the global parameter include files: global.inc for FORTRAN and cglobal.h for C. Both of these variables must be set prior to exiting the init routine. The init phase routine for all tool types has the same calling arguments, LEN_SAV and ITOOLTYPE. The following is the subroutine definition for the Amp Ratio exercise (see the Simple Tool Examples appendix):

      SUBROUTINE INIT_AMP_RATIO( LEN_SAV, ITOOLTYPE )

In the C programming language the function definition is

      void init_amp_ratio( int* len_sav, int* itooltype )

parms Structure and .inc File

As discussed above, the init subroutine retrieves processing parameters from the menu. The processing parameters are passed from the init to the exec subroutines by way of a common block called SAVED_PARMS in FORTRAN and through an external structure called parms in C processes. The SAVED_PARMS common block and parms structure must be used so that the trace executive can save the processing parameters for each instance of a tool in a flow. This allows a tool to be used more than once in a flow while saving the parameters for each occurrence of the tool. For example, the band-pass filter module could be used twice in one flow, but since there is only one subroutine for band-pass filter, the parameters for each occurrence of the routine in the flow must be kept separate and loaded into the routine when that occurrence is called by the trace executive. The ability to use a routine more than once in a flow is called re-entrancy.

In a FORTRAN module, the SAVED_PARMS common block is a standard common block. In C, the parms structure is created through use of the BEGINPARMS and ENDPARMS macros which are defined in $PROMAX_HOME/port/include/cglobal.h, where $PROMAX_HOME is the directory path where the ProMAX system is installed. An example of the SAVED_PARMS common block and its use can be seen in the .inc files in the directory $PROMAX_HOME/port/src/lib/maxtool/amp_ratio. An example of the parms structure and its use can be seen in the .c files in $PROMAX_HOME/port/src/lib/maxtool/ampRatio.

exec Subroutine

The exec subroutine is the subroutine that actually processes data. The processing parameters are passed to the subroutine through the saved parameters. The exec subroutine is normally called multiple times within a processing flow since it processes a natural grouping of traces and then passes those traces along to tools that are further down the flow. The exec subroutine of a band-pass filter program, for example, would be called each time a new trace is passed down the pipeline to be filtered. Since the exec routine is called multiple times, it is good programming practice to deallocate memory that is allocated within the exec subroutine before the routine returns control to the trace executive.

A feature common to all exec tools is the cleanup condition. After the last trace has passed through a tool in the flow (the rest of the flow may have been executing for some time), the Executive will call that routine one more time with the global runtime ICLEANUPz set to true (=1). In C this value is in globalRuntime->cleanup. This gives the tool an opportunity to release any allocated memory, flush buffers of data, etc., and signals the Executive not to call this portion of the flow again.


Be careful to return from the routine immediately after these tasks and not continue on in the routine. For all tool types except complex and buffer tools, the Executive automatically handles the last trace condition.

The complete calling argument list for an exec subroutine is dependent upon the type of the processing tool (COMPLEX, SIMPLE, etc.). There are some common features to the argument lists, however. In all exec subroutines the calling arguments include a buffer to hold a data trace array, a trace header array for integer trace header values, and another trace header array for floating point trace header values. Note that the integer and floating point header arrays are actually equivalenced to the same locations in memory. The subroutine definition for amp_ratio in both FORTRAN and C is as follows:

      SUBROUTINE EXEC_AMP_RATIO( TRACE, ITHDR, RTHDR )

      void exec_amp_ratio( float* trace, int* ithdr, float* rthdr )

Simple Tools

Simple tools are designed for those numerous seismic data processing tools that only require a single trace at a time. Scaling, bandpass filtering, spiking decon, NMO correction, static shifting, and geometry installation are all examples. The simple tool assumes a single trace will be input, the tool will process the trace, and then output the processed trace to be picked up by the trace executive routine. No buffering or special directives to the Executive are required.

The amp_ratio example module makes an approximate first break pick by finding the maximum of the ratio of a pair of adjacent sliding time windows. After making the pick, the user optionally stores the results in the trace headers and TRC Ordered Database. The program also demonstrates the use of interpolated time gates to limit the pick search range. This example can be found in the Simple Tool Examples appendix. Before leaving the init phase of amp_ratio, or any simple tool, ITOOLTYPE must be set to ISIMPLEpz.


The exec phase routine for the Amp Ratio simple tool has the following calling arguments:

      SUBROUTINE EXEC_AMP_RATIO( TRACE, ITHDR, RTHDR )

TRACE is a buffer containing a single input or output trace array (NUMSMPz long; see the Global Parameters chapter). The Executive passes in a trace and returns the processed trace (in the TRACE buffer) to the flow for subsequent processing upon exiting. ITHDR and RTHDR are fixed and floating point equivalenced trace header arrays (NTHz long; see the Global Parameters chapter) which are passed in and out with the corresponding trace. The ampRatio example in C has the same arguments of course, but trace, ithdr, and rthdr are all pointers to arrays:

      void exec_amp_ratio_(float *trace, int *ithdr, float *rthdr)

Simple tools require little additional discussion except for runtime variables. Consider the case of resampling the data from 2 ms to 4 ms. Both the number of samples per trace and the sample rate change from that point in the flow down. Remember from the Executive chapter that a separate copy of the runtime variables is kept for each tool in the flow. The last tool’s copy was given to you at the initiation of your init phase; you must change these as needed before exiting. The next tool in the flow will inherit your changes. With each call of your exec phase, the globals will have the new, updated versions; therefore, if you need some of the old values, you must store them as new variables in the common block.

In this example, the runtime variables NUMSMPz (number of samples/trace) and SAMPRATz will change. Specifically, consider that NUMSMPz will be about half the size after processing by the tool. So what is the dimension of the TRACE array passed to the exec phase of the tool? The Executive will take care of you and set it to the larger of the incoming or outgoing value. In this example, NUMSMPz may be set at 1001, but the incoming trace buffer would be filled with 2001 samples in order to handle the incoming trace. This sort of hand holding can be expected of the Executive even in more complex situations described later.
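A brief sketch of the init-phase bookkeeping for this resampling case is shown below. It is a fragment that would sit inside the init_ routine; numsmp_in and samprat_in are hypothetical members of the parms structure, and the resampling itself is not shown.

    /* Remember the incoming geometry, then advertise the outgoing geometry
     * to the tools below this one in the flow. */
    parms->numsmp_in  = globalRuntime->numsmp;            /* e.g. 2001 at 2 ms  */
    parms->samprat_in = globalRuntime->samprat;

    globalRuntime->numsmp  = (parms->numsmp_in + 1) / 2;  /* about half as many */
    globalRuntime->samprat = parms->samprat_in * 2.0f;    /* 2 ms becomes 4 ms  */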


Ensemble Tools

The next most common bundle size for traces is an ensemble, where data is collected as shots, CDP’s, receivers, offsets, etc. Ensemble tools pass an ensemble of traces in and out of the exec routines. Two dimensional filtering, such as FK, is a good example of such processing. An ensemble tool must input and output one ensemble in each call of the exec routine; however, the number of traces in the output ensemble can be different than the input.

Refer to the Ensemble Tool Examples appendix for examples of an ensemble tool. The AVO attribute analysis creates either a zero offset or gradient stack of incoming ensembles, and provides an example of fewer traces in the output ensemble than the input. In the prestack trace interpolation example, more traces are output than were input. Before leaving the init phase of an ensemble tool, ITOOLTYPE must be set to IENSEMBLEpz in FORTRAN or IENSEMBLE in a module written in C.

The exec phase of a FORTRAN ensemble tool has the following calling arguments:

      SUBROUTINE EXEC_AVO( TRACES, ITHDRS, RTHDRS, NSTORED )

and for C the exec arguments look like this:

      void exec_avo_( float *traces, int* ithdrs, float *rthdrs, int *nStored )

The input trace and header arrays are two dimensional in the sense that there are multiple traces and headers in each array. A FORTRAN program can dimension the TRACES array as

      REAL TRACES( NUMSMPz, MAXDTRz )

and the headers as

      REAL RTHDRS( NTHz, MAXDTRz )
      INTEGER ITHDRS( NTHz, MAXDTRz )

where MAXDTRz is the maximum number of traces that the tool is going to see in any trace ensemble in the flow. MAXDTRz is one of the global variables and its value is maintained by the Trace Executive. NTHz is the number of trace header entries in the trace header array.

Most C programmers find it convenient to allocate an array of (float*) of length globalRuntime->maxdtr (the same value as MAXDTRz discussed above) and point the beginning of each data trace to a place in the array. The trace header arrays are handled in a similar way. The ProMAX routine fVecTo2d() is written to make this task easy.

The input trace and header buffers may not be completely filled if the current ensemble is not the largest ensemble in the dataset. The NSTORED (*nStored in C) argument passes the number of traces in the current ensemble. If the number of traces in the ensemble is changed (that is, more or fewer than NSTORED will be output), then NSTORED must be reset accordingly prior to exit from the exec subroutine.

The Executive allocates the required memory and starts accumulating traces just prior to calling the exec ensemble routine. When an end-of-ensemble flag is found in a trace header, indicating the end of the current ensemble, the Executive calls your exec routine and passes the ensemble of traces and headers. Your only responsibility is to return an ensemble. You are at liberty to change the ensemble size, even to one in the case of stacking, but the ensemble flag in the trace headers must be properly set upon exiting. Of course, if you change the ensemble size by stacking or interpolating traces, you are responsible for changing the runtime variable MAXDTRz (globalRuntime->maxdtr in C) to reflect this change. Remember that MAXDTRz or globalRuntime->maxdtr must be changed in the init_ subroutine, NOT in the exec subroutine.

You might wonder what the Executive would do for the buffer size of TRACES if MAXDTRz was doubled for trace interpolation. The input buffer will be filled with the original set of, for example, shot traces, but the buffer would be dimensioned large enough to accommodate the larger output ensemble size. It is also worth pointing out again that the Executive is allocating and de-allocating these input and output buffers automatically as traces drop through the flow.
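The pointer setup described above can be sketched as follows, done by hand here rather than with fVecTo2d(), whose calling sequence is not reproduced in this guide. The trace-after-trace layout of the flat buffer follows the FORTRAN dimensioning shown above.

    #include <stdlib.h>
    #include "cglobal.h"

    /* Build an array of row pointers so the flat exec buffer can be addressed
     * as tr[itrc][ismp].  The caller should free() the pointer array when done. */
    static float **makeRowPointers( float *traces )
    {
        int     itrc;
        int     numsmp = globalRuntime->numsmp;
        int     maxdtr = globalRuntime->maxdtr;
        float **tr     = (float **) malloc( maxdtr * sizeof(float *) );

        for( itrc = 0; itrc < maxdtr; itrc++ )
            tr[itrc] = traces + itrc * numsmp;

        return tr;
    }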

Panel Tools

Panel tools are designed to handle situations in which a 2D processing step is to be applied to a dataset but there is not sufficient room in memory to hold the entire 2D dataset at one time. Post-stack migration and FK filtering of large stacked sections are two examples. With a panel approach, the dataset can be divided into pieces that can be comfortably processed by the system and then appropriately blended back together.

Panel tools attempt to collect a set number of traces to process in each call of the exec routine. The exec subroutine processes the 2D array that has been passed to it and passes the array back out. The trace executive blends the processed array back together with the adjacent panels that have been processed in a like manner. The following figure depicts a stacked section; the dashed vertical lines represent data traces, and the solid vertical lines represent the panel of traces from within the section that is being processed. The traces at the edge of the panel are summed with the traces at the edge of the previous panel to provide continuity from panel to panel in the processed section.

A panel of traces within a stacked section

Panel tools can panel through a large ensemble as well as through a stacked section. If an ensemble is found to be smaller than the panel size, the panel exec will process the ensemble as an ensemble tool would, without paneling or blending the traces back together. An exception to this is the special case in which an ensemble size is equal to one trace, since this typically represents stack data. Panel tools are generally written in such a way that the panel size is larger than a typical shot ensemble.


They will process shots (and small to moderate stack datasets) like an ensemble tool, but will still panel for large stack datasets or anomalously large ensembles (common offset sort).

In a panel tool, the programmer codes simply for a matrix of traces. The Executive handles the buffering of traces and the overlap blending automatically. The only special coding required is for a panel that has fewer than the expected number of traces. When you reach the end of a dataset or a large ensemble there will usually be insufficient traces to complete the panel. You cannot anticipate when this will happen and must be ready to deal with it for any panel.

Before leaving the init phase of a panel tool, ITOOLTYPE must be set to IPANELpz in a FORTRAN module or to IPANEL in a C module. Another routine that must be called in the init subroutine of a panel tool is EX_PANEL_PARMS in FORTRAN or exPanelParms in C. The arguments to EX_PANEL_PARMS and exPanelParms notify the trace executive of the panel size, overlap, and the number of traces to mix between sequential panels. They also tell the trace executive how much padding is required.

The panel width, overlap, and mixing are demonstrated by the example illustrated in the figure below. The symbol “O” in the figure is used to indicate a trace that is output after the panel is processed. An “X” indicates a trace that is discarded after the panel is processed. An “M” indicates a trace that is mixed with a trace from a previous panel and then passed on in the trace pipeline for further processing. The variable PANEL_SIZE is the number of traces in the panel. PANEL_EDGE is the number of traces on either end of the panel that will be either discarded or mixed after the panel is processed. The variable PANEL_MIX is the number of traces at either end of the panel that will be mixed with adjacent panels.


Adjacent panels in the figure are represented by rows of “X”, “M”, and “O” characters.

A. NPANEL_SIZE=12, NPANEL_EDGE=3, NPANEL_MIX=0

      OOOOOOOOOXXX
      XXXOOOOOOXXX
      XXXOOOOOOOOO

B. NPANEL_SIZE=12, NPANEL_EDGE=3, NPANEL_MIX=1

      OOOOOOOOOMXX
      XXMOOOOOOMXX
      XXMOOOOOOOOO

C. NPANEL_SIZE=12, NPANEL_EDGE=3, NPANEL_MIX=3

      OOOOOOOOOMMM
      MMMOOOOOOMMM
      MMMOOOOOOOOO

Note that the mix traces are a subset of the panel edge; therefore, the number of mix traces must be less than or equal to the number of edge traces. If the init subroutine of a panel tool does not call EX_PANEL_PARMS, then the tool defaults to an ensemble tool and the advantages of the panel tool are lost.

The exec phase routine for a panel tool has the same calling arguments as the ensemble tool:

      SUBROUTINE EXEC_PANEL_TEST( TRACES, ITHDRS, RTHDRS, NSTORED )

for FORTRAN, and

      void exec_panel_test( float* traces, int* ithdrs, float* rthdrs, int* nstored )

for C. The NSTORED argument passes the number of traces found in the current panel. NSTORED will be equal to the panel size in traces specified in the init phase, except when the last trace in the panel is at the end of a dataset or at the end of an ensemble that was not large enough to fill out the panel.


Panel tools offer padding in both time and space. This is to facilitate such things as in place FFT’s without the allocation of additional memory. When using time padding, the traces that are input to the exec subroutine are of length NUMSMPz + time pad length. The executive provides the traces in a buffer that is long enough to hold the padded trace, then shortens the trace on output to NUMSMPz for subsequent processes. The value of NUMSMPz remains set to the number of samples that will be output from the tool after the panel tool is finished processing. Therefore, NUMSMPz is always less than or equal to the padded trace length. When padding traces are used, the padded traces occur at the trailing edge of the 2D array containing the data traces. In other words, the first trace in the input array is a valid, live trace. There are NSTORED live traces in the array. These live traces are followed in the array by the number of padding traces that were specified in the init subroutine. The padded regions always contain zero amplitudes; a padded 2D array input to the exec subroutine of a panel tool looks like the following figure in which the dark shaded region represents the original data and the light shaded region represents the padded region of zero values.


A padded 2D array: the original live data traces (dark region) are followed by padding traces along the trace axis, and time padding extends each trace along the time axis (light region of zero values).

In FORTRAN programs it is vital to dimension the input trace array to the exec routine as being NUMSMPz + the number of padding samples. Therefore, the number of padding samples must be sent to the exec routine from the init routine through the SAVED_PARMS common block. Panel modules written in C must also handle the extra trace length. See the example programs in both C and FORTRAN in the panel test sections of the Panel Tool Examples appendix.
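A corresponding fragment of C bookkeeping is sketched below; it would sit inside the exec routine, and parms->ntimepad is a hypothetical saved parameter holding the number of time padding samples set in the init phase.

    /* Each trace in the panel buffer is numsmp + time-pad samples long, so the
     * stride between traces must include the pad even though only numsmp
     * samples are live.  The pad region contains zeros and may be used as
     * in-place FFT workspace. */
    int ismp, itrc;
    int nlive    = *nstored;                                  /* traces in this panel */
    int tracelen = globalRuntime->numsmp + parms->ntimepad;   /* padded trace length  */

    for( itrc = 0; itrc < nlive; itrc++ )
    {
        float *trc = traces + itrc * tracelen;
        for( ismp = 0; ismp < globalRuntime->numsmp; ismp++ )
        {
            trc[ismp] *= 1.0f;       /* placeholder for the real per-sample work */
        }
    }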

Single Buffer Tools

Unlike ensemble tools that stop at ensemble groups or panel tools that stop at a fixed number of traces, single buffer tools stop collecting traces at some custom condition set by the programmer. For example, you may write a tool that needs five CDP ensembles in memory to implement some enhancement process to the center CDP. This center CDP will be output, and the next CDP ensemble will be read with the last four shuffled down in memory; in other words, you will create a sliding window of ensembles containing 5 CDPs except at the dataset boundaries. Generally, the condition to stop collecting and start processing in the exec routine will be dependent on some trace header condition, such as the last trace in the fifth CDP gather. For ensemble tools, this check is simply for the END_ENS flag set to 1, and is done automatically by the Executive.

For single buffer tools, a third routine in addition to the init and exec routines must be written. This routine, called the flow routine, accumulates traces in a buffer one at a time, checking each time for the execute condition. When the proper condition is met for processing and output, the NOUTPUT calling argument is set to the number of traces intended to be output from the exec routine. This signals the Executive to stop accumulating traces and to make a call to the exec phase.

Single buffer tools (and double buffer tools) require the usual init and exec routines plus a flow routine. The init phase must set ITOOLTYPE to ISNL_BUFFpz in FORTRAN and ISNL_BUFF in C. FORTRAN routines must also make a call to EX_BUFF_PARMS before leaving the init routine to set the maximum buffer size, in traces, for the input or output buffer. The analogous routine in C is exBuffParms. This call also specifies the time and space padding parameters for the input buffer (similar to panel tools).

Refer to the Single Buffer Tool Examples appendix for an exercise example of a single buffer tool. This example process resets ensemble flags whenever a selected header changes. The traces are accumulated in a buffer until this condition is met. The calling arguments for the flow routine are:

      SUBROUTINE FLOW_ENS_DEFINE( TRACES, ITHDRS, RTHDRS, NSTORED,
     &                            IFOUND_EOJ, NOUTPUT, NOVERLAP )

Input arguments

Input arguments consist of:

TRACES: the 2D array of accumulated input traces

ITHDRS: the 2D array of accumulated trace headers, equivalenced to RTHDRS

NSTORED: number of traces currently stored in TRACES

IFOUND_EOJ: flag to indicate if the EOJ trace has been encountered (1=true). Since you are accumulating traces, you may hit the end of the dataset prior to reaching your criteria for processing. In that case, you will be given the last trace again, without incrementing NSTORED, and must react accordingly.

Output arguments

Output arguments consist of:

NOUTPUT: number of traces that the tool will output from the exec subroutine. This argument must be set to a value of 1 or greater for the exec subroutine to be called.

NOVERLAP: number of input traces to include in the next buffer that is accumulated. The last NOVERLAP traces in the current buffer will be the first NOVERLAP traces in the next buffer that is accumulated in the flow routine.

The exec phase routine for a single buffer tool has the following calling arguments in FORTRAN:

      SUBROUTINE EXEC_ENS_DEFINE( TRACES, ITHDRS, RTHDRS, NTR_BUFF )

and the following arguments in C:

      void exec_ens_define( float* traces, int* ithdrs, float* rthdrs, int* ntr_buff )

This is the same as ensemble and panel tools, with the addition of the parameter NTR_BUFF. NTR_BUFF is the size, in traces, of the TRACES buffer and is set to the maximum of NSTORED or NOUTPUT. If the number to be output is greater than the number of traces stored, then the number of data traces that are actually in the buffer to be processed must be sent to the exec subroutine via the saved parameters.

Like ensemble tools, the number output can be larger (or smaller) than the number accumulated. This could get tricky with trace overlapping. Remember that NSTORED minus NOVERLAP traces will be dropped from the first portion of the buffer, and that the next NOVERLAP traces are saved at the beginning of the buffer to start the next call of the flow routine. When the flow tool is called again, the buffer will have the NOVERLAP saved traces plus the next trace coming down the pipe.
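The following is a hypothetical sketch of a flow routine in C, assuming it mirrors the FORTRAN argument list shown above with each argument passed by pointer. The fixed trace count of five and the routine name are illustrative only.

    /* Release the buffer for processing once five traces have been collected,
     * or when the end-of-job trace has been seen. */
    void flow_my_buffer_tool_( float *traces, int *ithdrs, float *rthdrs,
                               int *nstored, int *ifound_eoj,
                               int *noutput, int *noverlap )
    {
        if( *nstored >= 5 || *ifound_eoj )
        {
            *noutput  = *nstored;   /* tell the Executive to call the exec phase */
            *noverlap = 0;          /* carry no traces into the next buffer      */
        }
        else
        {
            *noutput = 0;           /* keep accumulating traces                  */
        }
    }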

Double Buffer Tools

Double buffer tools are identical to single buffer tools in all aspects except one: the exec phase has extra buffers that are managed by the Trace Executive to store the output traces and headers. This allows the programmer to fill the output buffer without changing the buffer of input traces. In other words, the Executive handles the allocating and de-allocating of one extra work buffer. This type of automatic buffering is particularly convenient for transforming data from one form to another; for example, in generating a semblance plot from an input CDP or in interpolating traces within an ensemble.

Double buffer tools require the usual init and exec routines plus a flow routine. The init phase must set ITOOLTYPE to IDBL_BUFFpz in FORTRAN and IDBL_BUFF in C, and must also make a call to EX_BUFF_PARMS (or exBuffParms) to set buffer size and padding. The flow routine for the double buffer tool is identical to the single buffer tool. The exec phase routine for a double buffer tool has the following calling arguments:

      SUBROUTINE EXEC_SEMBLANCE( TRACES_IN, ITHDRS_IN, RTHDRS_IN,
     &                           NTR_BUFF_IN, TRACES_OUT, ITHDRS_OUT,
     &                           RTHDRS_OUT, NTR_BUFF_OUT )

where TRACES_IN, ITHDRS_IN, and RTHDRS_IN are the input buffers for the traces and trace headers. NTR_BUFF_IN is the amount of buffer space, measured in numbers of traces, in the input buffer. TRACES_OUT, ITHDRS_OUT, and RTHDRS_OUT are the output buffers of traces and headers. NTR_BUFF_OUT is the amount of space available in the output buffer, measured in numbers of traces. Please refer to the Double Buffer Tool Examples appendix for an example of how to use a double buffer tool.


Complex Tools

If none of the multi-trace tool types handle your needs for trace bundling, you must finally resort to the complex tool, which offers the ultimate in multi-trace I/O flexibility. This flexibility comes at a price. You must now handle trace buffering and block your code into logical units for buffer filling and flushing. Most tasks can be accomplished without resorting to complex tools, but, if the need arises, this style of multi-trace processing is only slightly more difficult.

Please refer to the Complex Tool Examples appendix for an exercise example of a complex tool. This example process transforms a shot ensemble so that the time axis is replaced by the space axis. In other words, the shot is turned on its side. Before leaving the init phase of a complex tool, ITOOLTYPE must be set to ICOMPLEXpz in FORTRAN and ICOMPLEX in a module written in C. The exec phase calling arguments of complex tools are identical to those of simple tools:

      SUBROUTINE EXEC_TRANSFORM( TRACE, ITHDR, RTHDR )

Like simple tools, one trace is input or output from the exec routine at a time. Unlike simple tools, complex tools can declare their state to be filling (only receiving), flushing (only giving), or both (pipe: input and output in the same call, the same as a simple tool). The exec phase of complex tools must declare their state prior to exiting the routine. The options are described in the following table.

Complex Tool Options

FILL
Set by a call to EX_FILLMODE. This tells the Executive that the tool is not supplying a trace upon exit, but that it expects to receive one on the next call. Presumably you are filling a buffer of traces for future processing. Note that in the very first call to the exec routine, a trace is passed in and either kept or output depending on the mode (FILL, etc.) set before exit. One slight quirk of the system is that there is no mechanism to know if the trace you just received and stored is the last trace to be input to the flow. If the trace is the last input trace and the tool remains in fill mode, the Executive will pass the tool a dummy trace in the next call that has the trace header entry EOJ set to 1. The tool’s cleanup flag will not be set (which would terminate the tool from the flow) because the Executive knows that traces are buffered and will be released later. You are responsible for checking this flag and responding by processing the traces that have been saved in the routine and then outputting them, primarily by using FLUSH mode. The EOJ flag appears only ONCE when a tool is in FILL mode, so if there are traces to dump after the EOJ trace appears, it is a good idea to keep a saved parameter stating that the EOJ trace has been seen.

FLUSH
Set by a call to EX_FLUSHMODE. This tells the Executive that the tool is supplying (outputting) traces to the flow upon exit, one trace per call, and that it will not receive a new trace on the next call. This mode is for flushing traces from a buffer after processing.

PIPE
Set by a call to EX_PIPEMODE. This tells the Executive that a trace will be supplied to the flow upon exit, and that a trace is expected to be input on the next call. If the last valid trace has been input to a tool and it asks for another trace in pipe mode, the Executive sets the cleanup flag to TRUE, which terminates the flow for this tool. A complex tool that is always in pipe mode behaves as a simple tool; in other words, the Executive assumes that the trace being input is the same trace that is being output and that there are no traces buffered within the tool. This is not always a valid assumption; however, EX_PUSHMODE alleviates this problem. When shifting from fill to flush modes, there will be a single call to pipe mode when you are flushing the last trace in a buffer and expect to start filling the buffer again on the next call.

PUSH
Set by a call to EX_PUSHMODE. Push mode behaves exactly like pipe mode except for the important difference that the EOJ trace is input to the tool. Recall that when a tool is in pipe mode, the Executive does not pass the EOJ flag to the tool. The ability to see the EOJ flag allows the tool to flush traces that may be stored in a buffer internal to the routine. If a tool has more traces to flush after the EOJ flag has been found, it goes into flush mode until all of the traces have been dumped into the processing flow, at which point the tool sets itself to pipe mode to trip the cleanup flag. The general logic is shown in the following pseudo-code for the exec subroutine:

      if( trace header has end of job flag (EOJ) )
          set a saved variable that indicates the EOJ trace has been seen
          have_seen_EOJ_flag = TRUE
      endif
      if( have_seen_EOJ_flag is TRUE )
          move trace and header from buffer to output trace
          if( the trace being dumped is the last to dump )
              set to pipe mode to trigger cleanup mode on next call
          else
              set to flush mode to continue outputting traces
          endif
      else
          set to push mode, output this trace, accept the next input trace on next call
      endif

A very important point regarding the EOJ flag is that only ONE trace will be passed to the exec_ subroutine in which the EOJ trace header is set to 1; thus the have_seen_EOJ_flag variable is set in the preceding pseudo-code.

QUIT
Set by a call to EX_QUITMODE. This tells the Trace Executive that the exec subroutine flushed the last trace in the flow on the previous call to the exec subroutine by using EX_FLUSHMODE. EX_QUITMODE is called when the exec subroutine is called and:

- it finds that it has no more traces to pass to the trace executive,
- there are no more traces to come, and
- it is time to set the cleanup flag and have the flow terminated for this tool.

This is appropriate for the case where the entire stack line was filled, processed, and flushed. You know you are done when the last trace in the buffer is gone.
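A C rendering of the push-mode logic above is sketched below as a fragment from an exec routine. The parms members (seen_eoj, ntodump, ieoj) are hypothetical bookkeeping, and the calls that switch modes are indicated only by comments because the exact C entry points for the mode routines are not reproduced in this chapter.

    if( ithdr[ parms->ieoj ] == 1 )     /* EOJ header index found in the init phase */
        parms->seen_eoj = 1;

    if( parms->seen_eoj )
    {
        /* move one trace and its header from the internal buffer to the output */
        if( --parms->ntodump == 0 )
        {
            /* call the pipe-mode routine: trigger cleanup on the next call */
        }
        else
        {
            /* call the flush-mode routine: continue outputting buffered traces */
        }
    }
    else
    {
        /* call the push-mode routine: output this trace, accept another next call */
    }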

An input tool is a type of complex tool; by default, it is the first tool in a processing flow. An input tool gets data and feeds it into a processing flow. Examples include tools which read data from a disk or tape, or a program that generates synthetic seismograms. An input tool starts and remains in flush mode until the last available trace is given to the flow. When the tool is called again and it finds that it has no more traces to output, the mode is set to quit mode. Refer to the Input Tool Examples appendix for an example.

Another complex tool type is the iteration tool. Disk iteration means to have ProMAX read a group of data traces more than once from the disk. This facility is useful when all calculations on an entire dataset must be done before the results of those calculations can be applied. Examples include derive-and-apply processes that must see all the data to first derive surface consistent parameters (amplitude, decon, statics, etc.) and then pass through the data again for an application phase. Refer to the Disk Iteration Examples appendix for a crude example of surface consistent amplitude correction. The data must pass through the flow above the iteration tool two or more times, but this is usually a small price to pay. There are even some advantages: the first pass can optionally be enhanced (mixed, FK filtered to attenuate strong low frequency noise, etc.) for the derive phase, but not enhanced in the application phase. These tools have virtually eliminated the need for stand-alone tools that handle data traces.


Stand-alone Tools

There are certain situations to which the pipeline model of trace flow is not well suited. Modules that need to randomly access traces off the disk many times are such examples. In these cases a stand-alone tool that does its own trace I/O may be the answer. (Executive multiple iteration tools have largely eliminated the need for stand-alone tools that handle data traces. Refer to the preceding discussion in the Complex Tools section of this chapter.)

ProMAX supports stand-alone executables for data input only. These executables are launched by the Super Executive just as the Executive is. However, the menu for a stand-alone tool must supply the executable name in a special section in the exec_data portion of the menu. The tool is declared a stand-alone type to the super exec in the same section. The TOOLTYPE and PATHNAME parameters pass on this information to the super exec, for example:

      exec_data:
        ("STAND_ALONE"
          ("SPECIAL"
            ("TOOLTYPE" implicit: "STAND_ALONE")
            ("PATHNAME" implicit: "/usr/dave/poststack.exe")
          )
          ("GENERAL"
            ("LABEL" implicit: (value 'LABEL))
          )
        )

Stand-alone flows can contain only one executable and one menu associated with that executable. Please examine the stand-alone menus in the Stand-alone Program Examples appendix.

Stand-alone programs must handle much of what is done automatically in the Executive, including:

• input of the packet file and special routines for parameter retrieval
• initialization of global geometry and runtime variables
• trace input and selection

Please refer to the prestack.f code in the Stand-alone Program Examples appendix for detailed comments on each of these items.


For FORTRAN modules, a C wrapper main must be written to pass the command line arguments containing the packet file name to the FORTRAN work routine. The wrapper for the prestack example looks like:

      void main( argc, argv )
      int argc;
      char **argv;
      {
          prestack_( &argc, argv[1] );
      }

Compiling and linking stand-alone modules are somewhat different than for the Executive subroutines; the compiling of the C wrapper for FORTRAN processes is one specific example.


IPC Tools

IPC tools are a relatively new way of programming a ProMAX process. We recommend this approach for people who are starting to program in ProMAX. If you are an experienced ProMAX programmer, note that IPC tools used to be called socket tools.

While most inline flow tools reside in a giant program called the executive, IPC tools are separate programs that pass data via sockets. As traces come down the processing flow, they are sent at the appropriate place from the Executive to the IPC tool. After the processing of each individual trace or group of traces, the IPC tool sends them back into the Executive where they continue down the flow. All the communication and associated bookkeeping is hidden by input and output routines. To the programmer, this tool is compiled similar to a stand-alone program. To the user, the IPC tool looks like any ProMAX tool.

The main advantages of this method of incorporating modules into ProMAX are:

• The executable size is smaller and links are much faster and simpler.

• The issue of re-entrancy is avoided—each program has its own address space.

• The logic of a complex tool is simplified. When an IPC tool needs traces, it calls the subroutines saGetTrace, saGetEnsemble, or saGetPanel for C, and ST_GET_TRACE or ST_GET_ENSEMBLE for FORTRAN (panel I/O is not supported in FORTRAN). Whenever it wants to write traces, it calls the subroutines saPutTrace, saPutEnsemble, or saPutPanel for C, and ST_PUT_TRACE or ST_PUT_ENSEMBLE for FORTRAN.

• The toolcall source code does not have to be edited.

• An Alternative Executive does not have to be run for a new module.

These characteristics mean that it is easy to maintain IPC tools. You can update individual programs as needed. The compile and link problems of some other modules do not affect your module. IPC tools are also easy to trade since you do not need to trade the entire executive. On the downside, the IPC tool is less efficient than a normal tool because of the overhead of data transfer; however, this overhead is generally very small. And, at present, IPC tools can only run on the same machine as the executive.

The best approach for understanding IPC tools is to examine the example programs in the IPC Tool Examples appendix. The comments in the code describe the steps. The user does not have to alter the Executive or toolcall source code. The proper subset of the packet file from the menu is transferred to the IPC tool. You can have repeated instances of the same IPC tool in a flow. And, the IPC tool can access the database and use other ProMAX features, just as any normal stand-alone program can. Note, however, that all header routines must be preceded with an “st” for C (stHdrAdd instead of hdrAdd) and “ST_” for FORTRAN (ST_HDR_ADD instead of HDR_ADD). The same is true for the Executive error routines. The IPC tool is designed mainly for C programs but can be used with FORTRAN programs. Some wrappers for the C routines may need to be written.
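The overall shape of an IPC tool in C is sketched below. The saGetTrace and saPutTrace names come from the list above, but their argument lists and return values are not documented in this chapter, so the prototypes and loop convention shown here are assumptions made for illustration; consult the IPC Tool Examples appendix for the real calling sequences.

    /* Hypothetical IPC tool skeleton.  The extern declarations are assumed
     * signatures, not the documented ones; the buffer sizes are illustrative. */
    extern int  saGetTrace( float *trace, int *ithdr, float *rthdr );  /* assumed */
    extern void saPutTrace( float *trace, int *ithdr, float *rthdr );  /* assumed */

    int main( int argc, char **argv )
    {
        static float trace[8192];   /* illustrative maximum trace length  */
        static int   ithdr[512];    /* illustrative maximum header length */
        static float rthdr[512];

        /* ...packet file and IPC initialization would go here... */

        while( saGetTrace( trace, ithdr, rthdr ) )   /* assumed: 0 when no more traces */
        {
            /* process one trace in place */
            saPutTrace( trace, ithdr, rthdr );
        }
        return 0;
    }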

IPC Tool Details

The IPC tool approach is implemented with master init_stand_alone and exec_stand_alone routines that are complex tools in the Executive. The packet file passes the path of the IPC tool executable to init_stand_alone, which executes it, hooks up sockets with it, passes it the proper subset of packet files, and handles such items as passing global parameters and headers. After a signal from the IPC tool, the init_stand_alone routine returns and the exec_stand_alone starts. The exec_stand_alone passes the traces via sockets to the IPC tool. As a complex tool, it can read and write traces as the IPC tool demands them. On the IPC tool end, we have written routines that hide the IPC communication from the user.

The performance degradation using this IPC-based method is not significant. The overhead to initiate and transfer data to an IPC tool is comparable to running in-line Automatic Gain Control (AGC). Since AGC is a computationally cheap module (one or two operations per sample), the communication time may not be significant. The transfer time to a process that is running on a different machine from the rest of the flow is greater, but not unmanageable. The cost of the transfer will be less when the stand-alone process performs more computations, since traces are sent and buffered on the receiving machine while the module is computing. When the process needs the traces, they are immediately available for access.

IPC Tool Debugging

To debug an IPC-based tool, pass the DEBUG parameter in the menu a value of 2 or 3. In a normal, no-debug execution, this value should be set to either 0 or 1 (1 = verbose listing of IPC information). When DEBUG is set to 2, the Executive will not start up the IPC tool. You just start it yourself, probably under dbx. The Executive will print out the name of the packet file in /tmp that you must execute with your program (see the View button in the user interface). The command in dbx is:

      (dbx) run /tmp/pFile_287657

where /tmp/pFile_287657 is the name of the packet file printed by the Executive. Then, continue debugging your program as you normally would. When you set DEBUG to 3, the IPC tool is started but then halts waiting for you to attach the debugger to it. You give the command dbx -a 1542, where 1542 is the process ID of the IPC tool. You can get the process ID of the IPC tools from the printout (see the View button), or you can get it via the ps command. After you attach the debugger, you can continue running the program with the next or cont command. You have about 5 minutes to attach the debugger before the program aborts. With this approach, you do not have to specify the packet file name.


Global Parameters

This chapter describes ProMAX global parameters.

Topics covered in this chapter:

➲ Overview of Global Parameters
➲ Common Blocks and C Structure Descriptions


Overview of Global Parameters

ProMAX global parameters are dataset-specific values that are commonly needed within a processing tool. These values are made conveniently available through the cglobal.h file for C and the global.inc file for FORTRAN. (See the Global Include File Examples appendix for examples of global.inc and global.h files.) An example of a global parameter is the number of samples per trace, which can be accessed through the variable name NUMSMPz in FORTRAN and the structure member globalRuntime->numsmp. The global parameters are initialized at run time and are defined in the global.inc file. Notice that parameters are grouped into common blocks by similar characteristics and that they all end with a z suffix.

Common Blocks and C Structure Descriptions

The following table describes the common blocks and C structure names and how they are initialized. The names of the common block members are listed in some cases; the name of the corresponding C member parameter is obtained by dropping the trailing z on the name and changing all upper case letters to lower case. For example, the number of samples per trace in the FORTRAN GLOBAL_RUNTIMEcz common block is NUMSMPz. The analogous name in the globalRuntime structure is numsmp.

Common Blocks and C Structure Names

GLOBAL_CHARcz (FORTRAN), globalChar (C)
  These are global character parameters that are supplied to the packet file (binary flow) by the flow builder at run-time. These variables are extracted from the packet file by the Executive.

GLOBAL_RUNTIMEcz (FORTRAN), globalRuntime (C)
  These are the global run time parameters. These parameters are dataset dependent and, as such, are stored with each trace dataset. These variables are read from disk at run time by the input tool, if the input is a ProMAX dataset. If the input tool is reading a foreign dataset (non-ProMAX format) or creating traces (modeling), the tool must supply values for the following subset of the run time variables: SAMPRATz, NUMSMPz, IPSORTz, MAXDTRz, IDTYPz, IPKEYz, ISKEYz, IDOMAINz, IGEOM_MATCHz, ITRNO_VALIDz. The remaining run time variables are either set by the Executive or the flow builder: NTHz, MODEz, IOUNITz, IDATEz, INIT_ONLYz, CLEANUPz, IERRORz.

GLOBAL_GEOMcz (FORTRAN), globalGeom (C)
  These are the critical global parameters related to geometry and are stored in the LINE Ordered Database file. Prior to geometry installation, these values are set to either INULLpz or RNULLpz. These variables are read from disk at run time by the input tool. After geometry installation, most of these have valid entries.

GLOBAL_MISCcz (FORTRAN), globalMisc (C)
  These are the miscellaneous global parameters related to geometry and are stored in the LINE Ordered Database file. Prior to geometry installation, these values are set to either INULLpz or RNULLpz. These variables are read from disk at run time by the input tool. After geometry installation, most of these have valid entries.

GLOBAL_COORDcz (FORTRAN), globalCoord (C)
  These are the global parameters related to coordinates and are stored in the LINE Ordered Database file. Prior to geometry installation, these values are set to either INULLpz or RNULLpz. These variables are read from disk at run time by the input tool. After geometry installation, most of these have valid entries.

GLOBAL_AQUIScz (FORTRAN), globalAcquis (C)
  These are the global variables related to acquisition and are stored in the LINE Ordered Database file. Prior to geometry installation, these values are set to either INULLpz or RNULLpz. These variables are read from disk at run time by the input tool. In practice, few of these parameters are ever assigned non-null values.

In addition to the common block definitions, global.inc also defines a set of named global constants. These constants end in pz rather than z and are used instead of arbitrary numbers (like 7) primarily for code readability. For example, the primary sort flag may be set to IOFFSETpz instead of 4 for offset-sorted data. Beware that this is accomplished via substitution by the C pre-processor, so adhere to the upper/lower case mix. For C coding, the COMMON blocks are replaced by equivalent C structures and the same set of global constants is defined without the pz extension. These definitions are performed in the cglobal.h header file located in $PROMAX_HOME/port/include.
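
As a small illustration of the constant naming, a hedged C fragment (the wrapper function is hypothetical; ipsort and IOFFSET follow from IPSORTz and IOFFSETpz by the naming rules above):

/* Hedged sketch: test the primary sort flag against a C global
 * constant (defined without the pz extension in cglobal.h). */
#include "cglobal.h"

static int input_is_offset_sorted(void)
{
    return globalRuntime->ipsort == IOFFSET;   /* offset-sorted data */
}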

Ordered Parameter Files

This chapter describes the ProMAX database, which consists of a set of ordered parameter files that are used to store information. See the Ordered Parameter File Examples appendix for code examples.

Topics covered in this chapter:

➲ Overview of the ProMAX Database
➲ Standard Orders

Overview of the ProMAX Database

The ProMAX Database consists of a set of ordered parameter files used to store information for each seismic line in structured categories representing unique sets of information. There are nine standard Orders that are created when the geometry is loaded:

• LIN (line)
• SIN (source index number)
• SRF (surface location)
• CDP (Common Depth Points)
• CHN (channel)
• TRC (trace)
• OFB (offset bin)
• ILN (3D inline number)
• XLN (3D crossline number)

Information stored in these Ordered Parameter Files is considered common to all datasets in the line (or survey) directory level. Information that is dataset dependent should be stored in trace headers. See the discussion in the Trace Headers chapter.

In each Order there are N slots available for storage of information, where N is the number of elements in the Order (number of sources, number of surface locations, number of CDPs, etc.). Each slot contains various parameters (in various formats) for one particular element of the Order. If values for a particular parameter are not stored in all of the available slots, the empty slots contain the proper null value for that parameter. The following figure shows a sketch of an ordered parameter file. The order has N elements (columns) and five parameters
(rows). Values would be stored in each of the boxes (intersection of a row and column).

An ordered parameter file with N elements and 5 parameters (ORDER ELEMENTS 1 through N across, PARM 1 through PARM 5 down)

Collectively, the Ordered Parameter Files are used to store large classes of data, including the acquisition parameters, the geometry, statics (and other surface-consistent information), and pointers between the source, receiver, and CDP domains. The design of the Orders is tailored for seismic data and provides a compact format without duplication of information. Of particular interest to programmers is the cross-referencing between domains, which reduces the book-keeping in programs such as residual statics.

The ordered parameter files serve four main purposes:

• They serve as a geometry database to Executive tools, stand-alone processes, and to the flow builder. Menu parameters can be defaulted to parameters in the line (LIN) Order; examples include minimum CDP number or nominal CDP spacing. Surface-consistent processing modules can access these files to determine buffer sizes and the surface relation of shots and receivers, etc.

• They provide general inter-tool communication of information that is global to all datasets in the line or survey; first break or residual statics picks are typical examples. In ProMAX, in-line tools make these time picks and place the picks in the trace Order. Another in-line or stand-alone tool reduces these picks to shot and receiver statics that are stored, respectively, as parameters in the SIN and SRF Orders. Once in the database, these final shot and receiver static parameters can be applied to any of the line's datasets. As with trace headers, user-defined Ordered parameters can be of any length or format.

• They allow domain mapping between the CDP, SIN, SRF, and TRC Orders. When geometry is installed, these domains are cross-referenced for fast access. Two convenience routines exist that utilize this cross-referencing: DB_ENSEMBLE_MAP and DB_TRACE_MAP. For example, when using DB_ENSEMBLE_MAP, from a particular CDP, the routine can return the receiver static of the nearest station or the uphole time of the nearest shot. The analogous routines in C are dbEnsembleMap and dbTraceMap.

• They provide quick sorting of traces off disk. Each trace dataset has a mapping file which maps the relative position of each trace on disk to the original geometry trace number. For each of the common ensemble sorts (CDP, shot, receiver, offset bin, 3D inline, and 3D crossline), a row in the TRC Ordered Parameter file serves as a look-up table containing these original trace numbers sorted by the associated ensemble. A corresponding starting address look-up row (SLOOKUP) in each of the ensemble Order files points to the start of each ensemble group in the corresponding TRC look-up row. For example, assume you want to read CDP 437 off a disk dataset of unknown sort order. Disk Data Input will use the column associated with CDP 437 in the CDP Order to determine the CDP fold (we will use 30 for this example) and the starting position (SLOOKUP) in the TRC look-up row for CDP (we will use 4201). At position 4201 in the CDP look-up row of the TRC Order is the original trace number for the first trace in CDP 437, and the next consecutive 29 positions contain the remaining trace numbers for CDP 437. From this set of trace numbers, the dataset map file can be used to directly
determine the disk positions of the CDP traces. This appears complicated, but it suffices to say that Orders facilitate quick sorts of data from disk. The quick sort from disk using the MAP file, as described in the preceding paragraph, depends on the trace numbers (in the trace headers) being valid; that is, they are unique and associated with a column in the TRC Order. If this is not the case, the dataset will be flagged as such (run-time variable ITRNO_VALIDz=0), and the sort will be performed using trace headers.
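
As an illustration of the look-up arithmetic just described, here is a hedged C sketch of the CDP 437 example. The array names (cdp_fold, cdp_slookup, trc_cdp_lookup, dataset_map) are hypothetical stand-ins for the CDP and TRC Order rows and the dataset map file; only the arithmetic follows the text.

/* Hedged sketch: list the disk positions of all traces in one CDP
 * using the SLOOKUP/look-up-row scheme described above. */
#include <stdio.h>

void list_disk_positions(int cdp,
                         const int *cdp_fold,        /* FOLD row of CDP Order      */
                         const int *cdp_slookup,     /* SLOOKUP row of CDP Order   */
                         const int *trc_cdp_lookup,  /* CDP look-up row, TRC Order */
                         const int *dataset_map)     /* trace number -> disk slot  */
{
    int fold  = cdp_fold[cdp];      /* e.g. 30 for CDP 437   */
    int start = cdp_slookup[cdp];   /* e.g. 4201 for CDP 437 */

    for (int i = 0; i < fold; i++) {
        int traceno = trc_cdp_lookup[start + i];   /* original trace number */
        printf("trace %d is at disk position %d\n",
               traceno, dataset_map[traceno]);
    }
}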

Standard Orders

The following table describes the nine standard orders.

LIN Order
  The LIN Order contains information that is unique to the LINE, such as:
  Geometry:
    EXDATUMz - Final datum elevation (if not variable)
    IUNITSz - Type of units
    ISRCTYPz - Source type
    NSHOTSz - Total number of live shots
    NUMCDPz - Total number of CDPs
    Etc.

TRC Order
  The TRC Order contains information that is unique to each trace, such as:
  Geometry:
    SIN - Shot index numbers of every trace
    SRF - Receiver surface location number of every trace
    CDP - CDP bin number of every trace
    CHN - Channel number of every trace
    OFFSET - Source-receiver offset of every trace
    SIN_LTBL - SIN look-up table
    SRF_LTBL - Receiver surface location look-up table
    OFB_LTBL - OFB look-up table
    CDP_LTBL - CDP look-up table
    ILN_LTBL - ILN look-up table
    XLN_LTBL - XLN look-up table
  User parameters:
    First break pick times
    Trim statics
    Source-receiver offsets
    Etc.

SIN Order
  The SIN Order contains information that is unique to each source, such as:
  Geometry:
    NCHANS - Number of channels in each source
    X_COORD - X coordinate of every source
    Y_COORD - Y coordinate of every source
    ELEV - Elevation of every source
    SRF - Nearest surface location number of every source
    SOURCE - Live source number of every source
    PATTERN - Pattern number of every source
    PATT_REF - Pattern reference surface location of every source
    DEPTH - Hole depth of every source
    UPHOLE - Uphole time of every source
    SLOOKUP - Starting address in SIN look-up table for every source
  User parameters:
    Source statics
    Etc.
  Source index numbers are unique sequential numbers that are assigned to all of the records that exist in the dataset that is used to initialize the database. They include records that are not live sources. Live sources are therefore a subset of shot index numbers and not a suitable order.

SRF Order
  The SRF Order contains information that is unique to each surface location, such as:
  Geometry:
    FOLD - Receiver fold of every surface location
    X_COORD - X coordinate of every surface location
    Y_COORD - Y coordinate of every surface location
    ELEV - Elevation of every surface location
    SLOOKUP - Starting address in SRF look-up table for every surface location
  Plus user parameters:
    Surface location statics
    Etc.
  Surface locations are surveyed positions where either a shot or receiver may be placed. For 2D data, surface locations are generally equivalent to stations. However, 3D data generally have separate locations for shots and receivers, resulting in station locations being the super-set of all shot and receiver positions. Receiver locations are therefore a subset and not a suitable order.

CDP Order
  The CDP Order contains information that is unique to each CDP bin, such as:
  Geometry:
    FOLD - Fold of every CDP
    X_COORD - X coordinate of every CDP
    Y_COORD - Y coordinate of every CDP
    ELEV - Elevation of every CDP
    SRF - Nearest surface location to every CDP
    SLOOKUP - Starting address in CDP look-up table for every CDP
  Plus user parameters:
    CDP statics
    Etc.

ILN Order
  The ILN Order contains information that is unique to each ILN bin, such as:
  Geometry:
    FOLD - Fold of every ILN
    SLOOKUP - Starting address in ILN look-up table for every ILN

XLN Order
  The XLN Order contains information that is unique to each XLN bin, such as:
  Geometry:
    FOLD - Fold of every XLN
    SLOOKUP - Starting address in XLN look-up table for every XLN

OFB Order
  The OFB Order contains information that is unique to each OFB bin, such as:
  Geometry:
    OFB_CNTR - Offset bin center
    MEAN_OFF - Mean offset within the bin
    FOLD - Fold of every OFB
    SLOOKUP - Starting address in OFB look-up table for every OFB
  Plus user parameters:
    Offset amplitude adjustment
    Etc.

CHN Order
  The CHN Order contains information that is unique to each channel, such as:
    Channel gain constants
    Channel statics (if appropriate)
    Etc.

Trace Headers

This chapter describes trace headers, the primary form of inter-tool communication within the ProMAX system.

Topics covered in this chapter:

➲ Overview of Trace Headers
➲ Definition and Usage of Standard Header Entries

Overview of Trace Headers

Trace headers are the primary form of inter-tool communication within the ProMAX system. They are typically used to communicate information which is unique to individual traces of the current dataset. Global variables, another form of inter-tool communication, are typically used to communicate information that is shared by all traces of the current dataset. The Ordered Database, a third form of inter-tool communication, is typically used to communicate information that is global to all datasets within the line (unless the datasets are flagged as not matching the Ordered Database). Convenience is a governing factor in the form of communication that is used. For example, geometry information that is placed in the parameter database can be conveniently accessed by stand-alone programs that do not wish to read trace data.

Trace headers within ProMAX are very flexible. Individual entries may have any length, order, or format. This has proven to be a tremendous benefit in accommodating new ideas as the system evolves. (We prefer to use the term trace header entry rather than trace header word to avoid the notion of fixed length.)

The concept of an ensemble within the Executive is facilitated by trace headers. An ensemble is defined as any collection of traces which (normally) share the same primary sort key value. Typical examples are shot records, CDP gathers, and common-receiver gathers. All of the traces within an ensemble have the header entry END_ENS set to false (NLASTpz in FORTRAN and NLAST in C), except the last trace, where it is set to true (LASTTRpz in FORTRAN and LASTTR in C). The global variable MAXDTRz (maxdtr in C) is the maximum number of traces per ensemble, which must be greater than the number of contiguous traces that have END_ENS set to NLASTpz, or else the trace executive will stop the flow with an error.

You may be surprised to learn that when a flow is initially executed, nothing exists in the trace header. All trace header entries are created one-at-a-time during the initialization phase by calls to the routine HDR_ADD (hdrAdd in C) or HDR_STD_ADD (hdrAddStd in C). This is done primarily by the input tool, which has special responsibility for creating all of the header entries that existed when a dataset was written out
in a prior flow. But any tool can create new trace header entries. The decision to create a new header entry is left entirely in the hands of the applications programmers whenever some piece of information should be passed downstream in the processing flow to other tools.

Trace headers are passed from tool to tool in an array, but the format of the individual entries can vary. Currently supported formats are integer, real, double precision real (seldom used), logical (seldom used), and character (stuffed into integers). Each header entry can actually be an array of values, although the length must be a whole number of words. Complicated entities such as C structures can also be included in the trace headers. For convenience, the trace header is presented in the calling arguments of the execution routine of every tool in both integer AND real format to avoid the need for juggling the formats of common trace header entries (the integer and real arrays are equivalent within the toolcall routine and therefore occupy the same memory).

Any tool may access the trace header entries that were added by tools that occur earlier in the flow, but first the tool must know where to find the entries and what their lengths and formats are. This information is provided by the routine HDR_NAMINFO (or hdrInfo in C), which returns the description, length, format, and index of a trace header entry that was specified by name. The routine HDR_INDINFO (or hdrIndexInfo in C) returns the name, description, length, and format of a trace header entry that was specified by index.

For programmers to communicate via trace headers, there must be some agreement on what the names and definitions of headers are. A standard list can be found in the file $PROMAX_PORT_MISC/header.list. It is important to note that this is not the list of headers that WILL exist during any given flow, it is just a list of what certain headers will look like if they DO exist. It is also important to remind yourself that the order of the headers may be different between any two flows and has nothing to do with the order that they appear in header.list. The strict definition and use of all of the standard header entries is presented at the end of this chapter.

There is a subset of the standard trace header entries known as the guaranteed headers that are always guaranteed to exist in a
flow (although their positions may vary). The names, descriptions, and initial values of these headers are as follows:

Guaranteed Headers

Name       Description                         Initial Values
SEQNO      Sequence number in ensemble.        (variable)
END_ENS    End-of-ensemble flag.               (variable)
EOJ        End-of-job flag.                    (variable)
TRACENO    Trace number in seismic line.       (INULLpz)
TRC_TYPE   Trace type (data, aux, etc.).       (ILIVEpz)
TLIVE_S    Start time of live samples.         (0.0)
TFULL_S    Start time of full samples.         (0.0)
TFULL_E    End time of full samples.           (end of trace)
TLIVE_E    End time of live samples.           (end of trace)
LEN_SURG   Length of surgical mute taper.      (0.0)
TOT_STAT   Total static for this trace.        (0.0)
NA_STAT    Portion of static not applied.      (0.0)
AMP_NORM   Amplitude normalization factor.     (1.0)
TR_FOLD    Current trace fold.                 (1.0)
SKEWSTAT   Multiplex skew static correction.   (0.0)
LINE_NO    Line number (hashed line name)*.    (hashed line name)
LSEG_END   Line segment end*.                  (NLASTpz)
LSEG_SEQ   Line segment sequence number*.      (1)

The initial values are the values to which the headers are initialized before some tool in the flow changes them. The initial values for other (non-guaranteed) header entries must be explicitly set in the execution phase by the tool that created them. If the tool does not supply values, the Executive will insert system-wide null values (INULLpz or RNULLpz in FORTRAN and INULL and RNULL in C). This does not necessarily indicate that a bug is present. For example, if new header entries are created in one side of a branched flow, the header entries will be defined in the other side of the flow, but their values will be null.

In most cases, tools call HDR_NAMINFO (hdrIndex in C) simply to find out where a standard header entry can be found
during the current flow. Since this use is widespread, the indices of the standard headers are placed in a C structure or FORTRAN COMMON block for easy access. Use of any of these functions requires inclusion of header.inc or cglobal.h. C programmers should note that the array index values provided for the standard headers in cglobal.h point to the same place in memory as the FORTRAN common block and that the array index values are appropriate for FORTRAN. A value of 1 must therefore be subtracted from a standard header index value to be correctly used in a ProMAX module written in C. The following macro is defined in cpromax.h to aid in making code clear when using standard headers in C:

#define STDHDR(x) ( (stdHdr->x)-1)

in which "x" is a member of the StdHdr structure defined in cglobal.h, such as icdp.

Headers can be deleted from the flow via a two-step process in the Executive. In the init phase, the header entry is removed using EX_HDR_DELETE, but no traces are processed until the exec phase. HDR_DELETE_UPDATE actually deletes the trace header in the exec routine. The use of the trace header routines is not restricted to tools that are linked into the exec. Stand-alone programs can use them in exactly the same way.
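
As an illustration of the STDHDR macro described above, here is a minimal hedged sketch. Only STDHDR and the stdHdr member icdp come from this chapter; the header-array argument and the wrapper function are hypothetical.

/* Hedged sketch: index the integer trace header array with a
 * standard header index. stdHdr indices are FORTRAN-style (1-based);
 * STDHDR subtracts 1 for C. */
#include "cpromax.h"
#include "cglobal.h"

static int cdp_of_trace(const int *ihead)   /* integer view of trace header */
{
    return ihead[STDHDR(icdp)];
}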

Definition and Usage of Standard Header Entries

Header entries adhere to the following general rules:

• All standard header entries are one word in length.
• Some header entries should not be modified by the user (except indirectly); they contain an asterisk in the description. Think of these as read-only.
• All times are in milliseconds.
• All elevations are relative to sea level and should be negative for headers such as the source elevation in marine shooting.
• It is the convention within ProMAX that a negative static shifts data up (towards time 0.0), and a positive static shifts data down (away from time 0.0).
• Other standard header entries may be added in future releases.

Alphabetical Reference of Trace Header Entries

The following table lists all currently defined trace header entries in alphabetical order. Each entry is followed by its sequential number.

Standard Trace Header Entries

AMP_NORM (13)    FB_PICK (64)     OFFSET (41)      SOU_H2OD (37)
AOFFSET (42)     FFID (19)        PR_STAT (62)     SOU_SLOC (25)
                 FK_WAVEL (71)    PS_STAT (61)     SOU_STAT (58)
                 FK_WAVEN (70)    REC_DEP (48)     SOU_X (31)
CDP (22)         FNL_STAT (55)    REC_ELEV (30)    SOU_Y (32)
CDP_ELEV (40)    FRN_TRNO (8)     REC_H2OD (36)
CDP_NFLD (23)    FT_FREQ (72)     REC_NFLD (26)    SR_AZIM (49)
CDP_SLOC (27)    GEO_COMP (45)    REC_SLOC (24)    TFULL_E (67)
CDP_X (38)       ILINE_NO (50)    REC_STAT (57)    TFULL_S (66)
CDP_Y (39)       LAST_TRC (4)     REC_X (28)       TLIVE_E (68)
CHAN (20)        LEN_SURG (69)    REC_Y (29)       TLIVE_S (65)
CR_STAT (60)     LINE_NO (7)      REPEAT (6)       TOT_STAT (53)
CS_STAT (59)     LSEG_END (9)     SEQNO (2)        TRACENO (5)
DEPTH (34)       LSEG_SEQ (10)    SEQ_DISK (17)    TRC_TYPE (12)
DISKITER (15)    NA_STAT (52)     SIN (11)         TRIMSTAT (63)
DMOOFF (73)      NCHANS (21)      SKEWSTAT (56)    TR_FOLD (14)
DS_SEQNO (16)    NMO_STAT (54)    SOURCE (18)      UPHOLE (35)
END_ENS (1)      OFB_CNTR (44)    SOU_COMP (46)    XLINE_NO (51)
EOJ (3)          OFB_NO (43)      SOU_ELEV (33)

Sequential Reference of Trace Header Entries

The tables on the following pages provide detailed descriptions of the trace header entries in sequential order. The first table describes system-related headers.

System-Related Headers

1. END_ENS - End-of-ensemble flag* (integer)
   An ensemble is defined within the exec as any collection of traces which (normally) share the same primary sort key. Typical examples are shot records, CDP gathers, and common-receiver gathers. All of the traces within an ensemble have END_ENS set to false (NLASTpz), except the last trace, where it is set to true (LASTTRpz). The global variable MAXDTRz is the maximum number of traces per ensemble, which must concur with the number of contiguous traces that have END_ENS set to NLASTpz.

2. SEQNO - Sequence number in ensemble (integer)
   The sequence number of an individual trace within the current ensemble.

3. EOJ - End of job flag* (integer)
   The header entry EOJ is set to false (NLASTpz) for every trace in a flow, except under certain circumstances when complex tools will receive a dummy trace (and trace header) with EOJ set to true (LASTTRpz). The dummy trace (and trace header) should not be returned by the complex tool.

4. TRACENO - Trace number in seismic line* (integer)
   An internal reference number that is assigned by the system when the database is initialized. It represents the unique sequential trace number within the dataset that is used to initialize the database. TRACENO is undefined (and set to INULLpz) after stack. The value of TRACENO in the headers can be used to reference the TRC ordered parameter file in the parameter database if the global variable ITRNO_VALIDz is true (=1).

5. REPEAT - REPEATED data copy number (integer)
   Set by the Reproduce Traces processing directive to reflect the copy number when ensembles or all data is repeated. (Subsequent processing frequently keys on the REPEAT copy number for purposes of comparison.) The Reproduce Traces directive creates and sets the header REPEAT_T when traces are repeated on an individual basis.

6. LINE_NO - Line number (hashed line name)* (integer)
   Represents the hashed line (or survey) name to uniquely identify lines. (Individual inlines and crosslines of a 3D survey are part of the same survey and would have the same LINE_NO.) LINE_NO is used in operations that involve processing or storage of data from multiple lines. LINE_NO is over-ridden by the GeoQuest IES Input tool to reflect the hashed line name of the IES line. In that case, each version number of each inline or crossline receives a different LINE_NO.

7. FRN_TRNO - Foreign trace number within line (integer)
   Used to store trace numbers from non-ProMAX sources (such as the GeoQuest IES System) to distinguish between traces with a common LINE_NO.

8. LSEG_END - Line segment end* (integer)
   A line segment is defined as a portion of the data in a flow (including all of the data) that should be processed as a continuous piece. For example, if two different lines were read into the same flow and a runmix were applied, LSEG_END would be used to ensure that data was not smeared from one line into the next. LSEG_END is set to false (NLASTpz) for all traces except the last trace in a segment, where it is set to true (LASTTRpz). When multiple lines are imported via the GeoQuest IES Input tool, the LSEG_END entry is set to true for the last trace of each inline, crossline, time slice, reconstruction cut, or 2D line.

9. LSEG_SEQ - Line segment sequence number* (integer)
   The sequence number of the current line segment in a processing flow. See the discussion of LSEG_END.

10. SIN - Source index number (internal)* (integer)
    An internal reference number that is assigned by the system when the database is initialized. It represents the unique sequential source number (including test records, bad shots, etc.) within the dataset that is used to initialize the database. The value of SIN in the headers can be used to reference the SIN ordered parameter file in the parameter database if the global variable IGEOM_MATCHz is true (=1).

11. TRC_TYPE - Trace type (data, aux, etc.) (integer)
    Used most commonly to distinguish between live traces, dead traces, and aux traces. Other valid trace types include dummy, time break, uphole, sweep, timing, water break, and unknown (other).

12. AMP_NORM - Amplitude normalization factor (real)
    Defined as the average amplitude normalization that has been applied to a trace. For example, if a time-variant gain increased the amplitude of each sample of a trace by an average factor of 2.0, then it should also multiply the value of AMP_NORM by 2.0.

13. TR_FOLD - Actual trace fold (real)
    The number of traces that were summed to form the current trace. TR_FOLD should reflect the actual number of traces that were summed, even if the theoretical fold is different (in a process such as a weighted stack). Note that TR_FOLD is NOT an integer.

The following table describes input-related headers.

Input-Related Headers

14. DISKITER - Disk Data Input iteration* (integer)
    If Disk Data Input (or Tape Data Input) reads through the data multiple times (for the benefit of tools such as autostatics), DISKITER will reflect the current iteration number.

15. DS_SEQNO - Input dataset sequence number* (integer)
    If Disk Data Input or Tape Data Input are reading multiple datasets, DS_SEQNO reflects from which dataset a trace was originally read.

16. SEQ_DISK - Trace sequence number from disk (integer)
    Reflects the actual sequence number that a trace was read from tape or disk, if the input tool was Disk Data Input or Tape Data Input. SEQ_DISK is unique for every input trace, even in the case of multiple iterations or multiple datasets. For example, if a dataset containing 10 traces was read twice, on the second iteration the first trace would have SEQ_DISK equal to 11.

The following table describes geometry-related headers.

Geometry-Related Headers

17. SOURCE - Live source number (user-defined) (integer)
    Assigned by the user when the geometry is written and installed, and represents a number that the user wishes to use to refer to a given record. SINs that are not assigned a SOURCE number are not part of the live data and have SOURCE set equal to INULLpz.

18. FFID - Field file ID number (integer)
    The FFID of each SIN is extracted from the headers of the field data and stored in the parameter database for user reference. It is largely unused because it (unfortunately) cannot be assumed to be unique.

19. CHAN - Recording channel number (integer)
    The CHAN of each trace is extracted from the headers of the field data. CHAN is one of the few headers for which the values from the field data are trusted (although they can be corrected if in error). CHAN numbers can be used to reference the CHN ordered parameter file in the parameter database if the global parameter IGEOM_MATCHz is true (=1).

20. NCHANS - Number of channels of source (integer)
    The number of channels found within the SIN to which the current trace belongs. This is a seldom-used header entry.

21. CDP - CDP bin number (integer)
    Also known as the CMP, this is assigned by whatever method the user chooses to do binning. Note that the increment between CDP numbers is allowed to be greater than one. CDP numbers can be used to reference the CDP ordered parameter file in the parameter database if the global variable IGEOM_MATCHz is true (=1).

22. CDP_NFLD - Number of traces in CDP bin (integer)
    The fold of the CDP to which the trace belongs (as defined by the global geometry for the line). This is a seldom-used header entry.

23. REC_SLOC - Receiver index number (internal)* (integer)
    An internal reference number that is assigned when the database is initialized. It represents the unique sequential receiver number within the dataset that is used to initialize the database. The value of REC_SLOC in the headers can be used to reference the SRF ordered parameter file in the parameter database if the global variable IGEOM_MATCHz is true (=1).

24. SOU_SLOC - External source location number (integer)
    Typically the field station number assigned to sources. It may or may not be on a similar station grid as the receivers. See SRF_SLOC.

25. SRF_SLOC - External receiver location number (integer)
    Typically the field station number assigned to receiver positions. It may or may not be on a similar station grid as the sources. See SOU_SLOC.

26. REC_NFLD - Receiver fold (integer)
    The fold of the receiver gather at which the current trace was recorded. This is a seldom-used header entry.

27. CDP_SLOC - External CDP location number (integer)
    Typically the field station number assigned to CDP locations. It may or may not be on a similar station grid as the sources and receivers.

28. REC_X - Receiver X coordinate (real)
    Always the actual coordinates, not the receiver surfloc coordinates (if they are not coincident, as in the case of marine 3D). All coordinates are relative to reference coordinates in the parameter database (XREFz and YREFz).

29. REC_Y - Receiver Y coordinate (real)
    See the discussion of coordinates under REC_X.

30. REC_ELEV - Receiver elevation (real)
    Should reflect the actual receiver elevation and NOT the elevation of the surface at the receiver X,Y (if the receiver is in a borehole). For marine shooting, the REC_ELEV should reflect the water depth of the cable (and should be a negative number). See further discussion of coordinates under REC_X.

31. SOU_X - Source X coordinate (real)
    Always the actual coordinates, not the coordinates of the nearest surfloc. All coordinates are relative to reference coordinates in the parameter database (XREFz and YREFz).

32. SOU_Y - Source Y coordinate (real)
    See the discussion of coordinates under SOU_X.

33. SOU_ELEV - Source elevation (real)
    The elevation of the SURFACE at the X,Y of the source (and does NOT reflect a hole depth; see DEPTH), except that for marine shooting SOU_ELEV should reflect the water depth of the source (and should be a negative number). See further discussion of coordinates under SOU_X.

34. DEPTH - Source depth (real)
    The HOLE depth of the source. It should be defined as 0.0 if no hole exists. DEPTH should be 0.0 for marine shooting, regardless of the water depth of the source.

35. UPHOLE - Source uphole time (real)
    The uphole time observed for a buried source.

36. REC_H2OD - Water depth at receiver (real)
    In marine shooting, the water depth at the X,Y of the receiver (not routinely assigned).

37. SOU_H2OD - Water depth at source (real)
    In marine shooting, the water depth at the X,Y of the source (not routinely assigned).

38. CDP_X - X coordinate of CDP (real)
    The bin center (not the center of mass of the contributing traces). All coordinates are relative to reference coordinates in the parameter database (XREFz and YREFz).

39. CDP_Y - Y coordinate of CDP (real)
    See the discussion of CDP_X.

40. CDP_ELEV - Elevation of CDP (real)
    Elevations of CDPs are problematic because they are not specified and may be difficult to interpolate. Historically, the elevation of each CDP was taken as the elevation of the nearest surfloc. The elevations of CDPs are interpolated from the elevations of surflocs.

41. OFFSET - Signed source-receiver offset (real)
    The separation between source and receiver for a given trace. If the source is at a lower surfloc than the receiver then the offset is positive, otherwise the offset is negative (the convention is the same for 3D, although the sign of OFFSET has little meaning).

42. AOFFSET - Absolute value of offset (real)
    Literally abs(OFFSET). AOFFSET is a convenience for users for specifying parameters in some situations.

43. OFB_NO - Offset bin number (integer)
    The sequential bin number of offset bins that are created during geometry installation. OFB_NO can be used to reference the OFB ordered parameter file in the parameter database if the global variable IGEOM_MATCHz is true (=1).

44. OFB_CNTR - Offset bin center (real)
    The offset of the center of an offset bin. See the discussion of OFB_NO.

The following table describes special geometry-related headers for VSP/Crosshole, 3D, and multi-component recording.

Special Geometry-Related Headers

45. GEO_COMP - Geophone component (x,y,z) (integer)
    For multi-component recording, GEO_COMP is set to 1 for the vertical component of motion, 2 for the east-west (or inline for 3D) component of motion, and 3 for the north-south (or crossline for 3D) component of motion.

46. SOU_COMP - Source component (x,y,z) (integer)
    For multi-component shooting, SOU_COMP is set to 1 for the vertical component of motion, 2 for the east-west (or inline for 3D) component of motion, and 3 for the north-south (or crossline for 3D) component of motion.

47. SRC_DEP - Source depth below surface (real)
    For borehole sources, SRC_DEP is the depth of the source below the surface.

48. REC_DEP - Receiver depth below surface (real)
    For borehole receivers, REC_DEP is the depth of the receiver below the surface.

49. SR_AZIM - Source to receiver azimuth (real)
    In 3D surveys, SR_AZIM is defined as the azimuth FROM the source to the receiver, clockwise, where 0.0 is north (or the Y axis of the survey coordinate system). The units are radians.

50. ILINE_NO - 3D inline number (integer)
    In 3D surveys, ILINE_NO is defined as the number of the common inline to which a trace belongs, where inlines and crosslines are defined by the CDP binning, but the inline direction is the direction of source motion (if applicable). For example, in a marine 3D survey, a single sail line with a single streamer would tend to produce a set of traces with a common ILINE_NO but varying XLINE_NO.

51. XLINE_NO - 3D crossline number (integer)
    In 3D surveys, XLINE_NO is defined as the number of the common crossline to which a trace belongs. See the discussion of ILINE_NO.

The following table describes statics-related headers.

Statics-Related Headers

52. NA_STAT - Portion of static not applied* (real)
    Statics are normally not fully applied within ProMAX, except to the nearest whole sample interval (for efficiency, and to avoid the effects of multiple interpolations). NA_STAT represents the fractional sample portion of the static that is not applied. NA_STAT is applied during NMO in normal processing, but must also be applied by any application that is sensitive to small static shifts (such as velocity analysis). The Apply Fraction Statics tool can also be used for this purpose, either by users or in a menu macro process.

53. TOT_STAT - Total static for this trace* (real)
    Represents the total static that SHOULD be applied to the trace (minus NA_STAT). TOT_STAT can be used to remove previously applied statics.

54. NMO_STAT - NMO datum static (do not apply) (real)
    The static that WAS applied to move from NO datum (the original data) to the NMO datum.

55. FNL_STAT - Static to move to final datum (real)
    The static that will be (or was) applied to move from the NMO datum to the final datum. This is typically applied after CDP stack by the stacking tool.

56. SKEWSTAT - Multiplex skew static (real)
    The multiplex skew static (channel static due to differential delays in older recording systems) that is assigned during input of the field data.

57. REC_STAT - Total static for receiver (real)
    The portion of TOT_STAT that is attributed to the receiver (from elevation statics).

58. SOU_STAT - Total static for source (real)
    The total portion of TOT_STAT that is attributed to the source (from elevation statics).

59. CS_STAT - Corr. autostatics source static (real)
    The source static that was computed by a correlation autostatics program. It exists for convenience during parameterization of the process that applies autostatics solutions to the data. CS_STAT was dropped for Releases 5.0+ (see AS_STAT).

60. CR_STAT - Corr. autostatics recvr static (real)
    The receiver static that was computed by a correlation autostatics program. See also CS_STAT. CR_STAT was dropped for Releases 5.0+ (see AR_STAT).

61. PS_STAT - Power autostatics source static (real)
    The source static that was computed by a power autostatics program. It exists for convenience during parameterization of the process that applies autostatics solutions to the data. PS_STAT was dropped for Releases 5.0+ (see AS_STAT).

62. PR_STAT - Power autostatics recvr static (real)
    The receiver static that was computed by a power autostatics program. See also PS_STAT. PR_STAT was dropped for Releases 5.0+ (see AR_STAT).

63. TRIMSTAT - Trim static (real)
    The static that was computed by a trim statics (non-surface consistent) program.

64. FB_PICK - First break pick time (real)
    The first break pick time of the trace.

The following table describes mute-related headers.

Mute-Related Headers

65. TLIVE_S - Start time of live samples (real)
    The time of the first live (non-zero) sample, if a top mute was applied. The difference between TFULL_S and TLIVE_S indicates the taper length of a top mute, if one was applied. The value of TLIVE_S does NOT enter the equation for conversion from sample number to time, or vice versa (it is NOT a recording delay time).

66. TFULL_S - Start time of full samples (real)
    The time of the first full untapered sample, whose amplitude is totally unaffected by a top mute if one was applied. The difference between TFULL_S and TLIVE_S indicates the taper length of a top mute, if one was applied.

67. TFULL_E - End time of full samples (real)
    The time of the last full untapered sample, whose amplitude is totally unaffected by a bottom mute if one was applied. The difference between TLIVE_E and TFULL_E indicates the taper length of a bottom mute, if one was applied.

68. TLIVE_E - End time of live samples (real)
    The time of the last live (non-zero) sample, if a bottom mute was applied. The difference between TLIVE_E and TFULL_E indicates the taper length of a bottom mute, if one was applied.

69. LEN_SURG - Length of surgical mute taper (real)
    The length of the taper of a surgical mute if one was applied. If multiple surgical mutes were applied with different taper lengths (which is discouraged), LEN_SURG will represent the last applied.

The following table describes special applications-related headers.

Special Applications-Related Headers

70. FK_WAVEN - Wavenumber of F-K domain trace (real)
    The common wavenumber of a series of amplitudes (a trace) after transformation to the F-K domain.

71. FK_WAVEL - Wavelength of F-K domain trace (real)
    The common wavelength of a series of amplitudes (a trace) after transformation to the F-K domain.

72. FT_FREQ - Frequency of F-T domain trace (real)
    The common frequency of a series of amplitudes (a trace) after conversion to the F-T domain.

73. DMOOFF - Offset bin for DMO (real)
    The offset of a common offset bin that is created for purposes of common offset DMO.

The following table describes standard headers.

Standard Headers

74. AS_STAT - Autostatics source static (real)
    The source static that was computed by an autostatics program. It exists for convenience during parameterization of the process that applies autostatics solutions to the data. AS_STAT replaces PS_STAT and CS_STAT.

75. AR_STAT - Autostatics receiver static (real)
    The receiver static that was computed by an autostatics program. See also AS_STAT. NOTE: AR_STAT replaces PR_STAT and CR_STAT.

Parameter Tables

This chapter describes ProMAX parameter tables.

Topics covered in this chapter:

➲ Overview of Parameter Tables
➲ Structure of ProMAX Tables
➲ Table Rules
➲ Table Interpolation
➲ X Values in Tables
➲ Table Extrapolation
➲ Table Subroutine Categories
➲ Examples of Table Routines

Overview of Parameter Tables

ProMAX parameter tables are useful for storing and interpolating parameters, such as velocity functions, time windows, mutes, etc. The general philosophy behind ProMAX tables is that parameters, such as first break mute times, are stored on disk and retrieved into memory when they are needed. Interpolation of the parameters can then be done as needed by the ProMAX parameter interpolation routines. For example, suppose a ProMAX user picks mute times on a set of shot records. Those mute times will be stored in a ProMAX table on disk. When the user wants to apply those mutes, the mute times are read into memory, and mute times for traces that were not specifically picked are interpolated by the ProMAX interpolation routines. The application of the routines is, of course, under the control of the programmer.

Structure of ProMAX Tables

A ProMAX table is a framework for storing information at a location within a 3-dimensional volume. The following figure shows a 3-dimensional volume with axes X1, X2, and Y. A black dot marked P is a location at which information can be stored.

A three dimensional volume with point P where data can be stored

A consistent terminology is used to describe locations within ProMAX tables. As in the previous figure, a location within the table is described in terms of the X1, X2, and Y axes. Data stored at a point within the table is referred to as a Z value. Therefore, a ProMAX table is a description of Z(X1, X2, Y). In fact, any number of Z values can be stored at an X1, X2, Y location, so this description might more accurately be written as Z[ ](X1, X2, Y), where the "[ ]" indicates that a vector of Z values can be stored.

An example use of a ProMAX table would be the storage of NMO functions for a 3D seismic survey. Suppose that NMO functions had been picked for several CDP bin locations in the survey, as in the following figure. These functions could be stored in a ProMAX table using the X1, X2 coordinates to describe the spatial location of the CDP bins, the Y axis to describe the times at which velocities had been picked, and the Z values to describe the velocity at each picked time. In this case there would only be one Z value at each (X1, X2, Y) location.

Velocity functions stored at CDP bin locations in a table
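
Purely as a conceptual sketch (this is NOT the ProMAX table data structure, just an illustration of the terminology above), the Z[ ](X1, X2, Y) idea can be pictured in C as follows:

/* Conceptual illustration only: one (X1, X2) location holding ny Y
 * values and, for every Y value, nz Z values; nz is fixed for a
 * table while ny may vary from location to location. The type and
 * member names are hypothetical. */
typedef struct {
    double  x1, x2;   /* spatial location, e.g. a CDP bin position    */
    int     ny;       /* number of Y locations here (e.g. pick times) */
    double *y;        /* y[ny]: the Y values                          */
    double *z;        /* z[ny * nz]: nz Z values per Y location       */
} TableLocation;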

Table Rules

Some important rules about the structure of tables are as follows:

• The number of X1, X2 locations in a table can vary from table to table and can also be changed within a table (points can be added and deleted).
• There can be any number of Y locations at any X1, X2 location.
• The number of Z values stored at X1, X2, Y locations is fixed for a table.

Table Interpolation

The following figure shows a set of scattered points in the X1, X2 plane of a ProMAX table. These points might be surface locations in a 3D seismic survey or points in any other reference frame that is appropriate for a ProMAX table. Assume that a set of parameters have been picked for each of these points; in other words, there are Z values stored at Y locations at each point in the plane. Also in the plane is a point marked P, which is at coordinates (X1p, X2p). No parameters have been picked at P; the goal of interpolation will be to approximate values for Z(X1p, X2p, Y) based on the points that are near to P. The question that is then raised is, "Which of the points that are near to P should be used for the interpolation?"

Points scattered in the X1, X2 plane of a ProMAX table

The answer to this question in ProMAX tables begins by dividing the X1, X2 plane into a set of triangles, as shown in the following figure. The triangles used to divide the X1, X2 plane are called Delaunay triangles, which is a unique set of triangles for a given set of points that comes as close as possible to making all of the triangles equilateral. The Delaunay triangles method tries to avoid creating long skinny triangles because they make interpolation inaccurate.

Delaunay triangles divide the X1, X2 plane of a ProMAX table

Suppose that interpolation is needed for a Z value at coordinates (X1p, X2p, Y). The table interpolation routines first determine which triangle contains P. The following figure shows the point P in the X1, X2 plane along with the triangle that contains P. The vertices of the triangle in the figure are A, B, and C, and have X1, X2 coordinates (X1a, X2a), (X1b, X2b), and (X1c, X2c), respectively. The Y axis extends downward from the X1-X2 plane. The next step in the interpolation of Z(X1p, X2p, Y) is to linearly interpolate values of Z(X1a, X2a, Y), Z(X1b, X2b, Y), and Z(X1c, X2c, Y) along the Y axis at the respective points A, B, and C. Points of known Z(X1, X2, Y) are shown as dark circles in the figure and interpolated points are shown as open circles.

Interpolation of Z(X1, X2, Y) along the Y axis from the triangle vertices A, B, and C

A plane can now be formed in X1, X2, Z space that contains the points (X1a, X2a, Za), (X1b, X2b, Zb), and (X1c, X2c, Zc), as is shown in the following figure. The value of Z(X1p, X2p, Y) is then calculated at the point where the line parallel to Y that intersects P also intersects the plane.

Interpolation of (X1p, X2p, Z) from nearby points
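
The planar step can be illustrated with a small, self-contained sketch. This is plain geometry for illustration only (a barycentric-weight evaluation of the plane through the three vertex values); it is not the ProMAX table code itself.

/* Hedged sketch: given the three Z values already interpolated along
 * Y at triangle vertices A, B, and C, evaluate the plane through
 * them at P = (x1p, x2p). */
#include <stdio.h>

static double plane_interp(double x1a, double x2a, double za,
                           double x1b, double x2b, double zb,
                           double x1c, double x2c, double zc,
                           double x1p, double x2p)
{
    /* Barycentric weights of P with respect to triangle ABC */
    double det = (x2b - x2c) * (x1a - x1c) + (x1c - x1b) * (x2a - x2c);
    double wa  = ((x2b - x2c) * (x1p - x1c) + (x1c - x1b) * (x2p - x2c)) / det;
    double wb  = ((x2c - x2a) * (x1p - x1c) + (x1a - x1c) * (x2p - x2c)) / det;
    double wc  = 1.0 - wa - wb;

    return wa * za + wb * zb + wc * zc;   /* Z(X1p, X2p, Y) */
}

int main(void)
{
    /* P at the centroid of a unit right triangle: expect the mean Z (20) */
    printf("%g\n", plane_interp(0, 0, 10.0,  1, 0, 20.0,  0, 1, 30.0,
                                1.0 / 3.0, 1.0 / 3.0));
    return 0;
}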

X Values in Tables

The primary use of tables is for interpolation of parameters in 2D and 3D seismic data processing. The parameters that are interpolated are frequently associated with some integer ensemble number, such as CDP bin number or Source Index Number (SIN). Both CDP and SIN can be associated with spatial locations on the ground such as the CDP bin location or the source location. When there is a 1-to-1 mapping between ensemble number and spatial coordinates, ProMAX table information can be referred to by either ensemble number or by the X1, X2 coordinates. For example, interpolation of parameters can be done with the subroutine tblInterpXY using just the ensemble number, which is referred to as X in the calling arguments. The interpolation
can also be done with the routine tblInterpZ which uses X1, X2 pairs to describe the location at which interpolation is to be done. Tables can be built and used via either or both methods. In the internals of the code, the tables are referenced by the X1, X2 values, so mapping goes on between X and X1, X2.

Table Extrapolation

The preceding discussion of value interpolation assumed that there were points bracketing the location for which interpolation was being done. In cases where a value is beyond the limit of known table values, the new value must be extrapolated from values within the table.

Extrapolation in the X1, X2 plane: The rules for extrapolation in the X1, X2 plane are simple. If interpolation is requested for an X1, X2 location outside the table, then the point on the table that is nearest the requested point is found. The Y and Z values at the requested point are set equal to the Y and Z values of the nearest point. Note that the nearest point might be (probably will be) an interpolated point. See the following figure.

The nearest point on the table is used to set values at exterior points

Extrapolation along Y: Extrapolation in Y can be done either by setting the Y value equal to the nearest defined Y value or by linearly interpolating based on the slope between the last two
defined points along Y. The method of extrapolation is set by running the routine tblSetExtrap in C or TBL_SET_EXTRAP in FORTRAN.
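
A hedged sketch of the slope-based option (illustration only, not the tblSetExtrap/TBL_SET_EXTRAP implementation):

/* Hedged sketch: linear extrapolation along Y from the slope between
 * the last two defined (Y, Z) points, as described above. */
static double extrap_linear(double y1, double z1,   /* next-to-last defined point */
                            double y2, double z2,   /* last defined point         */
                            double y)               /* requested Y beyond y2      */
{
    double slope = (z2 - z1) / (y2 - y1);
    return z2 + slope * (y - y2);
}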

Table Subroutine Categories

Four general categories of table subroutines are available to the ProMAX programmer:

• table initialization and creation routines to input/output data to/from disk files
• table value editing routines for adding, deleting, and changing table values
• routines to find values at locations in the table and to find global minimum and maximum values
• interpolation routines

A set of routines in each of these categories can be called from the C programming language, with an analogous set for FORTRAN (with one important exception, which is discussed later). Use the command

aman -k tbl

to print a list of all of the ProMAX table routines and a brief description of their purpose. In order to access the on-line manual pages for ProMAX routines, you must have $PROMAX_HOME/port/bin in your path, where $PROMAX_HOME is the path name where the ProMAX directory structure is installed. To see documentation on an individual table routine, such as tblCountX, type

aman tblCountX

This provides an explanation of calling arguments, along with a description of the function of the subroutine. If you look at a list of the table routines you may notice that there is a large number of routine names which begin with the letters tbl or TBL. Many of these routines are used for working with data that can be stored in the 2D tables. Many of the tbl routines actually call other routines that work directly with data
in the underlying 3D data structure. These direct access routines have names that begin with tb3. Direct access to the underlying 3D data structure can be gained by first calling the routine tblFetchTb3 from C, which returns a pointer to the location of the table which can be used with the tb3 routines. There is not a set of FORTRAN routine names that begin with the letters tb3, and there currently is no direct access to the underlying C structures from FORTRAN. Direct access to the underlying C structures is not necessary, however, since FORTRAN routines are provided that access the tables via X1 and X2 arguments.


Examples of Table Routines

One of the best ways to demonstrate the use of the ProMAX table routines is through example code. The following examples are in both FORTRAN and C. You will not miss anything if you read the code examples in one language and not in the other. Both sets of code accomplish the same tasks; a discussion accompanies each code fragment.

FORTRAN Code Examples

This section contains two examples of FORTRAN table routines.

Example 1

The first FORTRAN example comes from the subroutine INIT_AMP_RATIO, which can be found in the example source code amp_ratio.f. This code is actually altered slightly from the original amp_ratio.f to minimize code that is extraneous to this discussion. In this example, a table that holds start and end times for a time gate is read from the database. The routine DB_TBL_GET allocates memory for the table and loads the table values into that memory. The routine returns an argument called ITBL_HANDLE. This is a numerical value that should not be changed, as it is the memory location of the table. This value of ITBL_HANDLE is the way in which the table is referenced. Any subroutines that make use of the table pass ITBL_HANDLE to the routine so that the table can be uniquely identified.

C ..... Get the name of the time gate file from the menu
      CALL EX_CGETPARM( 'GATENAME', 1, CGATENAME, NCHARS )
C ..... Get the gate table from the database
      CALL DB_TBL_GET( 'GAT', CGATENAME, ITBL_HANDLE, IERR )
      IF ( IERR .NE. 0 ) THEN
        CALL REPORT_PROMAX_ERR( IERR )
        CALL EX_ERR_FATAL(
     &    'Cannot open time gate ' //CGATENAME )
      ENDIF


C ......Get info on the gate table CALL TBL_INFO( .TRUE., ITBL_HANDLE, CPRIM_KEY,\ CSCND_KEY, & CZ_DESC, CTABLE_DESC, IDUMMY(1), IDUMMY(2),\ IDUMMY(3), & NTIMES, RDUMMY(1), RDUMMY(2), RDUMMY(3),\ RDUMMY(4), & RDUMMY(5), RDUMMY(6) ) C ......There had better be two time values (upper and\ lower gate) IF ( NTIMES .NE. 2 ) CALL EX_ERR_FATAL( & 'Invalid gate (must have an upper and lower\ gate)' ) C ......We will need the index of the primary and\ secondary key CALL HDR_NAMINFO( CPRIM_KEY, CDESC_DB, LENGTH,\ IFORMAT_PKEY, & IH_PKEY, IERR ) IF ( IERR .NE. 0 ) CALL EX_ERR_FATAL( & 'The primary key of the time gate\ (' //CPRIM_KEY & //') is not in the header' ) CALL HDR_NAMINFO( CSCND_KEY, CDESC_DB, LENGTH,\ IFORMAT_SKEY, & IH_SKEY, IERR ) IF ( IERR .NE. 0 ) CALL EX_ERR_FATAL( & 'The secondary key of the time gate\ (' //CSCND_KEY & //') is not in the header' )

After the table is created with DB_TBL_GET, information about the table is retrieved from the table by use of the program TBL_INFO. You can investigate the other calling arguments for this subroutine by typing aman tbl_info. Most of the arguments returned in this code are dummy variables. The only information about the table that is needed is the primary and secondary keys on which the table is based (the X and Y axes of the table) and the number of Z values in the table, represented here as the argument NTIMES. After checking to be sure that there are only two times, a start and end time, stored in the table, the code gets the trace header index and format of the primary and secondary keys by use of the header routine called HDR_NAMINFO. This piece of code came from the initialization subroutine in amp_ratio. The table is read into memory only one time. It will be preserved there until it is intentionally closed by the

programmer, or until the routine fails through EX_ERR_FATAL or some other means of stopping execution.

Next we see where the table that is created is used in the subroutine EXEC_AMP_RATIO, which can also be found in the example code file amp_ratio.f. The subroutine EXEC_AMP_RATIO is given one data trace at a time to process, and the time gate is interpolated for that trace. The trace for which the time gate is interpolated is referred to as the current trace in the following discussion. The subroutine EX_GET_REALKEY is used to get a floating point representation of the primary key value (the table's X axis value) and of the secondary key value (the table's Y axis value). This is because the table interpolation routines only work with floating point values.

      REAL PKEYVAL, SKEYVAL, TGATE(2)
      INTEGER IERR
C ......... Interpolate the gate times from the table
      CALL EX_GET_REALKEY( RTHDR(IH_PKEY), IFORMAT_PKEY, PKEYVAL )
      CALL EX_GET_REALKEY( RTHDR(IH_SKEY), IFORMAT_SKEY, SKEYVAL )
      CALL TBL_INTERP_XY( ITBL_HANDLE, PKEYVAL, SKEYVAL, TGATE, NOINTERP )

The routine TBL_INTERP_XY is called next; it interpolates the time gate values from the table. Note that the calling arguments include ITBL_HANDLE, which was passed from INIT_AMP_RATIO through the common block in amp_ratio.inc. The input arguments PKEYVAL and SKEYVAL are the X and Y locations in the table at which interpolation of the time gate values will be done. The routine TBL_INTERP_XY returns the start and end time of the time gate TGATE. It also returns the value of NOINTERP, which is used in control of extrapolation. We could also do the interpolation by direct use of the X1, X2 coordinates. This is only desirable if X1 and X2 are readily available through trace headers, as occurs in the following code: C ......... Get the X1,X2 coordinates from the headers\ and interpolate X1 = RTHDR( IH_X1 ) X2 = RTHDR( IH_X2 ) CALL TBL_INTERP_Z( ITBL_HANDLE, X1, X2, 2, 1,\ SKEYVAL, SKEYVAL, & TIMES, NOINTERP )


Example 2 In order to create a new ProMAX table, the user must have first created a table name while working in the user interface. The program that will fill the table with data must get the name of the table into which the table data will be written. The name of the table is acquired with the routine EX_CGETPARM in the INIT_ subroutine, as is shown in the previous example. Note that the name of the table that is acquired from a call to EX_CGETPARM is the 8-character label (or hash name) which is generated from the table description input by the user. In order write to the table, you must find the descriptive name by use of the routine TBL_DESC_FROM_DB. Memory must then be allocated for the table, which is done using TBL_ALLOCATE. The following code shows a working ProMAX module which writes a new velocity table to the database. C----------------------------------------------------------------------------C C Description: C Standard initialization routine C C Output arguments: C LEN_SAVE - number of 4-byte words to save for\ re-entrancy C ITOOLTYPE - processing tool type C C----------------------------------------------------------------------------SUBROUTINE INIT_DUMMYA( LEN_SAV, ITOOLTYPE ) #include "dummya.inc" INTEGER LEN_SAV, ITOOLTYPE INTEGER NCHARS, IERR C ..... declare variable for user-given descriptive table name CHARACTER CTABLE_DESC*128 C ..... declare variable for lable or name created from\ CTABLE_DESC CHARACTER CHASH_NAME*8 C ..... Call for the input parameter by name. Note the\ padding in the C ..... character constant. It is the programmers\ responsibility to C ..... provide the correct type of return argument.


CALL EX_CGETPARM( 'VEL_FILE', 1, CHASH_NAME,\ NCHARS ) C ..... Get the table description from the hash name.\ The description is C ..... needed both when the table is allocated and\ written to the data base CALL TBL_DESC_FROM_DB( 'VEL', CHASH_NAME,\ CTABLE_DESC, IERR ); IF( IERR .NE. 0 ) THEN CALL EX_ERR_FATAL('Table description not\ found.') END IF C ..... Allocate space for a new table CALL TBL_ALLOCATE( ID_TABLE, 1, 'CDP', 'VEL',\ 'TIME', & CTABLE_DESC, IERR ) C ..... Set the number of words that need to be saved\ for re-entrancy. C ..... Note that LENSAVED declared within the include file, hopefully C ..... to avoid oversights LEN_SAV = LENSAVED C ..... Set the tool type to simple (one trace in, one\ trace out) ITOOLTYPE = ISIMPLEpz RETURN END

C----------------------------------------------------------------------------C C Description: C Standard execution routine C C Input/output arguments: C TRACE - array of trace samples C ITHDR - trace header (as integer) C RTHDR - trace header (as floating point) C C----------------------------------------------------------------------------SUBROUTINE EXEC_DUMMYA( TRACE, ITHDR, RTHDR ) #include "dummya.inc" REAL TRACE(NUMSMPz), RTHDR(NTHz) REAL XVALUE, YVALUE, ZVALUE INTEGER ITHDR(NTHz), IERR

IF ( CLEANUPz ) THEN CALL TBL_TO_DATABASE( ID_TABLE, 'VEL', IERR )


C ......... We don't want control to pass into the main\ body RETURN ENDIF C ..... Add values to the table XVALUE = 1.0; YVALUE = 2.0; ZVALUE = 3.0; CALL TBL_ADD_XY( ID_TABLE, XVALUE, YVALUE, \ ZVALUE ) RETURN END C----------------------------------------------------------------------------C Include file for DUMMYA C----------------------------------------------------------------------------IMPLICIT NONE #include "global.inc" COMMON /SAVED_PARMS/ SAVE1z, ID_TABLE INTEGER ID_TABLE, LENSAVED C ..... Specify the number of variables to save. DATA LENSAVED /2/ C ..... ID_TABLE memory handle for the velocity table

C Code Examples

This section contains two examples of C table routines.

Example 1

The first C example comes from the subroutine init_amp_ratio_, which can be found in the example source code ampRatio.c. This code is actually altered slightly from the original ampRatio.c to minimize code that is extraneous to this discussion. In this example, a table that holds start and end times for a time gate is read from the database. The routine tblFromDatabase allocates memory for the table and loads the table values into that memory. The routine returns a (void*) pointer to the table location. The returned value is the way in which the table is referenced. Any subroutines that make use of the table require


this pointer value as an argument to allow the table to be uniquely identified. char *tblName, *xKeyName, *yKeyName; void* tblPointer; int xIndex, yIndex; /* Get the name of the time gate file from the menu */ exParGetString( "TBLNAME", tblName ); /* Get the gate from the database */ tblPointer = tblFromDatabase( "MUT", tblName ); if( tblPointer == NULL ){ exErrFatal("Cannot open time gate"); } /* Get the name of X and Y axes of the table and the\ header indicies */ xKeyName = tblDescX( tblPointer ); yKeyName = tblDescY( tblPointer ); if( xKeyName == NULL || yKeyName == NULL ){ exErrFatal("Table X or Y header names do not exist\ in the dataset."); } xIndex = hdrIndex( xKeyName); yIndex = hdrIndex( yKeyName); /* Get the number of Z values (times), it should be 2 */ if( tblCountZ( tblPointer ) != 2 ){ exErrFatal("Time gate must have a start and end\ time."); }

After the table is read into memory, the names of the X and Y axes are retrieved using tblDescX and tblDescY. The header array indices for the X and Y keys are also retrieved using hdrIndex. The number of Z values (times) is checked using tblCountZ to be sure that it is exactly two (a start and end time); otherwise this would be an invalid table for this operation. This piece of code came from the initialization subroutine in ampRatio.c. The table is read into memory only one time. It will be preserved there until it is intentionally closed by the programmer, or until the routine fails through exErrFatal or some other means of stopping execution. Next we see where the table that is created above is used in the subroutine exec_amp_ratio, which can also be found in the example code file ampRatio.c. The subroutine exec_amp_ratio is given one data trace at a time to process, and the time gate is


interpolated for that trace, which is referred to as the current trace in the following discussion. In the following code segment, floating point representations of the primary and secondary key values are found (the table interpolation routines only work with floating point values for X and Y coordinates) and the time gate is interpolated. The values of xKeyName, etc., from init_amp_ratio were passed to exec_amp_ratio through the external parms structure. float pkeyVal, skeyVal, tgate[2]; int iErr; /* get the primary and secondary key values */ pkeyVal = rthdr[xIndex]; if( hdrFormat( xKeyName ) == HDRINT ) pkeyVal =\ (float)ithdr[xIndex]; skeyVal = rthdr[yIndex]; if( hdrFormat( yKeyName ) == HDRINT ) skeyVal =\ (float)ithdr[yIndex]; /* interpolate the time gates */ iErr = tblInterpXY( tblPointer, pkeyVal, skeyVal,\ tgate ); if( iErr != 0 ){ exErrFatal("Error interpolating time gate."); }

The routine tblInterpXY is called, which interpolates the time gate values from the table. tblInterpXY also returns an error code to iErr, which must be 0; otherwise the interpolation could not be done for some reason. We could also do the interpolation by direct use of the X1, X2 coordinates. This is only desirable if X1 and X2 are readily available through trace headers, as occurs in the following code:

/* Get the X1,X2 coordinates from the headers and interpolate */
x1 = rthdr[ indexX1 ];
x2 = rthdr[ indexX2 ];
iErr = tblInterpZ( tblPointer, x1, x2, 1, 1, 1, &skeyVal, tgate );
if( iErr != 0 ){
    exErrFatal("Error interpolating table value.");
}


Example 2 In order to create a new ProMAX table, the user must have first created a table name while working in the user interface. The program that will fill the table with data must get the name of the table into which the data will be written. The name of the table is acquired with the routine exParGetString in the init_ subroutine, as shown in the previous example. The name of the table that is acquired from a call to exParGetString is the 8character label (or hash name) which is generated from the table description input by the user. In order write to the table, you must find the descriptive name by use of the routine tblDescFromDatabase. Memory must then be allocated for the table, which is done using tblAllocate. The following code shows a working ProMAX module which writes a new velocity table to the database. /* This example shows how to create a new table in the database. */ /* The example if for adding a new RMS velocity table */ /* include ProMAX prototypes and globals */ #include "cpromax.h" #include "cglobal.h" /* define saved parameters */ BEGINPARMS void *tblPntr; ENDPARMS(parms) /***-----------------------------------------------------------------Description: Standard initialization routine output arguments: len_save - number of 4-byte words to save for\ re-entrancy itooltype - processing tool type --------------------------------------------------------------------***/ void init_dummya_(int *len_sav, int *itooltype) { /* local variabes */ char *tblDesc; /* user-given descriptive table name */


char *tblHashName; /* 8-character name created from\ tblDesc */ /* get the table hash name from a menu, this is an\ 8-character */ /* string created from the original user-given table\ name */ exParGetString("VEL_FILE", &tblHashName ); /* get the table description from the hash name. The\ description is */ /* needed when the table is written to the data base */ tblDesc = tblDescFromDatabase( "VEL", tblHashName ); if( tblDesc == NULL ){ exErrFatal("Table description not found."); } /* allocate space for a new table, one z value per x1,x2,y location */ parms->tblPntr = tblAllocate( 1, "CDP", "TIME",\ "VEL", tblDesc ); if( parms->tblPntr == NULL ){ exErrFatal("couldn’t allocate space for a new\ table."); } /* Set the number of words that need to be saved for re-entrancy. */ *len_sav = NPARMS (parms); /* set the tool type */ *itooltype = ISIMPLE;

}

/**********************************************************************
 *
 * Description:
 *   Standard execution routine
 *
 * Input/output arguments:
 *   trace - array of trace samples
 *   ithdr - trace header (as integer)
 *   rthdr - trace header (as floating point)
 *
 **********************************************************************/

void exec_dummya_(float *trace, int *ithdr, float *rthdr) { float xValue, yValue, zValue; int iErr; if( globalRuntime->cleanup ){


  /* The last trace has been processed; write the table to the database */
  tblToDatabase( parms->tblPntr, "VEL" );
  return;
  }

  /* add values to the table */
  xValue = 1.0;
  yValue = 2.0;
  zValue = 3.0;
  tblAddXY( parms->tblPntr, xValue, yValue, &zValue );
}


Memory Management

This chapter describes memory-related routines within the ProMAX system.

Topics covered in this chapter:

➲ Overview of Memory Management
➲ C Memory Management
➲ Multi-dimensional Arrays
➲ Multi-Dimensional Routine Names
➲ FORTRAN Memory Management
➲ Big Vector Routines

Overview of Memory Management

There are three groups of memory-related routines within the ProMAX system. The first group of routines is called from the C programming language and allocates multidimensional arrays in single calls. The second group of routines is for use in pseudo-dynamic memory allocation in FORTRAN. The third group of routines is used for manipulating arrays that are larger than the available memory; in this case disk space is used as slow memory for data storage. All three groups of routines are discussed separately in the following sections.

The environment variable $PROMAX_HOME is used in this document. This variable is equal to the path name of the directory at which the ProMAX directory tree is installed on your system. The default installation directory is /advance.


C Memory Management

The C programming language provides a number of very flexible capabilities for handling different kinds of data. C structures, for example, allow associated data to be placed in a data object that can be named and passed from one routine to another. Because of the flexibility of C, there are several general types of memory management routines within ProMAX which handle different collections of data. The types of data collections which are handled include:

• multi-dimensional arrays
• deques
• heaps
• queues
• stacks

These data types and the memory management routines that handle them are discussed in the following sections.

Multi-dimensional Arrays

The array memory management functions are intended to simplify manipulation of multi-dimensional arrays in scientific programming in C. These functions are useful only because true multi-dimensional arrays in C cannot have variable dimensions (as in FORTRAN). For example, the following function IS NOT valid in C:

void badFunc(a,n1,n2)
float a[n2][n1];
{
    a[n2-1][n1-1] = 1.0;
}

However, the following function IS valid in C:

void goodFunc(a,n1,n2)
float **a;
{
    a[n2-1][n1-1] = 1.0;
}

Therefore, the memory functions do not allocate true multi-dimensional arrays, as described in the C specification.


Instead, they allocate and initialize pointers (and pointers to pointers) so that, for example, a[i2][i1]

behaves like a 2D array. The array dimensions are numbered, which makes it easy to add functions for arrays of higher dimensions. In particular, the 1st dimension of length n1 is always the fastest dimension, the 2nd dimension of length n2 is the next fastest dimension, and so on. Note that the 1st (fastest) dimension n1 is the first argument to the memory functions, but that the 1st dimension is the last subscript in a[i2][i1]. The allocation of pointers to pointers implies that more storage is required than is necessary to hold a true multi-dimensional array. The fraction of the total storage allocated that is used to hold pointers is approximately 1/(n1+1); for n1 = 1000, for example, the pointer overhead is roughly 0.1 percent. This extra storage is unlikely to represent a significant waste for large n1. The memory management functions are significantly different from similar functions described by Press et al. in Numerical Recipes in C (1988). In particular, the ProMAX functions:

• allocate arrays of arbitrary size elements
• allocate contiguous storage for arrays
• abort if allocation fails (unlike malloc)
• do not provide arbitrary lower and upper bounds for arrays

Contiguous storage enables an allocated multi-dimensional array to be passed to a C function that expects a one-dimensional array. For example, to allocate and zero an n1 by n2 two-dimensional array of floats, one could use

a = memAlloc2(n1,n2,sizeof(float));
zeroFloatArray(n1*n2,a[0]);

where zeroFloatArray is a function defined as

void zeroFloatArray(int n, float *a)
{
    int i;
    for (i=0; i<n; i++) a[i] = 0.0;
}
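Because memAlloc2 allocates contiguous storage and the 1st (fastest) dimension n1 is the last subscript, the 2D view a[i2][i1] and the flat view a[0][i2*n1 + i1] refer to the same element. The check below is a minimal sketch of that equivalence; the extern prototype for memAlloc2 is assumed for illustration (the real declaration lives in the ProMAX memory-management headers), and de-allocation is omitted because it is not covered in this excerpt.

#include <assert.h>
#include <stddef.h>

/* Assumed prototype, for illustration only; see the ProMAX headers. */
extern void *memAlloc2(int n1, int n2, size_t size);

void contiguityCheck(int n1, int n2)
{
    float **a = (float **) memAlloc2(n1, n2, sizeof(float));
    int i1, i2;

    /* Fill through the 2D view; the fastest dimension is the last subscript. */
    for (i2 = 0; i2 < n2; i2++)
        for (i1 = 0; i1 < n1; i1++)
            a[i2][i1] = (float)(i2 * n1 + i1);

    /* The flat view a[0] walks the same contiguous block. */
    for (i2 = 0; i2 < n2; i2++)
        for (i1 = 0; i1 < n1; i1++)
            assert(a[0][i2 * n1 + i1] == (float)(i2 * n1 + i1));
}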



Lisp keywords are symbols beginning with colons, such as :test. Parameter attributes are symbols ending in colons, such as type:. Symbols beginning or ending with colons are, in fact, both Lisp keywords and are 'eq'; for example, (eq :test test:) => t. Keywords eval to themselves and will print out in the manner they were first encountered, that is, with the colon at the start or end.


Lisp Primitives

Lisp Primitive    Common Lisp    Reference
--------------    -----------    ---------
cdr               Yes            1
car               Yes            1
length            Yes            1
first             Yes            1
second            Yes            1
third             Yes            1
fourth            Yes            1
fifth             Yes            2
sixth             Yes            2
seventh           Yes            2
nth               Yes            2
nthcdr            Yes            1
last              Yes            1
list              Yes            1
append            Yes            1
cons              Yes            1
nconc             Yes            1
push              Yes            1
rplaca            Yes            2
pushend           No             destructive: (setq a '(1 2 3)) (pushend 4 a) a => (1 2 3 4)
remove            Almost         1, :test fixed as equal (see remq)
remq              No             like remove with :test fixed as eq
delete            Almost         1, :test fixed as equal (see delq)
delq              No             like delete with :test fixed as eq
copy              No             same as copy-list
copy-list         Yes            1
eq                Yes            1
equal             Yes            1
=                 Yes            1
>                 Yes            2
>=                Yes            2



Appendix: Simple Tool Examples

This appendix provides an example of a simple tool, a tool that processes one data trace at a time. It demonstrates the creation and use of trace headers, ordered parameter files, and database tables. The program is a crude first break picker. It allows the option of loading some of the derived parameters into the trace headers and the database, and limiting the time window in which the first break picker will search. The menu and FORTRAN examples are presented first, then the C source code is presented. The menu file can be used with either the C or FORTRAN code.

Topics covered in this appendix:

➲ amp_ratio.menu
➲ amp_ratio.inc
➲ amp_ratio.f
➲ ampRatio.c

amp_ratio.menu '( name: AMP_RATIO label: "Amplitude Ratio" value_tab: 35 parameter: GATELEN text: "Amplitude gate length" type: typein: type_desc: ( real: 7 1.0e-5 nil ) value: 100.0 mouse_text: "Enter the sliding gate length used to compute amplitude ratios." parameter: time_gate_opt text: "Confine the maximum?" type: boolean: value: nil mouse_text: "Select 'Yes' if you wish to use a gate to constrain the search for the maximum in the amp ratio." parameter: GATENAME text: " Select gate parameter file" type: function: type_desc: ((parm_list "GAT") parms) value: "INVALID" selected_item: "** No Parameter File Selected **" mouse_text: "Use Mouse Button 1 to select a gate parameter file from the parameter file menu." parameter: LOAD_HDR text: "Load the results into the header? type: boolean: value: t mouse_text: "Select 'Yes' if you wish to load the maximum of the amp ratio and its time into the trace header." parameter: LOAD_DB text: "Load the results into the database? type: boolean: value: t mouse_text: "Select 'Yes' if you wish to load the maximum of the amp ratio and its time into the database." exec_data: ("AMP_RATIO" ("GENERAL" ("GATELEN" implicit: (value 'GATELEN)) ("GATENAME" implicit: ( if (value 'time_gate_opt) (value 'GATENAME) "NO__GATE" )) ("LOAD_HDR" implicit: ( if (value 'LOAD_HDR) 1 0 ) ) ("LOAD_DB" implicit: ( if (value 'LOAD_DB) 1 0 ) ) ) ) rules: ( (rule1 ( value 'time_gate_opt )

) (do_show 'GATENAME)

(do_not_show 'GATENAME))

)


amp_ratio.inc C-----------------------------------------------------------------------------C Include file for AMP_RATIO C-----------------------------------------------------------------------------IMPLICIT NONE #include "global.inc"

& & &

COMMON /SAVED_PARMS/ SAVE1z, NGATE, IX_SCRATCH, IH_RATIO_MAX, IH_RATIO_TIME, LOAD_HDR, LOAD_DB, ID_MAX, ID_TIME, IH_TRACENO, IKEY_TRC, USE_GATE, ITBL_HANDLE, IH_PKEY, IH_SKEY, IFORMAT_PKEY, IFORMAT_SKEY

& & &

INTEGER NGATE, IX_SCRATCH, LENSAVED, IH_RATIO_MAX, IH_RATIO_TIME, LOAD_HDR, LOAD_DB, ID_MAX, ID_TIME, IH_TRACENO, IKEY_TRC, ITBL_HANDLE, IH_PKEY, IH_SKEY, IFORMAT_PKEY, IFORMAT_SKEY LOGICAL USE_GATE

C C C C

..... ..... ..... .....

Specify the number of variables to save. This is set here (rather than in the initialization routine) so that programmers don't forget to change it when they are changing the contents of the common block SAVED_PARMS. DATA LENSAVED /17/

C C C C C C C C C C C C C C C C

..... ..... ..... ..... ..... ..... ..... ..... ..... ..... ..... ..... ..... ..... ..... .....

NGATE - number of samples in gate IX_SCRATCH - index of scratch buffer IH_RATIO_MAX - index in header of amp ratio maximum IH_RATIO_TIME - index in header of amp ratio time LOAD_HDR - flag for option to load the results into the trace header LOAD_DB - flag for option to load the results into the database ID_MAX - token for buffered database I/O for amp ratio max ID_TIME - token for buffered database I/O for amp ratio time IH_TRACENO - index in header of trace number IKEY_TRC - key to the trace ordered database USE_GATE - logical flag to use a gate to confine the amp ratio max ITBL_HANDLE - handle for the time gate table IH_PKEY - header index of the primary key of the time gate table IH_SKEY - header index of the secondary key of the time gate table IFORMAT_PKEY - format of the primary key of the time table IFORMAT_SKEY - format of the secondary key of the time table


amp_ratio.f C-----------------------------------------------------------------------------C C Description: C Standard initialization routine C C Output arguments: C LEN_SAVE - number of 4-byte words to save for re-entrancy C ITOOLTYPE - processing tool type C C-----------------------------------------------------------------------------SUBROUTINE INIT_AMP_RATIO( LEN_SAV, ITOOLTYPE ) C ..... The include file 'amp_ratio.inc' contains a nested include C for the global parameters #include "amp_ratio.inc" C ..... Include file with error definitions #include "hdr_err.inc" #include "db_err.inc" INTEGER LEN_SAV, ITOOLTYPE, IERR, LENGTH, IFORMAT, NCHARS, & IDUMMY(3), NTIMES REAL GATELEN, RDUMMY(6) CHARACTER CDESC_HDR*32, CDESC_DB*80, CGATENAME*8, CPRIM_KEY*8, & CSCND_KEY*8, CTABLE_DESC*128, CZ_DESC*8 C ..... Set a default that is illegal (in case there is a menu problem) GATELEN = -1.0 C ..... Call for the input parameter by name. Note the padding in the C ..... character constant. It is the programmers responsibility to C ..... provide the correct type of return argument. CALL EX_GETPARM( 'GATELEN ', 1, GATELEN ) C ..... Convert the gate length to samples NGATE = NINT( GATELEN / SAMPRATz ) C ..... Check for reasonable input IF ( NGATE .LT. 1 .OR. NGATE*2+1 .GT. NUMSMPz ) THEN CALL EX_ERR_FATAL( 'Gate length is illegal' ) ENDIF C ..... See if the user wants to confine the maximum to fall within a gate USE_GATE = .FALSE. CGATENAME = ' ' CALL EX_CGETPARM( 'GATENAME', 1, CGATENAME, NCHARS ) IF ( CGATENAME .NE. ' ' & .AND. CGATENAME .NE. 'NO__GATE' ) THEN C ......... Something was specified USE_GATE = .TRUE. C ......... Get the gate from the database CALL DB_TBL_GET( 'GAT', CGATENAME, ITBL_HANDLE, IERR ) IF ( IERR .NE. 0 ) THEN CALL REPORT_PROMAX_ERR( IERR ) CALL EX_ERR_FATAL(

&

'Cannot open time gate ' //CGATENAME ) ENDIF

C ......... Get info on the gate table CALL TBL_INFO( .TRUE., ITBL_HANDLE, CPRIM_KEY, CSCND_KEY, & CZ_DESC, CTABLE_DESC, IDUMMY(1), IDUMMY(2), IDUMMY(3), & NTIMES, RDUMMY(1), RDUMMY(2), RDUMMY(3), RDUMMY(4), & RDUMMY(5), RDUMMY(6) ) C ......... There had better be two time values (upper and lower gate) IF ( NTIMES .NE. 2 ) CALL EX_ERR_FATAL( & 'Invalid gate (must have an upper and lower gate)' ) C ......... We will need the index of the primary and secondary key CALL HDR_NAMINFO( CPRIM_KEY, CDESC_DB, LENGTH, IFORMAT_PKEY, & IH_PKEY, IERR ) IF ( IERR .NE. 0 ) CALL EX_ERR_FATAL( & 'The primary key of the time gate (' //CPRIM_KEY & //') is not in the header' )

& & &

CALL HDR_NAMINFO( CSCND_KEY, CDESC_DB, LENGTH, IFORMAT_SKEY, IH_SKEY, IERR ) IF ( IERR .NE. 0 ) CALL EX_ERR_FATAL( 'The secondary key of the time gate (' //CSCND_KEY //') is not in the header' ) ENDIF

C ..... See if the user wants to load the results into the trace header LOAD_HDR = 0 CALL EX_GETPARM( 'LOAD_HDR', 1, LOAD_HDR ) IF ( LOAD_HDR .EQ. 1 ) THEN C ......... Add new trace header entries CDESC_HDR = 'Maximum value of amp ratio' CALL HDR_ADD( 'RATIOMAX', CDESC_HDR, 1, IREAL4pz, & IH_RATIO_MAX, IERR ) IF ( IERR .NE. 0 ) THEN IF ( IERR .EQ. IERR_HDR_EXSTpz ) THEN C ................. That's OK, but somewhat unexpected CALL EX_ERR_WARN( & 'RATIOMAX already exists in header' ) ELSE C ................. This will virtually never happen, but just in case CALL EX_ERR_FATAL( 'Error adding header entry' ) ENDIF ENDIF CDESC_HDR = 'Time of amp ratio maximum' CALL HDR_ADD( 'RATIOTIM', CDESC_HDR, 1, IREAL4pz, & IH_RATIO_TIME, IERR ) IF ( IERR .NE. 0 ) THEN IF ( IERR .EQ. IERR_HDR_EXSTpz ) THEN C ................. That's OK, but somewhat unexpected CALL EX_ERR_WARN( & 'RATIOTIM already exists in header' ) ELSE C ................. This will virtually never happen, but just in case


CALL EX_ERR_FATAL( 'Error adding header entry' ) ENDIF ENDIF ENDIF C ..... See if the user wants to load the results into the database LOAD_DB = 0 CALL EX_GETPARM( 'LOAD_DB ', 1, LOAD_DB ) IF ( LOAD_DB .EQ. 1 ) THEN C ......... Open the database to store the amp ratio information C against trace

&

IF ( ITRNO_VALIDz .NE. 1 ) THEN CALL EX_ERR_FATAL( 'Cannot load data into the TRC order' //' without valid trace numbers (geom assigned)' ) ENDIF

C ......... We will need the index of the trace number CALL HDR_NAMINFO( 'TRACENO ', CDESC_HDR, LENGTH, IFORMAT, & IH_TRACENO, IERR ) IF ( IERR .NE. 0 ) CALL EX_ERR_FATAL( & 'TRACENO not found in header' ) C ......... Lock the TRC order since we will be writing to it CALL DB_ORDLOCK( 'TRC', IKEY_TRC, IERR ) IF ( IERR .NE. 0 ) THEN CALL REPORT_PROMAX_ERR( IERR ) CALL EX_ERR_FATAL( 'Error locking TRC database' ) ENDIF C ......... Create the new entries in the database CDESC_DB = 'Maximum value of amp ratio' CALL DB_PARMCRE( IKEY_TRC, ' ', 'F_B_PICK', 'RATIOMAX', & CDESC_DB, 1, IREAL4pz, RNULLpz, IERR ) IF ( IERR .NE. 0 .AND. IERR .NE. IERR_DB_PEXSpz ) THEN CALL REPORT_PROMAX_ERR( IERR ) CALL EX_ERR_FATAL( 'Error creating database entry' ) ENDIF

&

CDESC_DB = 'Time of amp ratio maximum' CALL DB_PARMCRE( IKEY_TRC, ' ', 'F_B_PICK', 'RATIOTIM', CDESC_DB, 1, IREAL4pz, RNULLpz, IERR ) IF ( IERR .NE. 0 .AND. IERR .NE. IERR_DB_PEXSpz ) THEN CALL REPORT_PROMAX_ERR( IERR ) CALL EX_ERR_FATAL( 'Error creating database entry' ) ENDIF

C ......... Initialize the token for buffered database I/O ID_MAX = 0 ID_TIME = 0 ENDIF C ..... Reserve a scratch buffer that we will need in exec phase CALL MEM_RESBUFF( NUMSMPz, IX_SCRATCH, IERR ) C ..... Set the number of words that need to be saved for re-entrancy.


C ..... Note that LENSAVED declared within the include file, hopefully C ..... to avoid oversights LEN_SAV = LENSAVED C ..... Set the tool type to simple (one trace in, one trace out) ITOOLTYPE = ISIMPLEpz RETURN END

C-----------------------------------------------------------------------------C C Description: C Standard execution routine C C Input/output arguments: C TRACE - array of trace samples C ITHDR - trace header (as integer) C RTHDR - trace header (as floating point) C C-----------------------------------------------------------------------------SUBROUTINE EXEC_AMP_RATIO( TRACE, ITHDR, RTHDR ) #include "amp_ratio.inc" C ..... Include the file that allows use of the "space array" and memory C ..... management routines. #include "mem.inc" INTEGER ITHDR(NTHz), LOC_TRC, IERR, ISAVE, IMIN_SAMP, IMAX_SAMP REAL TRACE(NUMSMPz), RTHDR(NTHz), RATIO_MAX, RATIO_TIME, & PKEYVAL, SKEYVAL, TGATE(2)

C C C

C

IF ( CLEANUPz ) THEN IF ( LOAD_DB .EQ. 1 ) THEN ............. Flush the buffers for buffered database I/O ............. Note that errors only give rise to warnings in cleanup phase. ............. Also note that the "location" is now 0. CALL DB_BUFFRDPUT( ID_MAX, IKEY_TRC, 'F_B_PICK', & 'RATIOMAX', 0, 0.0, .TRUE., IERR ) IF ( IERR .NE. 0 ) THEN CALL REPORT_PROMAX_ERR( IERR ) CALL EX_ERR_WARN( & 'Error loading data into database' ) ENDIF CALL DB_BUFFRDPUT( ID_TIME, IKEY_TRC, 'F_B_PICK', & 'RATIOTIM', 0, 0.0, .TRUE., IERR ) IF ( IERR .NE. 0 ) THEN CALL REPORT_PROMAX_ERR( IERR ) CALL EX_ERR_WARN( & 'Error loading data into database' ) ENDIF ENDIF ......... We don't want control to pass into the main body RETURN ENDIF

IF ( USE_GATE ) THEN C ......... Interpolate the gate times from the table


&

CALL EX_GET_REALKEY( RTHDR(IH_PKEY), IFORMAT_PKEY, PKEYVAL ) CALL EX_GET_REALKEY( RTHDR(IH_SKEY), IFORMAT_SKEY, SKEYVAL ) CALL INT_GET( ITBL_HANDLE, 0, 0, PKEYVAL, SKEYVAL, TGATE, IERR ) IF ( IERR .NE. 0 ) CALL EX_ERR_FATAL( 'Error interpolating time gate' )

&

ELSE C ......... Use the entire trace TGATE(1) = 0.0 TGATE(2) = FLOAT(NUMSMPz-1) * SAMPRATz ENDIF C ..... Convert the time gate values to samples IMIN_SAMP = NINT( TGATE(1) / SAMPRATz ) + 1 IMAX_SAMP = NINT( TGATE(2) / SAMPRATz ) + 1 C ..... Don't let them go out of bounds IMIN_SAMP = MAX0( IMIN_SAMP, 1 ) IMIN_SAMP = MIN0( IMIN_SAMP, NUMSMPz ) IMAX_SAMP = MAX0( IMAX_SAMP, 1 ) IMAX_SAMP = MIN0( IMAX_SAMP, NUMSMPz ) IF ( IMIN_SAMP .GT. IMAX_SAMP ) THEN C ......... Let's assume that they were meant to be reversed ISAVE = IMIN_SAMP IMIN_SAMP = IMAX_SAMP IMAX_SAMP = ISAVE ENDIF C ..... Pass CALL & &

the buffers off to a routine where the real work is done AMP_RATIO_WORK( TRACE, RSPACEz(IX_SCRATCH), NUMSMPz, NGATE, IMIN_SAMP, IMAX_SAMP, SAMPRATz, RATIO_MAX, RATIO_TIME )

IF ( LOAD_HDR .EQ. 1 ) THEN C ......... Load the values into the header RTHDR(IH_RATIO_MAX) = RATIO_MAX RTHDR(IH_RATIO_TIME) = RATIO_TIME ENDIF IF ( LOAD_DB .EQ. 1 ) THEN C ......... Load the values into the database LOC_TRC = ITHDR( IH_TRACENO ) IF ( LOC_TRC .NE. INULLpz ) THEN

&

&

&

&


CALL DB_BUFFRDPUT( ID_MAX, IKEY_TRC, 'F_B_PICK', 'RATIOMAX', LOC_TRC, RATIO_MAX, .FALSE., IERR ) IF ( IERR .NE. 0 ) THEN CALL REPORT_PROMAX_ERR( IERR ) CALL EX_ERR_FATAL( 'Error loading data into database' ) ENDIF CALL DB_BUFFRDPUT( ID_TIME, IKEY_TRC, 'F_B_PICK', 'RATIOTIM', LOC_TRC, RATIO_TIME, .FALSE., IERR ) IF ( IERR .NE. 0 ) THEN CALL REPORT_PROMAX_ERR( IERR ) CALL EX_ERR_FATAL( 'Error loading data into database' )


ENDIF ENDIF ENDIF RETURN END

C-----------------------------------------------------------------------------C C Actual work routine C C------------------------------------------------------------------------------

&

& &

SUBROUTINE AMP_RATIO_WORK( TRACE, SCRATCH, NSAMPS, NGATE, IMIN_SAMP, IMAX_SAMP, SAMPRATE, RATIO_MAX, RATIO_TIME ) IMPLICIT NONE INTEGER NSAMPS, NGATE, I, ISTART, IEND, IND_MAX, IMIN_SAMP, IMAX_SAMP REAL TRACE(NSAMPS), SCRATCH(NSAMPS), SUM_ABOVE, SUM_BELOW, RATIO_MAX, RATIO_TIME, SAMPRATE, RABOVE, RBELOW

C ..... Set the starting and end of the zone of interest ISTART = 1 - NGATE IEND = NSAMPS + NGATE C ..... Sum the first two gates SUM_ABOVE = 0.0 RABOVE = 0.0

110

SUM_BELOW = 0.0 RBELOW = 0.0 DO 110 I=ISTART+NGATE+1,ISTART+NGATE*2 IF ( I .GE. 1 .AND. I .LE. NSAMPS ) THEN SUM_BELOW = SUM_BELOW + ABS( TRACE(I) ) RBELOW = RBELOW + 1.0 ENDIF CONTINUE

C ..... Now move down the trace DO 120 I=ISTART+NGATE,IEND-NGATE C ......... Compute the ratio IF ( SUM_ABOVE .GT. 0.0 .AND. RABOVE .GT. 0.0 & .AND. RBELOW .GT. 0.0 ) THEN SCRATCH(I) = (SUM_BELOW/RBELOW) / (SUM_ABOVE/RABOVE) ELSE SCRATCH(I) = 0.0 ENDIF C ......... Drop a sample from each gate and add the next one IF ( I-NGATE .GE. 1 ) THEN SUM_ABOVE = SUM_ABOVE - ABS( TRACE(I-NGATE) ) & + ABS( TRACE(I) ) ELSE SUM_ABOVE = SUM_ABOVE + ABS( TRACE(I) )


RABOVE = RABOVE + 1.0 ENDIF

&

120

IF ( I+NGATE+1 .LE. NSAMPS ) THEN SUM_BELOW = SUM_BELOW - ABS( TRACE(I+1) ) + ABS( TRACE(I+NGATE+1) ) ELSE IF ( I+1 .LE. NSAMPS ) THEN SUM_BELOW = SUM_BELOW - ABS( TRACE(I+1) ) RBELOW = RBELOW - 1.0 ENDIF ENDIF CONTINUE

C ..... Put the final results in place. Note that we have not handled the C ..... edge problem, we have simply made the function undefined. DO 130 I=1,NSAMPS IF ( I .LE. ISTART+NGATE-1 ) THEN TRACE(I) = 0.0 ELSEIF ( I .GE. IEND-NGATE+1 ) THEN TRACE(I) = 0.0 ELSE TRACE(I) = SCRATCH(I) ENDIF 130 CONTINUE C ..... Find the maximum of the ratio function RATIO_MAX = TRACE(IMIN_SAMP) IND_MAX = IMIN_SAMP DO 140 I=IMIN_SAMP,IMAX_SAMP IF ( TRACE(I) .GT. RATIO_MAX ) THEN IND_MAX = I RATIO_MAX = TRACE(I) ENDIF 140 CONTINUE C ..... Convert the index of the maximum to time RATIO_TIME = FLOAT( IND_MAX - 1 ) * SAMPRATE RETURN END


ampRatio.c

/* include ProMAX prototypes and globals */ #include "cpromax.h" #include "cglobal.h" /* define saved parameters */ BEGINPARMS int ngate, ih_ratio_max, ih_ratio_time, load_hdr, load_db, id_max, id_time, itraceno, use_gate, itabl_handle, ih_pkey, ih_skey, iformat_pkey, iformat_skey; float *scratch; void *gate_tbl; void *db_trc, *dbPtr1,*dbPtr2; ENDPARMS(parms) int n_trc; static char *sccsid = "@(#)

ampRatio.c 50.2 4/8/94";

void amp_ratio_work (float *trace, float *scratch, int nsams, int ngate, int imin_samp, int imax_samp, float samprate, float *ratio_max, float *ratio_time); /***------------------------------------------------------------------Description: Standard initialization routine output arguments: LEN_SAVE - number of 4-byte words to save for re-entrancy ITOOLTYPE - processing tool type ---------------------------------------------------------------------***/ void init_amp_ratio_(int *len_sav, int *itooltype) { extern int n_trc; float gatelen; char *cgatename ; n_trc = 0; /* get the gate length */ gatelen=-1.0; exParGetFloat ("GATELEN",

&gatelen);

/* convert the gate length from time to samples*/ parms->ngate = (int) (gatelen / globalRuntime->samprat+0.5); /* check for reasonable input */ if (parms->ngate < 1 || parms->ngate * 2 + 1 >globalRuntime->numsmp) exErrFatal ("Gate length is illegal");


/* See if the user wants to confine the maximum to fall within a gate*/ parms->use_gate = FALSE; exParGetString ("GATENAME", &cgatename); if (strcmp (cgatename, "") != 0 && strcmp (cgatename, "NO__GATE") != 0) { parms->use_gate = TRUE; /* Get the gate from the database */ parms->gate_tbl = tblFromDatabase ("GAT", cgatename); if (parms->gate_tbl == NULL) exErrFatal ("Cannot open time gate %s !!", cgatename); if (tblCountZ(parms->gate_tbl) != 2) exErrFatal ("invalid gate (Gate must have an upper and lower gate!)"); /* We will need the index of the primary and secondary key

*/

parms->ih_pkey = hdrIndex (tblDescX(parms->gate_tbl) ); if (!parms->ih_pkey) exErrFatal ("The primary key of the time gate (%s) is not in the header!", tblDescX(parms->gate_tbl) ); parms->ih_skey= hdrIndex ( tblDescY(parms->gate_tbl) ); if (!parms->ih_skey) exErrFatal ("The secondary key of the time gate is (%s) is not in the header!", tblDescY(parms->gate_tbl) ); } /* See if the user wants to load the results into the trace header parms->load_hdr = 0; exParGetInt ("LOAD_HDR", &parms->load_hdr); if (parms->load_hdr ) { /* Add new trace header entries

*/

*/

if (hdrExists ("RATIOMAX") ) { if (hdrFormat ("RATIOMAX") == HDRFLOAT) exErrWarn ("RATIOMAX header already exists!"); else exErrFatal ("RATIOMAX header already exists but is of wrong type!"); } else { parms->ih_ratio_max = hdrAdd ("RATIOMAX", "Time of amp ratio maximum", 1, HDRFLOAT); if (parms->ih_ratio_max== 0) { /* This should virtually never happen. */ exErrFatal("Error adding header RATIOMAX"); } } if (hdrExists ("RATIOTIM") ) { if (hdrFormat ("RATIOTIM") == HDRFLOAT) exErrWarn ("RATIOTIM header already exists!"); else exErrFatal ("RATIOTIM header already exists but is of wrong type!"); } else {


parms->ih_ratio_time = hdrAdd ("RATIOTIM", "Time of amp ratio maximum", 1, HDRFLOAT); if (parms->ih_ratio_time== 0) { /* This should virtually never happen. */ exErrFatal("Error adding header RATIOTIM"); } } } /* See if the user wants to load the results into the database*/ parms->load_db = 0; exParGetInt ("LOAD_DB", &parms->load_db); if (parms->load_db){ int ierr; /* Open the database to store the amp ratio information against trace*/ if (globalRuntime->itrno_valid != 1){ fprintf(stderr,"itrno_valid = %d\n",globalRuntime->itrno_valid); exErrFatal ("Cannot load data into the TRC order without valid trace numbers (geom assigned?)"); } else{ fprintf(stdout,"itrno_valid = 1, continuing in init phase\n"); } if (!hdrExists ("TRACENO")) { exErrFatal ("TRACENO not found in header"); } /* Lock the TRC order since we will be writing to it

*/

parms->db_trc = opfLock ("TRC"); if (!opfExists ( "TRC") ) exErrFatal ("Error locking TRC database."); /* Create the new entries in the database */ if (!opfParmExists(parms->db_trc, "F_B_PICK", "RATIOMAX") ) { ierr = opfParmCreate (parms->db_trc , "F_B_PICK", "RATIOMAX", "Maximum value of amp ratio", 1, PARFLOAT); if (ierr !=0 ) exErrFatal ("Error creating database entry"); else{ fprintf(stdout,"created new db entry RATIOMAX\n"); } } if (!opfParmExists(parms->db_trc, "F_B_PICK", "RATIOTIM") ) { ierr = opfParmCreate (parms->db_trc, "F_B_PICK", "RATIOTIM", "Time of amp ration maximum", 1, PARFLOAT); if (ierr !=0 ) exErrFatal ("Error creating database entry"); else{ fprintf(stdout,"created new db entry RATIOTIM\n"); } } /* Initialize the token for buffered database I/O


*/


parms->dbPtr1= opfInitBufPut(parms->db_trc,"F_B_PICK","RATIOMAX"); parms->dbPtr2= opfInitBufPut(parms->db_trc,"F_B_PICK","RATIOTIM"); } /* Reserve a scratch buffer that we will need in exec phase */ parms->scratch = (float *) malloc (globalRuntime->numsmp * sizeof (float) ); /* Set the number of words that need to be saved for re-entrancy.

*/

*len_sav = NPARMS (parms); /* Set the tool type to simple (one trace in, one trace out) */ *itooltype = ISIMPLE; } /********************************************************************* * * Description: * Standard execution routine * * Input/output arguments: * trace - array of trace samples * ithdr - trace header (as integer) * rthdr - trace header (as floating [point) * **********************************************************************/

void exec_amp_ratio_(float *trace, int *ithdr, float *rthdr) { int loc_trc, ierr, isave, imin_samp, imax_samp; float ratio_max, ratio_time, pkeyval, skeyval, tgate[2]; float fltPkey, fltSkey; /* get the gate length */ if (globalRuntime->cleanup) { if (parms->load_db) { /* Flush the buffers for buffered database I/O Note that errors only give rise to warnings in cleanup phase. Also note that the "location" is now 0. */

if (opfCloseBufPut (parms->dbPtr1) !=0) {
    exErrWarn ("Error loading data into database.");
}
if (opfCloseBufPut (parms->dbPtr2) !=0) {
    exErrWarn ("Error loading data into database.");
}
opfClose (parms->db_trc);

}
/* We don't want control to pass into the main body */
return;

} if (parms->use_gate){ /* Interpolate the gate times from the table*/ if( hdrIndexFormat(parms->ih_pkey) == HDRINT ){ fltPkey = (float)ithdr[parms->ih_pkey];


} else{ fltPkey = rthdr[parms->ih_pkey]; } if( hdrIndexFormat(parms->ih_skey) == HDRINT ){ fltSkey = (float)ithdr[parms->ih_skey]; } else{ fltSkey = rthdr[parms->ih_skey]; } if (tblInterpXY (parms->gate_tbl, fltPkey, fltSkey, tgate )){ exErrFatal ("Error interpolating time gate"); } } else { /* use entire trace */ tgate[0]=0.0; tgate[1]= (globalRuntime->numsmp-1)*globalRuntime->samprat; } /* Convert the time gate values to samples */ imin_samp = tgate[0] / globalRuntime->samprat + imax_samp = tgate[1] / globalRuntime->samprat + imin_samp = MAX (imin_samp, 0); imin_samp = MIN (imin_samp, globalRuntime->numsmp imax_samp = MAX (imax_samp, 0); imax_samp = MIN (imax_samp, globalRuntime->numsmp if (imin_samp > imax_samp) { int isave; /* assume that they were meant to be reversed isave= imin_samp; imin_samp= imax_samp; imax_samp=isave; }

0.5; 0.5; - 1); - 1);

*/

/* Pass the buffers off to a routine where the real work is done */ amp_ratio_work(trace, parms->scratch, globalRuntime->numsmp, parms->ngate, imin_samp, imax_samp, globalRuntime->samprat, &ratio_max,&ratio_time); if (parms->load_hdr == 1) { /* Load the values into the header */ rthdr [parms->ih_ratio_max] = ratio_max; rthdr [parms->ih_ratio_time] = ratio_time; } if (parms->load_db) { /* Load the values into the database

*/

loc_trc = ithdr [STDHDR (itraceno)]; fprintf(stdout,"loc_trc = %d\n",loc_trc); if (loc_trc != INULL) { opfBufPutFloat (parms->dbPtr1, loc_trc, ratio_max); opfBufPutFloat (parms->dbPtr2, loc_trc, ratio_time); } }


} /*--------------------------------------------------------------------Actual work routine ----------------------------------------------------------------------*/ void amp_ratio_work (float *trace, float *scratch, int nsamps, int ngate, int imin_samp, int imax_samp, float samprate, float *ratio_max, float *ratio_time) { int i, istart, iend, ind_max; float sum_above, sum_below, rabove, rbelow; /* Set the starting and end of the zone of interest istart = 0 - ngate; iend = nsamps + ngate; /* Sum the first two gates sum_above = 0.0; rabove = 0.0;

*/

*/

sum_below = 0.0; rbelow = 0.0; for (i = istart + ngate; i< istart + ngate * 2; i ++) if (i >=0 && i< nsamps && trace[i]!=0.0) { sum_below = sum_below + fabs (trace[i]); rbelow = rbelow + 1.0; } for (i = istart + ngate; i< iend - ngate ; i ++) { /* Compute the ratio */ if (sum_above > 0.0 && rabove >0.0 && rbelow >0.0) { scratch [i] = (sum_below/ rbelow) / (sum_above / rabove); } else if (trace[i] != 0.0) { scratch [i] = 0.0; } /* Drop a sample from each gate and add the next one */ if (i - ngate >= 0) { sum_above = sum_above - fabs (trace [i - ngate]) + fabs (trace [i]); } else { sum_above = sum_above + fabs (trace[i]); rabove = rabove + 1.0; } if (i + ngate +1< nsamps) { sum_below = sum_below - fabs (trace[i + 1]) + fabs (trace [i + ngate + 1]); } else { if (i +1 < nsamps && trace[i+1]!=0.0) { sum_below = sum_below - fabs (trace[i + 1]);


rbelow = rbelow - 1.0; } }

} /* Put the final results in place. Note that we have not handled the edge problem, we have simply made the function undefined */ for (i = 0; i < nsamps; i++) { trace [i] = scratch[i]; } /* Find the maximum of the ratio function */ *ratio_max = trace [imin_samp ]; ind_max = imin_samp; for (i = imin_samp ; i< imax_samp; i++) { if (trace [i] > *ratio_max) { ind_max = i; *ratio_max = trace [i]; } } /* Convert the index of the maximum to time */ *ratio_time = (float) (ind_max ) * samprate; }


Appendix: Ensemble Tool Examples

This appendix provides examples of two types of ProMAX ensemble tools: AVO and trace interpolation. The AVO routine outputs a single trace for each trace ensemble that is input. The trace interpolation routine creates and outputs a trace between each of the input traces.

Topics covered in this appendix:

➲ AVO Ensemble Tools
➲ avo.menu
➲ avo.inc
➲ avo.f
➲ avoC.c
➲ Trace Interpolation Tools
➲ prestk_interp.menu
➲ prestk_interp.inc
➲ prestk_interp.f
➲ prestk_interp.c

AVO Ensemble Tools

This routine outputs a single trace for each trace ensemble that is input. The trace sample values are the slope or intercept of a least-squares fit line through the trace amplitudes, as a function of offset at a given time. The menu file is presented first, then the FORTRAN and C versions of the code. The menu file works with either the FORTRAN or the C version.

Fortran Note: This example shows things that are often done in ensemble tools. The important points to remember from the INIT_ subroutine are that MAXDTRz (the maximum number of traces in an ensemble that will be leaving the EXEC_ subroutine) is set to 1, the type of data is set to ISTACKEDpz, and the trace number is no longer valid after this process, so ITRNO_VALIDz is set to 0. The important points in the EXEC_ subroutine are that the trace header values (particularly the end-of-ensemble flag) are set, and the trace executive is notified that the number of traces to be returned from EXEC_AVO_DEMO is 1. This is accomplished through setting the calling argument NSTORED to 1.

C Note: This example shows things that are often done in ensemble tools. The important points to remember from the init_ subroutine are that globalRuntime->maxdtr (the maximum number of traces in an ensemble that will be leaving the exec_ subroutine) is set to 1, the type of data is set to ISTACKED, and the trace number is no longer valid after this process, so ITRNO_VALID is set to 0. The important points in the exec_ subroutine are that the trace header values (particularly the end-of-ensemble flag) are set, and the trace executive is notified that the number of traces to be returned from exec_avo_demo_ is 1. This is accomplished through setting the calling argument *nStored to 1.
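The two notes above can be condensed into a minimal C skeleton of an ensemble tool. This is a sketch, not the avoC.c source: globalRuntime->maxdtr, globalRuntime->itrno_valid, ISTACKED, and the *nStored convention are taken from the notes and from ampRatio.c, while the tool-type constant IENSEMBLE and the data-type field name idtyp are assumed by analogy with the FORTRAN names IENSEMBLEpz and IDTYPz used in avo.f below. The usual #include "cpromax.h" / "cglobal.h" lines and a BEGINPARMS/ENDPARMS block are also assumed.

/* Minimal ensemble-tool skeleton (sketch only; see hedges in the text above). */
void init_my_ens_demo_(int *len_sav, int *itooltype)
{
    globalRuntime->maxdtr = 1;        /* at most one trace leaves exec_ per ensemble  */
    globalRuntime->idtyp = ISTACKED;  /* ASSUMED field name; data type is now stacked */
    globalRuntime->itrno_valid = 0;   /* trace numbers are no longer valid            */

    *len_sav = NPARMS(parms);         /* words to save for re-entrancy                */
    *itooltype = IENSEMBLE;           /* ASSUMED constant name (cf. IENSEMBLEpz)      */
}

void exec_my_ens_demo_(float *traces, int *ithdrs, float *rthdrs, int *nStored)
{
    if (globalRuntime->cleanup) return;

    /* ... reduce the *nStored input traces to one output trace here,        */
    /* setting the output trace headers (including the end-of-ensemble flag) */

    *nStored = 1;                     /* tell the executive one trace is returned     */
}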


avo.menu '( name: AVO_DEMO label: "AVO Demo" value_tab: 35 parameter: AVO_OPT text: "Type of AVO output" type: circular: type_desc: ( ("Slope" 1 "Output AVO slope." ) ("Intercept" 2 "Output AVO intercept." ) ) value: 1 mouse_text: "Use MB1 to rotate between choices for type of AVO output." exec_data: ("AVO_DEMO" ("GENERAL" ("AVO_OPT" implicit: (value 'AVO_OPT)) ) ) )


avo.inc C-----------------------------------------------------------------------------C Include file for AVO_DEMO C-----------------------------------------------------------------------------IMPLICIT NONE #include "global.inc"

&

COMMON /SAVED_PARMS/ SAVE1z, IH_OFFSET, IH_END_ENS, IH_TRACENO, IH_TR_FOLD, IX_X, IX_Y, IX_WT, IOPT

&

INTEGER LENSAVED, IX_X, IX_Y, IH_OFFSET, IH_END_ENS, IH_TRACENO, IH_TR_FOLD, IX_WT, IOPT

C C C C

..... ..... ..... .....

Specify the number of variables to save. This is set here (rather than in the initialization routine) so that programmers don't forget to change it when they are changing the contents of the common block SAVED_PARMS. DATA LENSAVED /8/

C C C C C C C C

..... ..... ..... ..... ..... ..... ..... .....

IH_OFFSET - index in header of offset IH_END_ENS - index in header of end-of-ensemble flag IH_TRACENO - index in header of trace number IH_TR_FOLD - index in header of trace fold IX_X - index of buffer to store X values before linear regression IX_Y - index of buffer to store Y values before linear regression IX_WT - index of buffer to store weight values before linear regression IOPT - option for slope or intercept


avo.f

C------------------------------------------------------------------------------
C
C Description:
C     Standard initialization routine
C
C Output arguments:
C     LEN_SAVE  - number of 4-byte words to save for re-entrancy
C     ITOOLTYPE - processing tool type
C
C------------------------------------------------------------------------------
      SUBROUTINE INIT_AVO_DEMO( LEN_SAV, ITOOLTYPE )

C ..... The include file "avo.inc" contains an include for the global
C ..... parameters ("global.inc")
#include "avo.inc"

      INTEGER LEN_SAV, ITOOLTYPE, IERR, LENGTH, IFORMAT, ISLOPEPZ,
     &        INTERCEPTPZ
      CHARACTER CDESC*32
      PARAMETER ( ISLOPEPZ=1, INTERCEPTPZ=2 )

C ..... Issue a fatal error if the data is already stacked
      IF ( IDTYPz .EQ. ISTACKEDpz ) CALL EX_ERR_FATAL(
     &    'This process cannot operate on stacked data' )

C ..... Call for the input parameters by name.  Allow user to output
C ..... slope or intercept.
      IOPT = 0
      CALL EX_GETPARM( 'AVO_OPT ', 1, IOPT )
      IF ( IOPT .NE. ISLOPEPZ .AND. IOPT .NE. INTERCEPTPZ ) THEN
          CALL EX_ERR_FATAL( 'AVO_OPT not recognized' )
      ENDIF

C ..... Get buffers to store the values before linear regression
      CALL MEM_RESBUFF( MAXDTRz, IX_X, IERR )
      CALL MEM_RESBUFF( MAXDTRz, IX_Y, IERR )
      CALL MEM_RESBUFF( MAXDTRz, IX_WT, IERR )

C ..... Get the indices of the headers that we need
      CALL HDR_NAMINFO( 'OFFSET ', CDESC, LENGTH, IFORMAT,
     &                  IH_OFFSET, IERR )
      IF ( IERR .NE. 0 ) CALL EX_ERR_FATAL(
     &    'OFFSET not found in header' )
      CALL HDR_NAMINFO( 'END_ENS ', CDESC, LENGTH, IFORMAT,
     &                  IH_END_ENS, IERR )
      IF ( IERR .NE. 0 ) CALL EX_ERR_FATAL(
     &    'END_ENS not found in header' )
      CALL HDR_NAMINFO( 'TRACENO ', CDESC, LENGTH, IFORMAT,
     &                  IH_TRACENO, IERR )
      IF ( IERR .NE. 0 ) CALL EX_ERR_FATAL(
     &    'TRACENO not found in header' )
      CALL HDR_NAMINFO( 'TR_FOLD ', CDESC, LENGTH, IFORMAT,
     &                  IH_TR_FOLD, IERR )
      IF ( IERR .NE. 0 ) CALL EX_ERR_FATAL(
     &    'TR_FOLD not found in header' )

C ..... The trace number is no longer valid, since we are now stacked
      ITRNO_VALIDz = 0
C ..... Set the general data type to stacked (not really stacked, but
C ..... more similar to stacked than other types)
      IDTYPz = ISTACKEDpz
C ..... Reset the maximum number of data traces per ensemble, for
C ..... subsequent tools.
      MAXDTRz = 1
C ..... Set the number of words that need to be saved for re-entrancy
      LEN_SAV = LENSAVED
C ..... Set the tool type to ensemble
      ITOOLTYPE = IENSEMBLEpz
      RETURN
      END

C------------------------------------------------------------------------------
C
C Description:
C     Standard execution routine
C
C Input/output arguments:
C     TRACES  - 2-d array of trace samples
C     ITHDRS  - 2-d array of trace headers (as integer)
C     RTHDRS  - 2-d array of trace headers (as floating point)
C     NSTORED - number of stored traces
C
C------------------------------------------------------------------------------
      SUBROUTINE EXEC_AVO_DEMO( TRACES, ITHDRS, RTHDRS, NSTORED )

C ..... The include file "avo.inc" contains an include for the global
C ..... parameters ("global.inc")
#include "avo.inc"
C ..... Include the file that allows use of the "space array"
#include "mem.inc"

      INTEGER NSTORED, ITHDRS(NTHz,NSTORED), INDEX, IERR
      REAL    TRACES(NUMSMPz,NSTORED), RTHDRS(NTHz,NSTORED)

C ..... No action required in cleanup phase
      IF ( CLEANUPz ) RETURN

C ..... Call the actual work routine
      CALL AVO_DEMO_WORK( TRACES, ITHDRS, NUMSMPz, NTHz, NSTORED,
     &                    IH_OFFSET, IOPT, RSPACEz(IX_X),
     &                    RSPACEz(IX_Y), RSPACEz(IX_WT), TRACES )

C ..... The output trace is the end of a one-trace ensemble
      ITHDRS(IH_END_ENS,1) = LASTTRpz
C ..... Assign the "fold"
      RTHDRS(IH_TR_FOLD,1) = FLOAT(NSTORED)

C ..... Make the trace number null
      ITHDRS(IH_TRACENO,1) = INULLpz
C ..... Set the number of output traces
      NSTORED = 1
      RETURN
      END

C------------------------------------------------------------------------------
C
C Actual work routine
C
C------------------------------------------------------------------------------
      SUBROUTINE AVO_DEMO_WORK( TRACES, RTHDRS, NUMSMP, NTH, NSTORED,
     &                          IH_OFFSET, IOPT, X, Y, WT, TRACE )
      IMPLICIT NONE
      INTEGER NUMSMP, NTH, NSTORED, IH_OFFSET, I, J, ISLOPEPZ,
     &        INTERCEPTPZ, IOPT
      REAL    TRACES(NUMSMP,NSTORED), RTHDRS(NTH,NSTORED),
     &        TRACE(NUMSMP), SLOPE, RINTER, X(NSTORED), Y(NSTORED),
     &        WT(NSTORED)
      PARAMETER ( ISLOPEPZ=1, INTERCEPTPZ=2 )

C ..... Special case for one trace
      IF ( NSTORED .EQ. 1 ) THEN
          IF ( IOPT .EQ. ISLOPEPZ ) THEN
C ............. The slope is undefined, but let's use zero
              CALL VFILL( 0.0, TRACE, 1, NUMSMP )
          ELSEIF ( IOPT .EQ. INTERCEPTPZ ) THEN
C ............. The intercept is just the sample values
              CALL VMOV( TRACES, 1, TRACE, 1, NUMSMP )
          ENDIF
          RETURN
      ENDIF

C ..... The X values (offset) will not change, so let's get them once
      DO 100 J=1,NSTORED
          X(J) = RTHDRS(IH_OFFSET,J)
C ......... While we're at it, let's set the weights
          WT(J) = 1.0
  100 CONTINUE

C ..... Loop over all of the samples in the traces
      DO 120 I=1,NUMSMP
C ......... Load the values at a particular time
          DO 110 J=1,NSTORED
              Y(J) = TRACES(I,J)
  110     CONTINUE
C ......... Regress a line through the points
          CALL WT_LIN_REG( X, Y, WT, NSTORED, SLOPE, RINTER )
          IF ( IOPT .EQ. ISLOPEPZ ) THEN
              TRACE(I) = SLOPE
          ELSEIF ( IOPT .EQ. INTERCEPTPZ ) THEN

              TRACE(I) = RINTER
          ENDIF
  120 CONTINUE

      RETURN
      END

C-----------------------------------------------------------------------------C SUBROUTINE WT_LIN_REG( X_IN, Y_IN, WT, NPTS, A, B ) C-----------------------------------------------------------------------------C C Description: C C Weighted linear regression routine. Least squares analysis C is performed to solve y = Ax + B. Does not handle infinite C slope. C C Input arguments: C X_IN - array of X values C Y_IN - array of Y values C WT - array of weights C C Output arguments: C A - slope C B - intercept C C-----------------------------------------------------------------------------SUBROUTINE WT_LIN_REG( X_IN, Y_IN, WT, NPTS, A, B ) IMPLICIT NONE INTEGER I, NPTS REAL X_IN(NPTS), Y_IN(NPTS), WT(NPTS) REAL X, SWX, SWY, SW, SWXY, SWX2, B, A C ..... Initialize: SWX = 0.0 SWY = 0.0 SW = 0.0 SWXY = 0.0 SWX2 = 0.0 C ..... Compute the constants for a weighted linear regression DO 100 I=1, NPTS X = WT(I) * X_IN(I) SWX = SWX + X SWY = SWY + WT(I) * Y_IN(I) SW = SW + WT(I) SWXY = SWXY + X * Y_IN(I) SWX2 = SWX2 + X * X_IN(I) 100 CONTINUE C ..... Compute the slope and intercept A = (SW * SWXY - SWY * SWX) / (SW * SWX2 - SWX * SWX) B = (SWY - A * SWX) / SW RETURN END
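For reference, the sums accumulated by WT_LIN_REG (and by the C version avoDemoWtLinReg later in this appendix) are the terms of the standard weighted least-squares solution of y = Ax + B; the code comments do not spell this out, so the closed form is given here, with SW = sum of w(i), SWX = sum of w(i)*x(i), SWY = sum of w(i)*y(i), SWXY = sum of w(i)*x(i)*y(i), and SWX2 = sum of w(i)*x(i)**2:

    A = \frac{\bigl(\sum_i w_i\bigr)\sum_i w_i x_i y_i \;-\; \bigl(\sum_i w_i y_i\bigr)\sum_i w_i x_i}
             {\bigl(\sum_i w_i\bigr)\sum_i w_i x_i^2 \;-\; \bigl(\sum_i w_i x_i\bigr)^2}
    \qquad
    B = \frac{\sum_i w_i y_i \;-\; A\sum_i w_i x_i}{\sum_i w_i}

The denominator of A goes to zero when all of the (weighted) x values are equal, which is why the routine is documented as not handling an infinite slope.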

avoC.c

/* include ProMAX prototypes and globals */
#include "cpromax.h"
#include "cglobal.h"

/* define the saved parameters (user input, etc) */
BEGINPARMS
    int   outputOpt;   /* flag to output slope or intercept        */
    float *xVals;      /* vector of x values for lin regression    */
    float *yVals;      /* vector of y values for lin regression    */
    float *weights;    /* vector of weights used in lin regression */
ENDPARMS (parms)

/* functions defined and used internally */
static void avoDemoWork( float*, float*, int );
static void avoDemoWtLinReg( float*, float*, float*, int, float*, float* );

/* functions defined elsewhere and used here */
float **fVecTo2d( float*, int, int );

/* define option names */
#define ISLOPE    1
#define INTERCEPT 2

/*-------------------------------------------------------------------*/
/* init_avo_demo
/*
/* initialization routine for ProMAX module avo_demo
/*
/*-------------------------------------------------------------------*/
void init_avo_demo_( int *len_sav, int *itooltype )
{
    /* local versions of external parameters */
    int   outputOpt;
    float *xVals;
    float *yVals;
    float *weights;

    /* local variables */
    int iErr;

    /* connect with global variables */
    GlobalRuntime *gr = globalRuntime;

    /* issue a fatal error if data is already stacked */
    if( gr->idtyp == ISTACKED ){
        exErrFatal("This process does not operate on stacked data.");
    }

    /* see if the user wants to output slope or intercept */
    outputOpt = 0;
    exParGetInt( "AVO_OPT", &outputOpt );

    if( outputOpt != ISLOPE && outputOpt != INTERCEPT ){
        exErrFatal("Output option not recognized.");
    }

    /* allocate space needed for linear regression */
    xVals   = (float*)malloc( gr->maxdtr * sizeof(float));
    yVals   = (float*)malloc( gr->maxdtr * sizeof(float));
    weights = (float*)malloc( gr->maxdtr * sizeof(float));
    if( xVals == NULL || yVals == NULL || weights == NULL ){
        exErrFatal("Memory allocation error in init phase.");
    }

    /* check for the existence of headers we will need */
    if( hdrExists("OFFSET") != 1 ){
        exErrFatal(" 'OFFSET' was not found in the trace headers.");
    }
    if( hdrExists("END_ENS") != 1 ){
        exErrFatal(" 'END_ENS' was not found in the trace headers.");
    }
    if( hdrExists("TRACENO") != 1 ){
        exErrFatal(" 'TRACENO' was not found in the trace headers.");
    }
    if( hdrExists("TR_FOLD") != 1 ){
        exErrFatal(" 'TR_FOLD' was not found in the trace headers.");
    }

    /* the trace number is no longer valid, 1 trace/ensemble output */
    gr->itrno_valid = 0;

    /* set the general data type to stacked.  It is not really stacked */
    /* but it is closer to stacked than any of the other types.        */
    gr->idtyp = ISTACKED;

    /* reset the maximum number of data traces output per ensemble. */
    /* This value is set for subsequent tools                       */
    gr->maxdtr = 1;

    /* set the number of words that need to be saved for re-entrancy */
    *len_sav = NPARMS(parms);

    /* set the tool type */
    *itooltype = IENSEMBLE;

    /* set the external saved parameters */
    parms->outputOpt = outputOpt;
    parms->xVals     = xVals;
    parms->yVals     = yVals;
    parms->weights   = weights;
}

/*-------------------------------------------------------------------*/
/* exec_avo_demo
/*
/* execution routine for ProMAX module avo_demo
/*
/* input and output args:
/*   traces  - the data traces in continuous memory
/*   rthdrs  - floating point trace headers
/*   ithdrs  - integer trace headers
/*   nStored - number of traces in the input array
/*-------------------------------------------------------------------*/
void exec_avo_demo_( float *traces, float *rthdrs, int *ithdrs,
                     int *nStored )
{
    /* local versions of external parameters */
    int   outputOpt = parms->outputOpt;
    float *xVals    = parms->xVals;
    float *yVals    = parms->yVals;
    float *weights  = parms->weights;

    /* local variables */
    int iErr;

    /* connect with global variables */
    GlobalRuntime *gr = globalRuntime;

    /* see if we are in cleanup phase */
    if( gr->cleanup ){
        free( xVals );
        free( yVals );
        free( weights );
        return;
    }

    /* call the actual work routine */
    avoDemoWork( traces, rthdrs, *nStored );

    /* the header being output is the first one in the array. */
    /* the output trace is the last one in the ensemble       */
    ithdrs[hdrIndex("END_ENS")] = LASTTR;

    /* assign the fold */
    rthdrs[hdrIndex("TR_FOLD")] = (float)(*nStored);

    /* make the trace number NULL */
    ithdrs[hdrIndex("TRACENO")] = INULL;

    /* set the number of output traces to be picked up by the */
    /* trace executive and sent to subsequent modules         */
    *nStored = 1;
}

/*-------------------------------------------------------------------*/
/* actual work routine
/* input/output:
/*   traces  - the input traces in continuous memory
/* input:
/*   rthdrs  - floating point header array, in continuous memory
/*   nStored - number of traces in the ensemble
/*-------------------------------------------------------------------*/

void avoDemoWork( float *traces, float *rthdrs, int nStored )
{
    /* local variables */
    int   i, j, iErr;
    float **rhdrs, **tracs;
    float slope, intercept;

    /* connect with global variables */
    GlobalRuntime *gr = globalRuntime;

    /* handle the special case of one trace */
    if( nStored == 1 ){
        if( parms->outputOpt == ISLOPE ){
            /* ..... the slope is undefined, use zero for output */
            vFill( 0.0, traces, 1, gr->numsmp );
            return;   /* nothing to regress with a single trace */
        }
        else{
            /* ..... the intercept is just the sample values in the trace */
            return;
        }
    }

    /* put the 1D array into a more convenient form for C */
    rhdrs = fVecTo2d( rthdrs, nStored, gr->nth );
    tracs = fVecTo2d( traces, nStored, gr->numsmp );

    /* the offset values will not change, get them just once */
    for( i = 0; i < nStored; i++ ){
        parms->xVals[i] = rhdrs[i][hdrIndex("OFFSET")];
        /* .. fill the weighting array while we're at it */
        parms->weights[i] = 1.0;
    }

    /* loop over all the samples */
    for( i = 0; i < gr->numsmp; i++ ){
        /* .. load the sample amplitudes at the current time */
        for( j = 0; j < nStored ; j++ ){
            parms->yVals[j] = tracs[j][i];
        }
        /* .. fit a lsf line through the points */
        avoDemoWtLinReg( parms->xVals, parms->yVals, parms->weights,
                         nStored, &slope, &intercept );
        /* .. output the appropriate value */
        if( parms->outputOpt == ISLOPE ){
            traces[i] = slope;
        }
        else{
            traces[i] = intercept;
        }
    }

    /* free the memory allocated in fVecTo2d */
    free( rhdrs );
    free( tracs );

} /*-------------------------------------------------------------------*/ /* weighted linear regression routine. A least squares analysis is /* performed to solve y=ax+ b. Does not handle infinite slope. /* /* input: /* x_in - array of input x values /* y_in - arrry of input y values /* wt - array of input weights /* npts - number of points in regression /* /* output: /* a - slope /* b - intercept /* /*-------------------------------------------------------------------*/ void avoDemoWtLinReg( float *x_in, float *y_in, float *wt, int npts, float *a, float *b ) { /* variables */ int i; float x, swx, swy, sw, swxy, swx2; /* initialize */ swx = 0.0; swy = 0.0; sw = 0.0; swxy = 0.0; swx2 = 0.0; /* compute the constants for a linear regression */ for( i= 0; i< npts; i++ ){ x = wt[i] * x_in[i]; swx += x; swy += wt[i]*y_in[i]; sw += wt[i]; swxy += x*y_in[i]; swx2 += x*x_in[i]; } /* compute the slope and intercept */ *a = ( (sw*swxy)-(swy*swx))/( (sw*swx2)-(swx*swx)); *b = (swy - (*a * swx) )/sw; }

Trace Interpolation Tools

The following sections show how an ensemble tool can output more traces than it inputs. The tool simply creates and outputs a trace between each of the input traces.
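Before the ProMAX listings, the following stand-alone sketch (hypothetical names, plain C, no ProMAX dependencies) isolates the interleaving arithmetic that both prestk_interp.f and prestk_interp.c use: the trace index is walked backwards so that each input trace is copied and averaged before its slot is overwritten.

    #include <stdio.h>

    #define NSAMP 4                 /* samples per trace (hypothetical) */

    /* Spread nIn traces over 2*nIn - 1 slots, writing the mean of each
     * neighbouring pair into the new slot between them.  The backwards
     * loop over j guarantees trc[j] is read before it is overwritten.  */
    static void interleave( float trc[][NSAMP], int nIn )
    {
        int i, j;
        for( i = 0; i < NSAMP; i++ ){
            for( j = nIn - 1; j > 0; j-- ){
                trc[2*j][i]   = trc[j][i];
                trc[2*j-1][i] = ( trc[j][i] + trc[j-1][i] ) / 2.0f;
            }
        }
    }

    int main( void )
    {
        /* three input traces with room for 2*3 - 1 = 5 output traces */
        float trc[5][NSAMP] = { {1,1,1,1}, {3,3,3,3}, {5,5,5,5} };
        int j;

        interleave( trc, 3 );
        for( j = 0; j < 5; j++ )
            printf( "trace %d starts with %g\n", j, trc[j][0] );  /* 1 2 3 4 5 */
        return 0;
    }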

prestk_interp.menu

'( name:      PRESTK_INTERP
   label:     "Prestack Interpolation"
   value_tab: 35

   exec_data: ("PRESTK_INTERP"
     ("GENERAL"
       ("dummy" implicit: 1)
     )
   )
 )

prestk_interp.inc

C------------------------------------------------------------------------------
C Include file for PRESTK_INTERP
C------------------------------------------------------------------------------
      IMPLICIT NONE
#include "global.inc"

      COMMON /SAVED_PARMS/ SAVE1z
      INTEGER LENSAVED
      DATA LENSAVED /1/

prestk_interp.f

C ..... This is an example of an ensemble tool that outputs MORE traces
C ..... than it inputs.  It is a simple-minded pre-stack trace interpolator
C ..... that outputs one trace between every existing pair of traces (as
C ..... the simple mean of the sample values).

      SUBROUTINE INIT_PRESTK_INTERP( LEN_SAV, ITOOLTYPE )

#include "prestk_interp.inc"

      INTEGER LEN_SAV, ITOOLTYPE

      IF ( IDTYPz .EQ. ISTACKEDpz ) CALL EX_ERR_FATAL(
     &    'This process not intended for stacked data' )

C ..... The trace number is no longer valid, since we are adding new traces
C ..... that have no corresponding slots in the database.
      ITRNO_VALIDz = 0
C ..... Reset the maximum number of data traces per ensemble
      MAXDTRz = MAXDTRz*2 - 1
C ..... Set the number of words for re-entrancy and the tool type
      LEN_SAV = LENSAVED
      ITOOLTYPE = IENSEMBLEpz
      RETURN
      END

      SUBROUTINE EXEC_PRESTK_INTERP( TRACES, ITHDRS, RTHDRS, NSTORED )

#include "prestk_interp.inc"

      INTEGER NSTORED, ITHDRS(NTHz,NSTORED), I, J
      REAL    TRACES(NUMSMPz,NSTORED), RTHDRS(NTHz,NSTORED)

C ..... No action required in cleanup phase
      IF ( CLEANUPz ) RETURN

C ..... Can't interpolate with just one trace in the ensemble
      IF ( NSTORED .EQ. 1 ) RETURN

C ..... Interpolate the sample values
      DO 110 I=1,NUMSMPz
          DO 100 J=NSTORED,2,-1
              TRACES(I,J*2-1) = TRACES(I,J)
              TRACES(I,J*2-2) = (TRACES(I,J) + TRACES(I,J-1)) / 2.0
  100     CONTINUE
  110 CONTINUE

C ..... Take care of the headers too (interpolated traces will have a copy
C ..... of the header just before them)
      DO 130 I=1,NTHz
          DO 120 J=NSTORED,2,-1
              ITHDRS(I,J*2-1) = ITHDRS(I,J)
              ITHDRS(I,J*2-2) = ITHDRS(I,J-1)
  120     CONTINUE
  130 CONTINUE

C ..... Set the number of output traces
      NSTORED = NSTORED*2 - 1
      RETURN
      END

prestk_interp.c /* include ProMAX prototypes and globals */ #include "cpromax.h" #include "cglobal.h" /* define saved parameters */ BEGINPARMS int dummy; /* doesn't actually have to be here */ ENDPARMS(parms) void init_prestk_interp_(int *len_sav, int *itooltype); void exec_prestk_interp_(float *trace, int *ithdr, float *rthdr, int* nStored);

/*------------------------------------------------------------------Description: Initialization routine for prestack interp output arguments: len_save - number of 4-byte words to save for re-entrancy itooltype - processing tool type ---------------------------------------------------------------------*/ void init_prestk_interp_(int *len_sav, int *itooltype) { /* get access to global runtime variables */ GlobalRuntime *gr = globalRuntime; /* The trace number is no longer valid, since we are adding new traces */ /* that have no corresponding slots in the database.*/ gr->itrno_valid = FALSE; /* Reset the maximum number of data traces per ensemble */ gr->maxdtr = 2*gr->maxdtr - 1;

    /* Set the number of words that need to be saved for re-entrancy. */
    *len_sav = NPARMS (parms);

    /* set the tool type */
    *itooltype = IENSEMBLE;
}

/********************************************************************* * * Description: * Execution routine for prestack interp

 *
 * Input/output arguments:
 *   traces  - array of trace samples
 *   ithdrs  - trace header (as integer)
 *   rthdrs  - trace header (as floating point)
 *   nStored - number of traces input and output
 *
 **********************************************************************/

void exec_prestk_interp_(float *traces, int *ithdrs, float *rthdrs, int*nStored) { GlobalRuntime *gr = globalRuntime; int i, j; /* A place to store the 2-d input array, these are just pointers */ /* to locations in the input traces and rthdrs arrays. */ float **trcs, **rhdrs; /* No action required in cleanup phase */ if( gr->cleanup ){ return; } /* Can't interpolate with just one trace in the ensemble */ if( *nStored == 1 ){ return; }

    /* Put all available trace and header locations into an array */
    /* that is easy to handle.                                     */
    trcs  = fVecTo2d( traces, 2*(*nStored)-1, gr->numsmp );
    rhdrs = fVecTo2d( rthdrs, 2*(*nStored)-1, gr->nth );

/* Interpolate the sample values */ for( i = 0; i < gr->numsmp; i++ ){ for( j = *nStored-1; j > 0; j-- ){ trcs[2*j][i] = trcs[j][i]; trcs[2*j-1][i] = (trcs[j][i] + trcs[j-1][i])/2.; } } /* Take care of the headers too, interpolated traces will have a copy */ /* of the header just before them */ for( i = 0; i < gr->nth; i++ ){ for( j = *nStored-1; j > 0; j-- ){ rhdrs[2*j][i] = rhdrs[j][i]; rhdrs[2*j-1][i] = rhdrs[j-1][i]; } } /* Set the number of output traces */ *nStored = 2*(*nStored) - 1;

free( trcs ); free( rhdrs );

}

Appendix: Panel Tool Examples

This appendix provides examples of panel tools. The examples demonstrate several important points:

•   the panel parameters must be chosen

•   the panel parameters must be reported to the trace executive through the routine EX_PANEL_PARMS, and the panel parameters can be changed by EX_PANEL_PARMS

•   the programmer should be aware of how to handle the padded trace array

A minimal sketch of these points appears below, ahead of the full listings. The examples also demonstrate how panels overlap and mix, depending upon the panel parameters chosen. A menu file is presented first, followed by FORTRAN and C examples. The menu file serves both the C and FORTRAN code.
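The following is a minimal sketch of those points only, written against the same cpromax.h interface used by the listings in this appendix. The routine names are hypothetical, error handling is omitted, and it is not a complete tool; it simply shows the parameters being reported through exPanelParms and the padded array being sized in the exec phase.

    #include "cpromax.h"
    #include "cglobal.h"

    BEGINPARMS
        int nPadTraces;     /* trace padding, saved for the exec phase  */
        int nPadSamples;    /* sample padding, saved for the exec phase */
    ENDPARMS (parms)

    void init_panel_sketch_( int *len_sav, int *itooltype )
    {
        int panelSize, panelEdge, panelMix;

        exParGetInt( "PANLSIZE", &panelSize );
        exParGetInt( "PANLEDGE", &panelEdge );
        exParGetInt( "PANL_MIX", &panelMix );
        exParGetInt( "PANL_TPD", &parms->nPadTraces );
        exParGetInt( "PANL_SPD", &parms->nPadSamples );

        /* report the panel parameters; the executive is free to hand   */
        /* back different values, so the saved padding is taken from    */
        /* the arguments after the call                                 */
        exPanelParms( &panelSize, &panelEdge, &panelMix,
                      &parms->nPadTraces, &parms->nPadSamples );

        *len_sav   = NPARMS(parms);
        *itooltype = IPANEL;
    }

    void exec_panel_sketch_( float *traces, float *rthdrs, int *ithdrs,
                             int *nStored )
    {
        GlobalRuntime *gr = globalRuntime;
        int   nSamples, nTraces;
        float **trcs;

        if( gr->cleanup ) return;

        /* gr->numsmp and *nStored describe only the live data; the     */
        /* padded dimensions must be computed explicitly                */
        nSamples = gr->numsmp + parms->nPadSamples;
        nTraces  = *nStored   + parms->nPadTraces;

        trcs = fVecTo2d( traces, nTraces, nSamples );
        /* ... operate on trcs[0..nTraces-1][0..nSamples-1] here ... */
        free( trcs );
    }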

Topics covered in this appendix:

➲ panel_test.menu
➲ panel_test.inc
➲ panel_test.f
➲ panelTest.c

panel_test.menu

'( name:      PANEL_TEST
   label:     "Panel Test"
   value_tab: 42

   parameter:  PANLSIZE
     text:       "Panel size"
     type:       typein:
     type_desc:  ( int: 7 nil nil )
     value:      21
     mouse_text  "Enter the number of traces per panel."

   parameter:  PANLEDGE
     text:       "Panel edge"
     type:       typein:
     type_desc:  ( int: 7 nil nil )
     value:      5
     mouse_text  "Enter the size of the panel edge."

   parameter:  PANL_MIX
     text:       "Panel mix"
     type:       typein:
     type_desc:  ( int: 7 nil nil )
     value:      0
     mouse_text  "Enter the size of the panel mix."

   parameter:  PANL_TPD
     text:       "Panel trace pad"
     type:       typein:
     type_desc:  ( int: 7 nil nil )
     value:      0
     mouse_text  "Enter the size of the panel trace pad."

   parameter:  PANL_SPD
     text:       "Panel sample pad"
     type:       typein:
     type_desc:  ( int: 7 nil nil )
     value:      0
     mouse_text  "Enter the size of the panel sample pad."

   exec_data: ("PANEL_TEST"
     ("GENERAL"
       ("version"  implicit: "%Z%%M% %I% %G%" )
       ("PANLSIZE" implicit: (value 'PANLSIZE))
       ("PANLEDGE" implicit: (value 'PANLEDGE))
       ("PANL_MIX" implicit: (value 'PANL_MIX))
       ("PANL_TPD" implicit: (value 'PANL_TPD))
       ("PANL_SPD" implicit: (value 'PANL_SPD))
     )
   )
 )

panel_test.inc

C------------------------------------------------------------------------------
C Include file for PANEL_TEST
C------------------------------------------------------------------------------
      IMPLICIT NONE
#include "global.inc"

      COMMON /SAVED_PARMS/ SAVE1z, RVAL, NPAD_SAMPS, NPAD_TRACES

      INTEGER LENSAVED, NPAD_SAMPS, NPAD_TRACES
      REAL    RVAL

C ..... Specify the number of variables to save
      DATA LENSAVED /4/

C ..... SCCS: @(#)panel_test.inc  50.1  3/10/94

panel_test.f

      SUBROUTINE INIT_PANEL_TEST( LEN_SAV, ITOOLTYPE )

#include "panel_test.inc"

      INTEGER LEN_SAV, ITOOLTYPE, NPANEL_SIZE, NPANEL_EDGE,
     &        NPANEL_MIX
      CHARACTER CSCCS_KEY*50

      CALL EX_GETPARM( 'PANLSIZE', 1, NPANEL_SIZE )
      CALL EX_GETPARM( 'PANLEDGE', 1, NPANEL_EDGE )
      CALL EX_GETPARM( 'PANL_MIX', 1, NPANEL_MIX )
C ..... Get padding in time and traces, send through .inc file
      CALL EX_GETPARM( 'PANL_TPD', 1, NPAD_TRACES )
      CALL EX_GETPARM( 'PANL_SPD', 1, NPAD_SAMPS )

C ..... Set the panel parameters (and allow them to be returned with a
C ..... different value).
      CALL EX_PANEL_PARMS( NPANEL_SIZE, NPANEL_EDGE, NPANEL_MIX,
     &                     NPAD_TRACES, NPAD_SAMPS )

      RVAL = 1.0
      LEN_SAV = LENSAVED
      ITOOLTYPE = IPANELpz
      RETURN
      END

      SUBROUTINE EXEC_PANEL_TEST( TRACES, ITHDRS, RTHDRS, NSTORED )

#include "panel_test.inc"

      INTEGER NSTORED, ITHDRS(NTHz,NSTORED), I, J
      REAL    TRACES(NUMSMPz+NPAD_SAMPS,NSTORED), RTHDRS(NTHz,NSTORED)

      IF ( CLEANUPz ) RETURN

      DO 110 J=1,NSTORED
          DO 100 I=1,NUMSMPz+NPAD_SAMPS
              TRACES(I,J) = RVAL
  100     CONTINUE
  110 CONTINUE

C ..... increase the sample value for this panel
      RVAL = RVAL + 1.0

C ..... Write values to padded traces, you will not see the result,
C ..... this is just to demonstrate where the padded traces are.
      DO 130 J = NSTORED+1, NSTORED+NPAD_TRACES
          DO 120 I = 1, NUMSMPz+NPAD_SAMPS
              TRACES(I,J) = RVAL
  120     CONTINUE

  130 CONTINUE

      RETURN
      END

panelTest.c

/* include ProMAX prototypes and globals */
#include "cpromax.h"
#include "cglobal.h"

/* define the saved parameters (user input, etc) */
BEGINPARMS
    int   nPadTraces;    /* number of padding traces  */
    int   nPadSamples;   /* number of padding samples */
    float sampValue;     /* value of trace samples    */
ENDPARMS (parms)

/*-------------------------------------------------------------------*/
/* init_panel_test
/*
/* initialization routine for ProMAX module panel_test
/*
/*-------------------------------------------------------------------*/
void init_panel_test_( int *len_sav, int *itooltype )
{
    /* local variables */
    int panelSize;
    int panelEdge;
    int panelMix;

    /* local versions of external parms */
    int nPadTraces;
    int nPadSamples;

    /* get the panel parameters */
    exParGetInt( "PANLSIZE", &panelSize );
    exParGetInt( "PANLEDGE", &panelEdge );
    exParGetInt( "PANL_MIX", &panelMix );

    /* get the padding parameters and pass to exec */
    exParGetInt( "PANL_TPD", &nPadTraces );
    exParGetInt( "PANL_SPD", &nPadSamples );
    parms->nPadTraces  = nPadTraces;
    parms->nPadSamples = nPadSamples;

    /* set the panel parameters.  Note that the argument values can be */
    /* changed by this routine                                         */
    exPanelParms( &panelSize, &panelEdge, &panelMix,
                  &nPadTraces, &nPadSamples );

    /* initialize the trace sample value */
    parms->sampValue = 1.0;

    /* set the number of words that need to be saved for re-entrancy */
    *len_sav = NPARMS(parms);

    /* set the tool type to panel */
    *itooltype = IPANEL;
}

/*-------------------------------------------------------------------*/
/* exec_panelTest
/*
/* execution routine for ProMAX module panel_test
/* input and output args:
/*   traces  - the data traces in continuous memory
/*   rthdrs  - floating point trace headers
/*   ithdrs  - integer trace headers
/*   nStored - number of traces in the input array panel
/*-------------------------------------------------------------------*/
void exec_panel_test_( float *traces, float *rthdrs, int *ithdrs,
                       int *nStored )
{
    /* local variables */
    float **tracs;
    int i,j;
    int nTraces, nSamples;

    /* connect with global variables */
    GlobalRuntime *gr = globalRuntime;

    /* see if we are in cleanup phase */
    if( gr->cleanup ){
        return;
    }

    /* calculate the number of samples per trace in the             */
    /* padded array.  The trace length in the padded array is NOT   */
    /* reflected in the global parameter gr->numsmp.  The programmer*/
    /* must account for the padding.                                 */
    nSamples = gr->numsmp + parms->nPadSamples;

/* calculate the number of traces that are actually in the array. */ /* The value *nStored is the number of live data traces. The */ /* programmer must also keep track of the padding traces. */ nTraces = *nStored + parms->nPadTraces; /* put the 1D traces array into a more convenient form for C */ tracs = fVecTo2d( traces, nTraces, nSamples ); /* set the trace sample values for the live traces in the panel */ for( i = 0; i < *nStored; i++ ){ for( j = 0; j < nSamples; j++ ){ tracs[i][j] = parms->sampValue; } } /* set the trace sample values for the padding traces. You won't */ /* see the result of this, it is just for demonstration of where */ /* the padding traces are located in the trace array */ for( i = *nStored; i < nTraces; i++ ){

for( j = 0; j < nSamples; j++ ){ tracs[i][j] = parms->sampValue; } } /* increase the trace sample value for the next panel */ parms->sampValue += 1.0; /* free the memory allocated by fVecTo2d() */ free( tracs ); }

Appendix: Single Buffer Tool Examples

This appendix provides examples of two types of single buffer tools. The first type is an ensemble definition program in FORTRAN. The second type is a pre-stack trace interpolation program that has the same output as prestk_interp, a C language ensemble tool.
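As an orientation to the listings, here is a bare-bones sketch, not a working tool, of the flow/exec split that defines a single buffer tool. The routine names and the ensemble-complete test are hypothetical placeholders; the argument lists and the nOutput/nOverlap convention follow interp_sb.c below.

    #include "cpromax.h"
    #include "cglobal.h"

    BEGINPARMS
        int nInEnsemble;        /* traces handed to the exec phase */
    ENDPARMS (parms)

    /* hypothetical "is the buffered ensemble complete?" test; a real */
    /* tool keys off a header word or the primary sort key            */
    static int ensembleComplete( int *ithdrs, int nStored )
    {
        return nStored >= 2;    /* placeholder condition */
    }

    /* flow phase: called as traces accumulate in the (single) buffer.   */
    /* nOutput == 0 tells the executive to keep buffering; a non-zero    */
    /* nOutput releases traces to exec, and nOverlap traces are carried  */
    /* over as the start of the next buffer.                             */
    void flow_sb_sketch_( float *traces, int *ithdrs, float *rthdrs,
                          int *nStored, int *ifound_eoj,
                          int *nOutput, int *nOverlap )
    {
        if( *ifound_eoj ){                  /* end of job: flush the rest */
            parms->nInEnsemble = *nStored;
            *nOutput  = *nStored;
            *nOverlap = 0;
        }
        else if( ensembleComplete( ithdrs, *nStored ) ){
            parms->nInEnsemble = *nStored - 1;
            *nOutput  = *nStored - 1;       /* pass these traces to exec  */
            *nOverlap = 1;                  /* keep the newest trace      */
        }
        else{
            *nOutput = 0;                   /* not ready yet              */
        }
    }

    /* exec phase: entered once flow reports a non-zero nOutput; the     */
    /* same buffer serves for input and output.                          */
    void exec_sb_sketch_( float *traces, int *ithdrs, float *rthdrs,
                          int *nStored )
    {
        GlobalRuntime *gr = globalRuntime;

        if( gr->cleanup ) return;
        /* ... modify the first parms->nInEnsemble traces in place ... */
    }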

Topics covered in this appendix:

➲ ens_define.menu
➲ ens_define.inc
➲ ens_define.f
➲ interp_sb.menu
➲ interp_sb.c

ens_define.menu '( name: ENS_DEFINE label: "Ensemble Re-define" value_tab: 51

parameter: PRIM_KEY text: "Select PRIMARY key to re-define output ensembles" type: function: type_desc: (header_list headers) value: "NONE " selected_item: "**INVALID**" mouse_text: "Use MB1 to select a header word from the headers menu as the PRIMARY key for re-defining input ensembles."

parameter: MAXTR text: "Maximum traces per output ensemble" type: typein: type_desc: ( int: 5 1 nil ) value: 0 mouse_text: "What is the maximum number of traces per output ensemble AFTER redefining the ensembes?"

   exec_data: ("ENS_DEFINE"
     ("GENERAL"
       ("version"  implicit: "@(#)ens_define.menu 40.1 11/24/92" )
       ("PRIM_KEY" implicit: (value 'PRIM_KEY))
       ("MAXTR"    implicit: (value 'MAXTR))
     )
   )
 )

ens_define.inc

C------------------------------------------------------------------------------
C Include file for ENS_DEFINE.
C
C Original code by S. Rutt Bridges, April 30, 1991.
C------------------------------------------------------------------------------
      IMPLICIT NONE
#include "global.inc"

      COMMON /SAVED_PARMS/ SAVE1z, IH_PRIM, ITRC_COUNT, IX_TRACE,
     &       IX_THDR, IFIRST_ENTRY, NOUT_EXEC

      INTEGER IH_PRIM, ITRC_COUNT, IX_TRACE, IX_THDR, IFIRST_ENTRY
      INTEGER LENSAVED, NOUT_EXEC
      DATA LENSAVED /7/

C ..... IH_PRIM      - index of the primary key used to merge ensembles
C ..... ITRC_COUNT   - counter for the number of dumped traces in an ensemble
C ..... IX_TRACE     - address of the memory used to hold the last trace
C ..... IX_THDR      - address of the memory used to hold the last trace header
C ..... IFIRST_ENTRY - value is 1 the very first time the executive is entered
C ..... Number of traces to output in exec phase
C ..... SCCS: @(#)ens_define.inc 31.4 6/18/92

ens_define.f C-----------------------------------------------------------------------------C C Initialization routine for ENS_DEFINE C C ENS_DEFINE is an ensemble definition tool. It resets ensemble flags C based on a change in a header word. Output ensembles may be larger C or smaller, depending on the changes in the header word. Examples of C the use of this tool: C C 1. In the Ensemble Decon Parameter Test macro, we input data that C has a primary sort key of 'REPEAT' and a secondary sort key of C 'CDP'. For the subsequent CDP stack, we need to have ensembles C defined by CDP. ENS_DEFINE provides this function. C C 2. Typically for Radon Filtering, we need to use Ensemble Split to C seperate the positive and negative offsets. However, for a C subsequent CDP stack these traces need to form a single ensemble. C ENS_DEFINE provides this function. C C 3. To create a single ensemble from a CDP stack, you should use the C special option of setting the primary header key word to NONE C which will force the entire dataset to be combined into a single C ensemble. C C Output parameters: C LEN_SAV - the number of parameters that must be saved C ITOOLTYPE - the tool type C C Original code by S. Rutt Bridges, April 30, 1991. C-----------------------------------------------------------------------------SUBROUTINE INIT_ENS_DEFINE( LEN_SAV, ITOOLTYPE ) #include "ens_define.inc" #include "runtime.inc" CHARACTER CDESC*32, CPRIM_KEY*8 INTEGER LEN_SAV, ITOOLTYPE, IERR, LENGTH, IFORMAT, NCHARS INTEGER MAXDTR, IOPTION CHARACTER CSCCS_KEY*50 DATA CSCCS_KEY /'@(#)ens_define.f 31.5 6/18/92'/ C ..... issue a stern warning if the tool is within an IF conditional IF (IF_DEPTHrz.NE.0) CALL EX_ERR_HELP( 'Warning: Ensemble ' & // 'Re-define should not be used within IF''s !' & //'|Click here to CONTINUE (job may fail)' & //'|Click here to STOP'//CNULLpz, IOPTION ) IF (IOPTION.EQ.3) CALL EX_ERR_STOP( & 'Stopping execution as requested...' ) C ..... initialize for the exec phase ITRC_COUNT = 0 IFIRST_ENTRY = 1 C ..... get the primary key name for ensemble redefinition

CALL EX_CGETPARM( 'PRIM_KEY', 1, CPRIM_KEY, NCHARS ) cdd - this in now allowed, as a way of indicating that we don't care what the cdd primary key is c IF (CPRIM_KEY(1:4) .EQ. 'NONE') c & CALL EX_ERR_FATAL( 'Primary key must be specified.' ) IF ( CPRIM_KEY(1:4) .EQ. 'NONE' ) THEN IH_PRIM = 0 ELSE C ......... get the index in the trace header of the primary key CALL HDR_NAMINFO( CPRIM_KEY, CDESC, LENGTH, IFORMAT, & IH_PRIM, IERR ) IF (IERR.NE.0) CALL EX_ERR_FATAL('Primary key '//CPRIM_KEY// & ' does not occur in the trace header.') ENDIF C ..... reset the primary ensemble sort order IF (CPRIM_KEY(1:3).EQ.'CDP') THEN IPSORTz = ICDPpz ELSEIF (CPRIM_KEY(1:6).EQ.'SOURCE') THEN IPSORTz = ISINpz ELSEIF (CPRIM_KEY(1:3).EQ.'SIN') THEN IPSORTz = ISINpz ELSEIF (CPRIM_KEY(1:4).EQ.'FFID') THEN IPSORTz = ISINpz ELSEIF (CPRIM_KEY(1:8).EQ.'REC_SLOC') THEN IPSORTz = IRECSLOCpz ELSEIF (CPRIM_KEY(1:4).EQ.'CHAN') THEN IPSORTz = ICHANpz ELSEIF (CPRIM_KEY(1:6).EQ.'OFFSET') THEN IPSORTz = IOFFSETpz ELSEIF (CPRIM_KEY(1:7).EQ.'AOFFSET') THEN IPSORTz = IOFFSETpz ELSE IPSORTz = IUNKNOWNpz ENDIF C ..... get the maximum number of traces per output ensemble CALL EX_GETPARM( 'MAXTR ', 1, MAXDTR ) IF (MAXDTR.LE.0) CALL EX_ERR_FATAL( 'The maximum number of ' & // 'traces per output ensemble MUST be specified!' ) MAXDTRz = MAXDTR C ..... Set the maximum number of traces to buffer, and the trace and C ..... sample padding. CALL EX_BUFF_PARMS( MAXDTRz+1, 0, 0 ) C ..... This tool can process trace headers only CALL EX_THDRONLY_OK C ..... set the number of words that need to be saved and the tool type LEN_SAV = LENSAVED ITOOLTYPE = ISNL_BUFFpz RETURN END

C------------------------------------------------------------------------------
C Flow routine for ENS_DEFINE.
C------------------------------------------------------------------------------
      SUBROUTINE FLOW_ENS_DEFINE( TRACES, ITHDRS, RTHDRS, NSTORED,
     &                            IFOUND_EOJ, NOUTPUT, NOVERLAP )

#include "ens_define.inc" INTEGER NSTORED, ITHDRS(NTHz,NSTORED), IFOUND_EOJ, NOUTPUT, & NOVERLAP, J REAL TRACES(NUMSMPz,NSTORED), RTHDRS(NTHz,NSTORED) C ..... Initialize the number to output to zero (not ready yet) NOUTPUT = 0 IF ( IFOUND_EOJ .EQ. 1 ) THEN C ......... We have reached the end of the data, output whatever we have NOUTPUT = NSTORED RETURN ENDIF IF ( IH_PRIM .NE. 0 ) THEN C ......... We are looking for the primary key value to change DO 100 J=2,NSTORED IF ( ITHDRS(IH_PRIM,J) .NE. ITHDRS(IH_PRIM,J-1) ) THEN C ................. Output up to the change NOUTPUT = J-1 NOUT_EXEC = NOUTPUT C ................. Save what it not output for next time NOVERLAP = NSTORED - NOUTPUT RETURN ENDIF 100 CONTINUE ENDIF RETURN END

C-----------------------------------------------------------------------------C C Execution routine for ENS_DEFINE C C Input parameters: C NTR_BUFF - size of the trace buffer (MAX(NSTORED, NOUTPUT) C from FLOW_ENS_DEFINE C C Input/output parameters: C ITHDRS - 2-d array of INTEGER*4 trace header entries C RTHDRS - 2-d array of REAL*4 trace header entries C C Unused parameters: C TRACES - 2-D array of trace samples C C Original code by S. Rutt Bridges, April 30, 1991.

C Rewritten as a single buffered tools by D.E. Diller, May 12, 1992 C C-----------------------------------------------------------------------------SUBROUTINE EXEC_ENS_DEFINE( TRACES, ITHDRS, RTHDRS, NTR_BUFF ) #include "ens_define.inc" #include "header.inc" INTEGER NTR_BUFF, J, ITHDRS(NTHz,NTR_BUFF) REAL TRACES(NUMSMPz,NTR_BUFF), RTHDRS(NTHz,NTR_BUFF) C ..... no action needed for cleanup mode IF (CLEANUPz) RETURN C ..... Set the header values DO 100 J=1,NOUT_EXEC ITHDRS(ISEQNOz,J) = J ITHDRS(IEND_ENSz,J) = NLASTpz 100 CONTINUE ITHDRS(IEND_ENSz,NOUT_EXEC) = LASTTRpz RETURN END

interp_sb.menu

'( name:      INTERP_SB
   label:     "Single Buffer Trace Interpolation"
   value_tab: 35

   exec_data: ("INTERP_SB"
     ("GENERAL"
       ("dummy" implicit: 1)
     )
   )
 )

interp_sb.c /*--------------------------------------------------------------------------*/ /* interp_sb /* An example of a single buffer tool in C. The program collects an ensemble /* of data traces then passes the ensemble on to the exec_ subroutine. /* The program uses a change in Primary sort key value to detect that /* there has been a change in the ensemble rather than by using the END_ENS /* flag. This is done for demonstration of the nOverlap variable in /* the flow routine. The output data traces are the original traces /* plus a trace that is linearly interpolated between each input trace. /* The output ensemble therefore has 2*N - 1 traces where /* N is the number of input traces. /* /* The example called interp_db.c is identical to this example except that /* this is a single buffer tool and therefore the input trace buffer also /* serves as the ouptut trace buffer. /*-------------------------------------------------------------------------*/ /* include promax interface, globals, error codes, etc */ #include "cpromax.h" #include "cglobal.h" /* define saved parameters */ BEGINPARMS int nInEnsemble; /* the number of traces passed to the exec subroutine */ int pKeyHdrIndx; /* index of the primary sort key trace header (FFID, CDP etc) */ ENDPARMS(parms) /*-------------------------------------------------------------------*/ /* /* description: /* initialization routine for interp_sb: /* /* output arguments: /* len_sav - length of the saved common block /* itooltype - type of tool /* /*-------------------------------------------------------------------*/ void init_interp_sb_( len_sav, itooltype ) int *len_sav, *itooltype; { int max_to_buffer, nPadTraces, nPadSamples; /* set the global pointers */ GlobalRuntime *gr = globalRuntime; /* set the number of words that need to be saved */ *len_sav = NPARMS(parms);

    /* Get the header array index for the primary sort key of the input data. */
    /* Note that 1 is subtracted from the value of globalRuntime->ipkey       */
    /* to obtain the index of the primary sort key that is appropriate for    */
    /* the C language.  The global variable trace indices are indexed for     */
    /* FORTRAN.  This rule is true for globalRuntime->iskey as well as        */
    /* the standard headers.                                                  */
    parms->pKeyHdrIndx = gr->ipkey - 1;

    /* Notify trace exec of maximum number of traces it will have to hold for  */
    /* this module.  The ex_buff_parms_() routine must be called in the init   */
    /* subroutine of any buffered tool.  NOTE that the routine is a FORTRAN    */
    /* routine that is being directly called by this C routine so the          */
    /* addresses of the variables are passed.                                  */
    max_to_buffer = 2*gr->maxdtr;
    nPadTraces    = 0;
    nPadSamples   = 0;
    ex_buff_parms_( &max_to_buffer, &nPadTraces, &nPadSamples );

/* we are going to output more traces per ensemble than are input */ gr->maxdtr = (2*gr->maxdtr) - 1; /* set the tool type */ *itooltype = ISNL_BUFF; }

/*--------------------------------------------------------------------*/ /* /* flow tool for interp_sb /* /* input args /* traces - 2d array of data traces /* ithdrs - 2-d array of input integer trace headers /* rthdrs - 2-d array of input float trace headers /* /* /*--------------------------------------------------------------------*/ flow_interp_sb_( float *traces, int *ithdrs, float *rthdrs, int *nStored, int *ifound_eoj, int *nOutput, int *nOverlap ) { GlobalRuntime *gr = globalRuntime; float **rhdrs;

    if( *nStored == 1 ){
        /* .. There is only one trace so we can't compare it to anything.  This   */
        /* .. situation should only occur on the first call to the routine unless */
        /* .. the last trace input is a single-trace ensemble, in which case more */
        /* .. code would be needed to handle that special case.  We leave out that*/
        /* .. extra code here as it would clutter the example.                    */
        *nOutput = 0;
        return;
    }

/* put the trace headers into a 2d array that is easy to handle */ rhdrs = fVecTo2d( rthdrs, *nStored, gr->nth ); /* see if the primary key header value of most recent input trace has changed */ if( rhdrs[*nStored-2][parms->pKeyHdrIndx] != rhdrs[*nStored-1][parms>pKeyHdrIndx] || *ifound_eoj != 0 ){ /* .. we have found enough traces to process */ /* .. Set the number of traces to output from exec, a non-zero value tells */ /* .. the trace executive that it is time to pass the traces to */ /* .. exec_interp_sb_ */ *nOutput = 2*(*nStored-1) - 1; /* .. set the number of traces that are being passed to the exec_ subroutine */ parms->nInEnsemble = *nStored - 1; /* .. Let the last trace input on this call be the first trace input */ /* .. on the next call since it is part of the next ensemble that we */ /* .. will be collecting. */ *nOverlap = 1; } else{ /* .. signal the trace executive that we aren't ready to pass traces exec_ */ *nOutput = 0; } free( rhdrs );

} /*-------------------------------------------------------------------*/ /* /* description: /* exectution routine for interp_sb /* /* input arguments: /* nStored - the amount of memory available (in traces) in the /* traces array. NOTE that the actual number of traces /* input was determined in the flow_interp_sb routine and /* passed to exec_interp_sb via the parms->nInEnsemble /* variable. /* /* input/output arguments: /* traces - 2-d array of traces /* ithdrs - 2-d array of integer trace headers /* rthdrs - 2-d array of float trace headers /* /*-------------------------------------------------------------------*/ void exec_interp_sb_( float *traces, int *ithdrs, float *rthdrs, int *nStored ) { float **rhdrs, **tracs;

int i,j; /* set the global pointers */ GlobalRuntime *gr = globalRuntime; /* if cleaning up release memory, close files, in this case there is nothing to do */ if( gr->cleanup){ return; }

/* Arrange the trace and header arrays into 2d arrays that are easy to handle. */ tracs = fVecTo2d( traces, *nStored, gr->numsmp ); rhdrs = fVecTo2d( rthdrs, *nStored, gr->nth ); /* Interpolate the sample values */ for( i = 0; i < gr->numsmp; i++ ){ for( j = parms->nInEnsemble-1; j > 0; j-- ){ tracs[2*j][i] = tracs[j][i]; tracs[2*j-1][i] = (tracs[j][i] + tracs[j-1][i])/2.; } } /* Take care of the headers too, interpolated traces will have a copy */ /* of the header just before them */ for( i = 0; i < gr->nth; i++ ){ for( j = parms->nInEnsemble-1; j > 0; j-- ){ rhdrs[2*j][i] = rhdrs[j][i]; rhdrs[2*j-1][i] = rhdrs[j-1][i]; } } free( tracs ); free( rhdrs ); }

Appendix: Double Buffer Tool Examples

This appendix provides examples of double buffer tools. The FORTRAN example calculates semblance for an input CDP gather. The C example is a pre-stack interpolation routine that outputs the same result as interp_sb.c, a single buffer tool.
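For orientation, here is a minimal sketch of a double buffer exec phase in C. The argument order is assumed to mirror the FORTRAN EXEC_SEMBLANCE example (input buffer and its size, then output buffer and its size); the routine name is hypothetical, and the body only copies input traces to the output buffer to show that the two buffers are distinct.

    #include "cpromax.h"
    #include "cglobal.h"

    void exec_db_sketch_( float *traces_in,  int *ithdrs_in,
                          float *rthdrs_in,  int *nTrBuffIn,
                          float *traces_out, int *ithdrs_out,
                          float *rthdrs_out, int *nTrBuffOut )
    {
        GlobalRuntime *gr = globalRuntime;
        float **in, **out;
        int i, j;

        if( gr->cleanup ) return;

        /* view the flat buffers as 2-d arrays, as the C listings do */
        in  = fVecTo2d( traces_in,  *nTrBuffIn,  gr->numsmp );
        out = fVecTo2d( traces_out, *nTrBuffOut, gr->numsmp );

        /* the result goes to the separate output buffer, so the input */
        /* traces are never overwritten                                */
        for( j = 0; j < *nTrBuffOut && j < *nTrBuffIn; j++ ){
            for( i = 0; i < gr->numsmp; i++ ){
                out[j][i] = in[j][i];
            }
        }

        free( in );
        free( out );
    }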

Topics covered in this appendix:

➲ semblance.menu
➲ semblance.inc
➲ semblance.f
➲ interp_db.menu
➲ interp_db.c

semblance.menu '( name: SEMBLANCE label: "Semblance Vel Analysis" parameter: VSTART text: "Minimum analysis velocity" type: typein: type_desc: ( real: 7 1.0e-5 nil ) value: -1.0 mouse_text: "Minimum expected stacking velocity." parameter: VEND text: "Maximum analysis velocity" type: typein: type_desc: ( real: 7 1.0e-4 nil ) value: -1.0 mouse_text: "Maximum expected stacking velocity." parameter: NVELS text: "Number of test velocities" type: typein: type_desc: ( int: 3 1 999 ) value: 25 mouse_text: "Number of constant velocities to use to compute semblance values." parameter: NORM text: "Semblance normalization mode" type: pop_choose: type_desc: ( ("Scale Time Slice" 1 "Divide time slice of semblance values by the maximum in time slice.") ("Scale Panel" 2 "Divide all semblance values by the maximum semblance in panel.") ("No Scaling" 3 "Do no scaling of semblance values.") ) mouse_text: "Choose method of normalizing semblances." parameter: RNOISE text: "Noise factor for normalization" type: typein: type_desc: ( real: 3 0.0 nil ) value: 0.1 mouse_text: "Add this value to maximum semblance (in time slice, or panel) before scaling. Recommend 0.0 to 0.4" parameter: STRETCH text: "Stretch factor" type: typein: type_desc: ( real: 3 0.0 999.0 ) value: 50.0 mouse_text: "Maximum percentage stretch allowed in NMO calculation?" parameter: MIN_FOLD text: "Minimum fold" type: typein:

type_desc: ( int: 2 1 99 ) value: 4 mouse_text: "Minimum fold required for semblance calculation?"

  exec_data: ("SEMBLANCE"
    ("GENERAL"
      ("version " implicit: "@(#)semblance.menu 40.1 11/24/92" )
      ("VSTART " implicit: (value 'VSTART))
      ("VEND " implicit: (value 'VEND))
      ("NVELS " implicit: (value 'NVELS))
      ("NORM " implicit: (value 'NORM))
      ("RNOISE " implicit: (value 'RNOISE))
      ("STRETCH " implicit: (value 'STRETCH))
      ("MIN_FOLD" implicit: (value 'MIN_FOLD))
    )
  )

rules: (

(rule1 ( or ( = (value 'NORM) 1 ) ( = (value 'NORM) 2 ) ) (progn (do_show 'RNOISE)) (progn (do_not_show 'RNOISE)))

(VEL_default_rule1 (and ( = ( value 'VSTART) -1.0 ) ( = ( db_parmget_line "IUNITSz") 1 ) ) (set_value 'VSTART 4500.0 ) ) (VEL_default_rule2 ( = ( value 'VSTART) -1.0 ) (set_value 'VSTART 1400.0 ) ) (VEL_default_rule3 (and ( = ( value 'VEND) -1.0 ) ( = ( db_parmget_line "IUNITSz") 1 ) ) (set_value 'VEND 20000.0 ) ) (VEL_default_rule4 ( = ( value 'VEND) -1.0 ) (set_value 'VEND 7000.0 ) ) ) )

semblance.inc

      IMPLICIT NONE
C------------------------------------------------------------------------------
C Include file for SEMBLANCE
C------------------------------------------------------------------------------
#include "global.inc"

      COMMON /SAVED_PARMS/ SAVE1z, VSTART, VEND, NVELS, IX_VELSAVE,
     &       IH_VEL, MIN_FOLD, RNOISE, NORM, STRETCH, NOUT, NFFT,
     &       IX_WORK2, IX_WORK3, IX_VNMO

      REAL    VSTART, VEND, RNOISE, STRETCH
      INTEGER NVELS, LENSAVED, IX_VELSAVE, MIN_FOLD, NORM, IX_VELS,
     &        IH_VEL, NOUT, NFFT, IX_WORK2, IX_WORK3, IX_VNMO

C ..... Specify the number of variables to save
      DATA LENSAVED /15/

C ..... VSTART     - starting velocity
C ..... VEND       - ending velocity
C ..... NVELS      - number of velocities
C ..... IX_VELS    - index of buffer of semblance velocities
C ..... IH_VEL     - index of new trace header word
C ..... MIN_FOLD   - min fold to calculate semblance
C ..... NORM       - semblance normalization option
C ..... RNOISE     - noise factor
C ..... STRETCH    - maximum stretch allowed for nmo
C ..... NOUT       - number of "traces" to output
C ..... NFFT       - length of FFT for envelope calculation
C ..... IX_WORK2   - index of work buffer 1
C ..... IX_WORK3   - index of work buffer 2
C ..... IX_VNMO    - index of buffer for VNMO

C ..... SCCS: @(#)semblance.inc  31.4  5/14/92

semblance.f C-----------------------------------------------------------------------------C Initialization routine for SEMBLANCE C-----------------------------------------------------------------------------SUBROUTINE INIT_SEMBLANCE( LEN_SAV, ITOOLTYPE ) #include "semblance.inc" #include "hdr_err.inc" #include "header.inc" CHARACTER CDESC*32 INTEGER IERR, IFORMAT, LEN_SAV, ITOOLTYPE, LENGTH, IOK_HDR, & MAXDTRZ_SAVE CHARACTER CSCCS_KEY*50 DATA CSCCS_KEY /'@(#)semblance.f 31.6 5/14/92'/ C ..... Must not be stacked data IF ( IDTYPz .EQ. ISTACKEDpz ) CALL EX_ERR_FATAL( & 'Cannnot operate on stacked data' ) C ..... The trace number is no longer valid, and the geometry no longer matches ITRNO_VALIDz = 0 IGEOM_MATCHz = 0 C ..... Get the starting velocity VSTART = -1.0 CALL EX_GETPARM( 'VSTART ', 1, VSTART ) IF ( VSTART .EQ. -1.0 ) CALL EX_ERR_FATAL( & 'Starting velocity must be specified' ) C ..... Get the ending velocity VEND = -1.0 CALL EX_GETPARM( 'VEND ', 1, VEND ) IF ( VEND .EQ. -1.0 ) CALL EX_ERR_FATAL( & 'Ending velocity must be specified' ) C ..... Get the number of velocities NVELS = 0 CALL EX_GETPARM( 'NVELS ', 1, NVELS ) IF ( NVELS .LE. 0 ) CALL EX_ERR_FATAL( & 'Number of velocities must be specified' ) C ..... We will actually output 3 extra traces NOUT = NVELS + 3 C ..... get the normalization method NORM = 0 CALL EX_GETPARM( 'NORM ', 1, NORM ) IF ( NORM .LE. 0 ) CALL EX_ERR_FATAL( & 'Norm method is not specified' ) C ..... Get the noise factor RNOISE = 0.1 CALL EX_GETPARM( 'RNOISE ', 1, RNOISE ) IF ( RNOISE .LT. 0.0 ) CALL EX_ERR_FATAL(

     &    'Noise factor must be specified' )

C ..... Get the minimum fold
      MIN_FOLD = 3
      CALL EX_GETPARM( 'MIN_FOLD', 1, MIN_FOLD )
      IF ( MIN_FOLD .LT. 0 ) CALL EX_ERR_FATAL(
     &    'Minimum fold must be specified' )

C ..... Get the nmo stretch factor in percent
      STRETCH = 50.0
      CALL EX_GETPARM( 'STRETCH ', 1, STRETCH )
      IF ( STRETCH .LE. 0 ) CALL EX_ERR_FATAL(
     &    'Stretch factor must be specified' )

C ..... Reserve a buffer to save the velocities
      CALL MEM_RESBUFF( NVELS, IX_VELSAVE, IERR )

C ..... Deal with the maximum number of traces per ensemble
      MAXDTRZ_SAVE = MAXDTRz
      IF ( NOUT .GT. MAXDTRz ) THEN
          MAXDTRz = NOUT
      ENDIF

C ..... Get length of FFT for envelope calculation
      CALL GET_NFFT(NINT(NUMSMPz*1.25), NFFT)

C ..... Get and keep some small working buffers
      CALL MEM_RESBUFF( NFFT+2, IX_WORK2, IERR )
      CALL MEM_RESBUFF( NFFT+2, IX_WORK3, IERR )
      CALL MEM_RESBUFF( NUMSMPz, IX_VNMO, IERR )

cdd - No, lie so we can use branched flows
cddC ..... Reset the data type to transformed unstacked data
cdd   IDTYPz = IUS_TRANSpz
cdd - No, lie so we can use branched flows
cddC ..... Reset the domain to semblance
cdd   IDOMAINz = ISEMBpz

C ..... Make sure that OFFSET, END_ENS, and TRACENO are present
      IF ( IOK_HDR(IOFFSETz) .NE. 1 ) CALL EX_ERR_FATAL(
     &    'OFFSET not found in trace header' )
      IF ( IOK_HDR(IEND_ENSz) .NE. 1 ) CALL EX_ERR_FATAL(
     &    'END_ENS not found in trace header' )
      IF ( IOK_HDR(ITRACENOz) .NE. 1 ) CALL EX_ERR_FATAL(
     &    'TRACENO not found in trace header' )

C ..... Create a velocity trace header word CDESC = 'Semblance velocity' CALL HDR_ADD( 'SEMB_VEL', CDESC, 1, IREAL4pz, IH_VEL, IERR ) IF ( IERR .EQ. IERR_HDR_EXSTpz ) THEN CALL EX_ERR_WARN( & 'Semblance velocity already exists in trace header' ) IERR = 0 ELSEIF ( IERR .NE. 0 ) THEN CALL EX_ERR_FATAL( 'Cannot create new trace header word' ) ENDIF C ..... Set the maximum number of traces to buffer, and the trace and

C ..... sample padding. CALL EX_BUFF_PARMS( MAXDTRZ_SAVE, 0, 0 ) C ..... Set the number of words that need to be saved and set the tool type LEN_SAV = LENSAVED ITOOLTYPE = IDBL_BUFFpz RETURN END

C------------------------------------------------------------------------------
C Flow routine for SEMBLANCE.
C------------------------------------------------------------------------------
      SUBROUTINE FLOW_SEMBLANCE( TRACES, ITHDRS, RTHDRS, NSTORED,
     &                           IFOUND_EOJ, NOUTPUT, NOVERLAP )

#include "semblance.inc" #include "header.inc" INTEGER NSTORED, ITHDRS(NTHz,NSTORED), IFOUND_EOJ, NOUTPUT, & NOVERLAP REAL TRACES(NUMSMPz,NSTORED), RTHDRS(NTHz,NSTORED) C ..... We're just looking for the end of an ensemble IF ( IFOUND_EOJ .EQ. 1 & .OR. ITHDRS(IEND_ENSz,NSTORED) .EQ. LASTTRpz ) THEN NOUTPUT = NOUT ELSE NOUTPUT = 0 ENDIF C ..... We don't want to see any of the input traces again NOVERLAP = 0 RETURN END

C------------------------------------------------------------------------------
C Execution routine for SEMBLANCE.
C------------------------------------------------------------------------------
      SUBROUTINE EXEC_SEMBLANCE( TRACES_IN, ITHDRS_IN, RTHDRS_IN,
     &                           NTR_BUFF_IN, TRACES_OUT, ITHDRS_OUT,
     &                           RTHDRS_OUT, NTR_BUFF_OUT )

#include "semblance.inc" #include "mem.inc" #include "header.inc" INTEGER NTR_BUFF_IN, NTR_BUFF_OUT, J, IERR, IX_WORK1, & ITHDRS_IN(NTHz,NTR_BUFF_IN), & ITHDRS_OUT(NTHz,NTR_BUFF_OUT) REAL TRACES_IN(NUMSMPz,NTR_BUFF_IN), & TRACES_OUT(NUMSMPz,NTR_BUFF_OUT),

     &     RTHDRS_IN(NTHz,NTR_BUFF_IN),
     &     RTHDRS_OUT(NTHz,NTR_BUFF_OUT)

      IF (CLEANUPz) RETURN

C ..... Calculate the semblance (also need one addition scratch buffer) CALL MEM_RESBUFF( NTR_BUFF_IN*NUMSMPz, IX_WORK1, IERR ) CALL SEMBLANCE_WORK( TRACES_IN, NUMSMPz, RTHDRS_IN, NTHz, & NTR_BUFF_IN, RSPACEz(IX_WORK1), RSPACEz(IX_WORK2), & RSPACEz(IX_WORK3), TRACES_OUT, NVELS, NOUT, & RSPACEz(IX_VNMO), IOFFSETz, SAMPRATz, VSTART, VEND, & IH_VEL, RSPACEz(IX_VELSAVE), INA_STATz, NFFT, NORM, & RNOISE, MIN_FOLD, STRETCH ) CALL MEM_FREEBUFF( NTR_BUFF_IN*NUMSMPz, IX_WORK1, IERR ) C ..... Create the output headers DO 100 J=1,NTR_BUFF_OUT C ......... Just copy the last header CALL VMOV( ITHDRS_IN(1,NTR_BUFF_IN), 1, & ITHDRS_OUT(1,J), 1, NTHz ) C ......... Make the offset zero RTHDRS_OUT(IOFFSETz,J) = 0.0 C ......... Set the velocity and round off to nearest 10 for labeling neatness RTHDRS_OUT(IH_VEL,J) = NINT( RSPACEz(IX_VELSAVE+J-1) & / 10.0 ) * 10.0 C ......... Make the trace number null ITHDRS_OUT(ITRACENOz,J) = INULLpz C ......... Set the END_ENS flag ITHDRS_OUT(IEND_ENSz,J) = NLASTpz IF ( J .EQ. NTR_BUFF_OUT ) & ITHDRS_OUT(IEND_ENSz,J) = LASTTRpz 100 CONTINUE RETURN END

& & &

& & & & &

SUBROUTINE SEMBLANCE_WORK( TRACES, NSAMPS, RTHDRS, NTH, NSTORED, TRWORK, WORK2, WORK3, VELARR, NVELS, NOUT, VNMO, IH_OFFSET, SAMPRATE, VSTART, VEND, IH_VEL, VELSAVE, IH_NA_STAT, NFFT, NORM, RNOISE, MIN_FOLD, STRETCH) IMPLICIT NONE INTEGER NSAMPS, NTH, NSTORED, IH_OFFSET, I, J, K, NVELS, NSUMS, IH_VEL, LENHALF, NFFT, NORM, MIN_FOLD, IH_NA_STAT, NOUT REAL TRACES(NSAMPS,NSTORED), TRWORK(NSAMPS,NSTORED), STATIC, VNMO(NSAMPS), VELARR(NSAMPS,NOUT), SAMPRATE, VSTART, VEND, VINC, VEL, OFFSET, VELT, VELV, VELSAVE(NVELS), STRETCH, RTHDRS(NTH,NSTORED), WORK2(NSAMPS), SUM, SUM2, WORK3(NSAMPS), RMAX, RNOISE, RMAX_PANL

C ..... Initialize: VINC = (VEND - VSTART) / FLOAT(NVELS-1) C ..... AGC all of the data first (1000.0 ms or the trace length) LENHALF = NINT( ( 1000.0 / SAMPRATE ) / 2.0 ) IF ( LENHALF .GT. NSAMPS/2 ) LENHALF = NSAMPS/2

      DO 90 J=1,NSTORED
          CALL STAT_AGC_RUNAVG( TRACES(1,J), WORK2, NSAMPS, LENHALF,
     &                          0 )
   90 CONTINUE

C ..... Loop thru all of the velocities DO 200 K=1,NVELS C ......... Velocity increment based on delta-t VELT = 1.0 / & ( 1.0/VSTART - (1.0/VSTART-1.0/VEND) * FLOAT(K-1) & / FLOAT(NVELS-1) ) C ......... Velocity increment based on delta-v VELV = VSTART + FLOAT(K-1) * VINC C ......... Use compromise between delta-t and delta-v VEL = ( VELT + VELV ) / 2.0 C ......... Store the velocity in the trace header RTHDRS(IH_VEL,K) = VEL C ......... Save the velocity VELSAVE(K) = VEL C ......... Load the velocity array DO 100 I=1,NSAMPS VNMO(I) = VEL 100 CONTINUE C ......... Loop thru all of the traces, applying NMO DO 110 J=1,NSTORED CALL VMOV( TRACES(1,J), 1, TRWORK(1,J), 1, NSAMPS ) OFFSET = ABS( RTHDRS(IH_OFFSET,J) ) STATIC = RTHDRS(IH_NA_STAT,J) CALL NMO_APPLY( TRWORK(1,J), VNMO, WORK2, WORK3, & STRETCH, OFFSET, NSAMPS, SAMPRATE, STATIC, 2 ) 110 CONTINUE C ......... Calculate the semblance DO 130 I=1,NSAMPS SUM = 0.0 SUM2 = 0.0 NSUMS = 0 DO 120 J=1,NSTORED IF ( TRWORK(I,J) .NE. 0.0 ) THEN C ..................... Stack SUM = SUM + TRWORK(I,J) C ..................... Stack the squares SUM2 = SUM2 + TRWORK(I,J)*TRWORK(I,J) NSUMS = NSUMS + 1 ENDIF 120 CONTINUE IF ( NSUMS .GT. MIN_FOLD ) THEN C ................. signed semblance C ............. try bias to force down the low fold times VELARR(I,K) = (SUM*ABS(SUM)) / (FLOAT(NSUMS)*SUM2) ELSE VELARR(I,K) = 0.0 ENDIF 130 CONTINUE C ......... compute envelope of signed semblance

CALL VMOV( VELARR(1,K), 1, VNMO, 1, NSAMPS) C ......... compute the 90 degree phase shifted version of the semb. trace CALL PHASEFILTER( VNMO, NSAMPS, 90.0, NFFT, WORK2, WORK3) C ......... compute the env of semblance DO 135 I=1,NSAMPS VELARR(I,K) = SQRT( VNMO(I)**2 + VELARR(I,K)**2 ) 135 CONTINUE C ......... Smooth in time fixed 3 samples - smooth remaining notches LENHALF = 3 / 2 CALL STAT_AGC_RUNAVG( VELARR(1,K), WORK2, NSAMPS, LENHALF, 1 & )

  200 CONTINUE

C ...... calculate power bar on side of plot C ........ find max semblance CALL MAXV( VELARR(1,1), 1, RMAX_PANL, J, NVELS*NSAMPS) DO 210 I=1, NSAMPS C ......... find max semblance across time slice CALL MAXV( VELARR(I,1), NSAMPS, RMAX, J, NVELS) C ......... scale each time slice IF (RMAX_PANL .GT. 0.0) THEN VELARR(I,NVELS+1) = 0.0 VELARR(I,NVELS+2) = RMAX/RMAX_PANL VELARR(I,NVELS+3) = RMAX/RMAX_PANL ENDIF 210 CONTINUE C ...... Scale semblances if necessary IF ( NORM .EQ. 1 ) THEN DO 220 I=1, NSAMPS C ............ find max semblance across time slice CALL MAXV( VELARR(I,1), NSAMPS, RMAX, J, NVELS) C ............ scale each time slice individually IF (RMAX .GT. 0.0) THEN RMAX = (1.0 + RNOISE) / (RMAX + RNOISE) CALL VSMUL(VELARR(I,1), NSAMPS, RMAX, VELARR(I,1), & NSAMPS, NVELS - 2) ENDIF 220 CONTINUE ELSEIF ( NORM .EQ. 2) THEN RMAX = RMAX_PANL C ......... scale all semblance values IF (RMAX .GT. 0.0) THEN RMAX = (1.0 + RNOISE) / (RMAX + RNOISE) CALL VSMUL(VELARR(1,1), 1, RMAX, VELARR(1,1), & 1, NVELS*NSAMPS) ENDIF ENDIF RETURN END

interp_db.menu

'( name:      INTERP_DB
   label:     "Double Buffer Trace Interpolation"
   value_tab: 35

   exec_data: ("INTERP_DB"
     ("GENERAL"
       ("dummy" implicit: 1)
     )
   )
 )

interp_db.c

/*--------------------------------------------------------------------------*/
/* interp_db
/* An example of a double buffer tool in C. The program collects an ensemble
/* of data traces then passes the ensemble on to the exec_ subroutine.
/* The program uses a change in Primary sort key value to detect that
/* there has been a change in the ensemble rather than by using the END_ENS
/* flag. This is done for demonstration of the nOverlap variable in
/* the flow routine. The output data traces are the original traces
/* plus a trace that is linearly interpolated between each input trace.
/* The output ensemble therefore has 2*N - 1 traces where
/* N is the number of input traces.
/*
/* The single buffer tool example called interp_sb.c is identical to
/* this example except that this is a double buffer tool and therefore
/* the data traces from the input buffer are mapped to the separate
/* output trace buffer.
/*-------------------------------------------------------------------------*/

/* include promax interface, globals, error codes, etc */
#include "cpromax.h"
#include "cglobal.h"

/* define saved parameters */
BEGINPARMS
   int nInEnsemble; /* the number of traces passed to the exec subroutine */
   int pKeyHdrIndx; /* index of the primary sort key trace header (FFID, CDP etc) */
ENDPARMS(parms)

/*-------------------------------------------------------------------*/
/*
/* description:
/*    initialization routine for interp_db
/*
/* output arguments:
/*    len_sav   - length of the saved common block
/*    itooltype - type of tool
/*
/*-------------------------------------------------------------------*/
void init_interp_db_( len_sav, itooltype )
int *len_sav, *itooltype;
{
   int max_to_buffer, nPadTraces, nPadSamples;

   /* set the global pointers */
   GlobalRuntime *gr = globalRuntime;

   /* set the number of words that need to be saved */
   *len_sav = NPARMS(parms);

   /* Get the header array index for the primary sort key of the input data. */
   /* Note that 1 is subtracted from the value of globalRuntime->ipkey        */
   /* to obtain the index of the primary sort key that is appropriate for     */
   /* the C language. The global variable trace indices are indexed for       */
   /* FORTRAN. This rule is true for globalRuntime->iskey as well as          */
   /* the standard headers.                                                   */
   parms->pKeyHdrIndx = gr->ipkey - 1;

   /* Notify trace exec of maximum number of traces it will have to hold for   */
   /* this module. The ex_buff_parms_() routine must be called in the init     */
   /* subroutine of any buffered tool. NOTE that the routine is a FORTRAN      */
   /* routine that is being directly called by this C routine so the addresses */
   /* of the variables are passed.                                             */
   max_to_buffer = 2*gr->maxdtr;
   nPadTraces  = 0;
   nPadSamples = 0;
   ex_buff_parms_( &max_to_buffer, &nPadTraces, &nPadSamples );

   /* we are going to output more traces per ensemble than are input */
   gr->maxdtr = (2*gr->maxdtr) - 1;

   /* set the tool type */
   *itooltype = IDBL_BUFF;
}

/*--------------------------------------------------------------------*/
/*
/* flow tool for interp_db
/*
/*--------------------------------------------------------------------*/
flow_interp_db_( float *traces,  int *ithdrs, float *rthdrs,
                 int *nStored,   int *ifound_eoj,
                 int *nOutput,   int *nOverlap )
{
   GlobalRuntime *gr = globalRuntime;
   float **rhdrs;

   if( *nStored == 1 ){
      /* .. There is only one trace so we can't compare it to anything.  This    */
      /* .. situation should only occur on the first call to the routine unless  */
      /* .. the last trace input is a single-trace ensemble in which case more   */
      /* .. code would be needed to handle that special case. We leave out that  */
      /* .. extra code here as it would clutter the example.                     */
      *nOutput = 0;
      return;
   }

   /* put the trace headers into a 2d array that is easy to handle */
   rhdrs = fVecTo2d( rthdrs, *nStored, gr->nth );

   /* see if the primary key header value of the most recent input trace has changed */
   if( rhdrs[*nStored-2][parms->pKeyHdrIndx] !=
       rhdrs[*nStored-1][parms->pKeyHdrIndx] || *ifound_eoj != 0 ){

      /* .. we have found enough traces to process                               */
      /* .. Set the number of traces to output from exec, a non-zero value tells */
      /* .. the trace executive that it is time to pass the traces to            */
      /* .. exec_interp_db_                                                       */
      *nOutput = 2*(*nStored-1) - 1;

      /* .. set the number of traces that are being passed to the exec_ subroutine */
      parms->nInEnsemble = *nStored - 1;

      /* .. Let the last trace input on this call be the first trace input */
      /* .. on the next call since it is part of the next ensemble that we */
      /* .. will be collecting.                                            */
      *nOverlap = 1;
   }
   else{
      /* .. signal the trace executive that we aren't ready to pass traces to exec_ */
      *nOutput = 0;
   }

   /* free the 2d header mapping allocated by fVecTo2d (on every call) */
   free( rhdrs );
}

/*-------------------------------------------------------------------*/
/*
/* description:
/*    execution routine for interp_db
/*
/* input arguments:
/*    nStored - the amount of memory available (in traces) in the
/*              traces array. NOTE that the actual number of traces
/*              input was determined in the flow_interp_db routine and
/*              passed to exec_interp_db via the parms->nInEnsemble
/*              variable.
/*
/* input/output arguments:
/*    traces_in - 2-d array of input traces
/*    ithdrs_in - 2-d array of input integer trace headers
/*    rthdrs_in - 2-d array of input float trace headers
/*
/*-------------------------------------------------------------------*/
void exec_interp_db_( float *traces_in,  int *ithdrs_in,
                      float *rthdrs_in,  int *nTrBuffIn,
                      float *traces_out, int *ithdrs_out,
                      float *rthdrs_out, int *nTrBuffOut )
{
   float **rhdrs_in,  **tracs_in;
   float **rhdrs_out, **tracs_out;
   int i,j;

   /* set the global pointers */
   GlobalRuntime *gr = globalRuntime;

   /* if cleaning up release memory, close files; in this case there is nothing to do */
   if( gr->cleanup){
      return;
   }

   /* Arrange the trace and header arrays into 2d arrays that are easy to handle. */
   tracs_in  = fVecTo2d( traces_in,  *nTrBuffIn,  gr->numsmp );
   rhdrs_in  = fVecTo2d( rthdrs_in,  *nTrBuffIn,  gr->nth );
   tracs_out = fVecTo2d( traces_out, *nTrBuffOut, gr->numsmp );
   rhdrs_out = fVecTo2d( rthdrs_out, *nTrBuffOut, gr->nth );

   /* Interpolate the sample values */
   for( i = 0; i < gr->numsmp; i++ ){
      for( j = parms->nInEnsemble-1; j > 0; j-- ){
         tracs_out[2*j][i]   = tracs_in[j][i];
         tracs_out[2*j-1][i] = (tracs_in[j][i] + tracs_in[j-1][i])/2.;
      }
   }

   /* Take care of the headers too, interpolated traces will have a copy */
   /* of the header just before them                                     */
   for( i = 0; i < gr->nth; i++ ){
      for( j = parms->nInEnsemble-1; j > 0; j-- ){
         rhdrs_out[2*j][i]   = rhdrs_in[j][i];
         rhdrs_out[2*j-1][i] = rhdrs_in[j-1][i];
      }
   }

   free(tracs_in);
   free(rhdrs_in);
   free(tracs_out);
   free(rhdrs_out);
}


Appendix: Complex Tool Examples

This appendix provides an example of a complex tool. In this example, the tool changes both the trace length of the output traces and the number of traces output per ensemble. The program does not do anything useful other than demonstrate how to write a complex tool: it gathers data traces until it has an entire ensemble of M traces, each N samples long, and then outputs N traces, each M samples long; that is, it outputs the transpose of the 2-dimensional input ensemble matrix (a minimal sketch of this transpose follows the topic list below). The menu file is presented first, followed by the FORTRAN version and then the C version. The menu file serves both the C and FORTRAN versions.

Topics covered in this appendix:

➲ transform.menu
➲ transform.inc
➲ transform.f
➲ transform.c
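Before the listings, it may help to see the core operation in isolation. The fragment below is a hypothetical sketch of the transpose step only; transpose_ensemble() is an invented helper that works on plain contiguous arrays, whereas the real tool below operates through the executive's trace buffers and memory routines.

/* Hypothetical helper: transpose an ensemble of nTraces traces, each    */
/* nSamples long, into nSamples output "traces" of nTraces samples each. */
static void transpose_ensemble( const float *in, float *out,
                                int nTraces, int nSamples )
{
   int i, j;
   for( j = 0; j < nTraces; j++ ){        /* j indexes the input traces  */
      for( i = 0; i < nSamples; i++ ){    /* i indexes the input samples */
         /* input trace j, sample i  ->  output trace i, sample j        */
         out[i*nTraces + j] = in[j*nSamples + i];
      }
   }
}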


transform.menu

'( name: TRANSFORM
   label: "Transform"
   value_tab: 35
   exec_data: ("TRANSFORM"
      ("GENERAL"
         ("dummy" implicit: nil)
      )
   )
)


transform.inc

C------------------------------------------------------------------------------
C Include file for TRANSFORM
C------------------------------------------------------------------------------
      IMPLICIT NONE
#include "global.inc"

      COMMON /SAVED_PARMS/ SAVE1z, MAXDTRZ_OLD, NUMSMPZ_OLD, NSTORED,
     &       NDUMPED, DUMPING, IX_TRACES, IX_THDRS
      INTEGER LENSAVED, MAXDTRZ_OLD, NUMSMPZ_OLD, NSTORED, NDUMPED,
     &        IX_TRACES, IX_THDRS
      LOGICAL DUMPING

C ..... Specify the number of variables to save.  This is set here (rather
C ..... than in the initialization routine) so that programmers don't forget
C ..... to change it when they are changing the contents of the common
C ..... block SAVED_PARMS.
      DATA LENSAVED /8/

C ..... MAXDTRZ_OLD - previous value of MAXDTRz
C ..... NUMSMPZ_OLD - previous value of NUMSMPz
C ..... NSTORED     - current number of traces stored
C ..... NDUMPED     - current number of traces dumped
C ..... DUMPING     - logical flag for dumping mode
C ..... IX_TRACES   - index of buffer of traces
C ..... IX_THDRS    - index of buffer of trace headers


transform.f SUBROUTINE INIT_TRANSFORM( LEN_SAV, ITOOLTYPE ) #include "transform.inc" #include "header.inc" INTEGER LEN_SAV, ITOOLTYPE, IOK_HDR C ..... The trace number is no longer valid, and the geometry no longer matches ITRNO_VALIDz = 0 IGEOM_MATCHz = 0 C ..... Reset the data type to transformed unstacked data IDTYPz = IUS_TRANSpz C ..... Reset the domain. We will have to make up something new, since C ..... nothing appropriate exists IDOMAINz = 100 C ..... Save the previous values of the trace length and maximum number C ..... of traces per ensemble (we will need them in exec phase) MAXDTRZ_OLD = MAXDTRz NUMSMPZ_OLD = NUMSMPz C ..... Reset the trace length and maximum number of traces per ensemble MAXDTRz = NUMSMPZ_OLD NUMSMPz = MAXDTRZ_OLD C ..... Check the needed header entries IF ( IOK_HDR(IEND_ENSz) .NE. 1 ) CALL EX_ERR_FATAL( & 'ENS_ENS not found in header' ) IF ( IOK_HDR(ITRACENOz) .NE. 1 ) CALL EX_ERR_FATAL( & 'TRACENO not found in header' ) IF ( IOK_HDR(ISEQNOz) .NE. 1 ) CALL EX_ERR_FATAL( & 'SEQNO not found in header' ) C ..... Initialize for execution phase NSTORED = 0 NDUMPED = 0 DUMPING = .FALSE. C ..... Set the number of words that need to be saved and the tool type LEN_SAV = LENSAVED ITOOLTYPE = ICOMPLEXpz RETURN END

SUBROUTINE EXEC_TRANSFORM( TRACE, ITHDR, RTHDR ) #include "transform.inc" #include "mem.inc" #include "header.inc" INTEGER ITHDR(NTHz), IND, IERR, IX_WORK REAL TRACE(NUMSMPZ_OLD), RTHDR(NTHz)


C ..... No action needed for cleanup phase IF ( CLEANUPz ) RETURN IF ( .NOT. DUMPING ) THEN C ......... Filling the buffer before the transform C ......... Increment the number of traces stored NSTORED = NSTORED + 1 IF ( NSTORED .EQ. 1 ) THEN C ............. Get a buffer to fill CALL MEM_RESBUFF( MAXDTRZ_OLD*NUMSMPZ_OLD, IX_TRACES, & IERR ) CALL MEM_RESBUFF( MAXDTRZ_OLD*NTHz, IX_THDRS, IERR ) ENDIF C ......... Store the current trace and header IND = IX_TRACES + (NSTORED-1)*NUMSMPZ_OLD CALL VMOV( TRACE, 1, RSPACEz(IND), 1, NUMSMPZ_OLD ) IND = IX_THDRS + (NSTORED-1)*NTHz CALL VMOV( ITHDR, 1, ISPACEz(IND), 1, NTHz ) C ......... If this is the end of the ensemble, time to process IF ( ITHDR( IEND_ENSz ) .EQ. LASTTRpz ) THEN C ............. Call the actual work routine CALL MEM_RESBUFF(MAXDTRZ_OLD*NUMSMPZ_OLD, IX_WORK, IERR) CALL TRANSFORM_WORK( RSPACEz(IX_TRACES), & RSPACEz(IX_WORK), NUMSMPZ_OLD, MAXDTRZ_OLD ) CALL MEM_FREEBUFF( MAXDTRZ_OLD*NUMSMPZ_OLD, IX_TRACES, & IERR ) IX_TRACES = IX_WORK C ............. Time to start flushing the buffer DUMPING = .TRUE. ELSE C ............. Otherwise, we are still in fill mode CALL EX_FILLMODE RETURN ENDIF ENDIF C ..... If control reaches here, we are dumping the buffer C ..... Increment the number of traces dumped NDUMPED = NDUMPED + 1 C ..... Copy a header from storage (same one every time!!). C ..... nasty problem CALL VMOV( ISPACEz(IX_THDRS), 1, ITHDR, 1, NTHz )

This is a

C ..... Nullify the trace number ITHDR(ITRACENOz) = INULLpz C ..... Set the sequence-number-in-ensemble ITHDR(ISEQNOz) = NDUMPED


C ..... Copy a "trace" from storage IND = IX_TRACES + (NDUMPED-1)*NUMSMPz CALL VMOV( RSPACEz(IND), 1, TRACE, 1, NUMSMPz ) IF ( NDUMPED .EQ. NUMSMPZ_OLD ) THEN C ......... We are finished dumping C ......... Reset related variables DUMPING = .FALSE. NDUMPED = 0 NSTORED = 0 C ......... Scratch the memory so that another process can use it CALL MEM_FREEBUFF( MAXDTRZ_OLD*NUMSMPZ_OLD, IX_TRACES, IERR) CALL MEM_FREEBUFF( MAXDTRZ_OLD*NTHz, IX_THDRS, IERR ) C ......... This is the last trace in the ensemble ITHDR(IEND_ENSz) = LASTTRpz C ......... We are in pipe mode momentarily CALL EX_PIPEMODE ELSE C ......... We still have more traces to dump ITHDR(IEND_ENSz) = NLASTpz C ......... We are still in flush mode CALL EX_FLUSHMODE ENDIF RETURN END

100 110

&

SUBROUTINE TRANSFORM_WORK( TRACES_OLD, TRACES_NEW, NUMSMPZ_OLD, MAXDTRZ_OLD )

&

IMPLICIT NONE INTEGER NUMSMPZ_OLD, MAXDTRZ_OLD, I, J REAL TRACES_OLD(NUMSMPZ_OLD,MAXDTRZ_OLD), TRACES_NEW(MAXDTRZ_OLD,NUMSMPZ_OLD) DO 110 J=1,MAXDTRZ_OLD DO 100 I=1,NUMSMPZ_OLD TRACES_NEW(J,I) = TRACES_OLD(I,J) CONTINUE CONTINUE RETURN END


transform.c /* include ProMAX prototypes and globals */ #include "cpromax.h" #include "cglobal.h" #include "alloc.h" #include "memalloc.h" #include "agfc.h" /* define saved parameters */ BEGINPARMS /* saved parms related to sample interval and trace length */ int maxdtr_old; int numsmp_old; /* saved parms related to control of trace I/O */ int nstored; int ndumped; int dumping; /* saved float float float

parms related to trace and header storage */ **storedTrcs; **storedHdrs; **transTrcs;

ENDPARMS(parms) void init_transform_(int *len_sav, int *itooltype); void exec_transform_(float *trace, int *ithdr, float *rthdr); float **transform_work( float**, int, int, int ); /*------------------------------------------------------------------Description: Initialization routine for transform output arguments: len_save - number of 4-byte words to save for re-entrancy itooltype - processing tool type ---------------------------------------------------------------------*/ void init_transform_(int *len_sav, int *itooltype) { /* local variabes */ GlobalRuntime *gr = globalRuntime; /* The trace number is no longer valid, and the geometry no longer matches */ gr->itrno_valid = 0; gr->igeom_match = 0; /* Reset the data type to transformed unstacked data */ gr->idtyp = IUS_TRANS;


/* Reset the domain. We will have to make up something new, since */ /* nothing appropriate exists */ gr->idomain = 100; /* Save the previous values of the trace length and maximum number */ /* of traces per ensemble (we will need them in exec phase) */ parms->maxdtr_old = gr->maxdtr; parms->numsmp_old = gr->numsmp; /* Reset the trace length and maximum number of traces per ensemble */ gr->maxdtr = parms->numsmp_old; gr->numsmp = parms->maxdtr_old; /* Make sure the needed header entries exist */ if( hdrExists( "END_ENS" ) != 1 ){ exErrFatal("END_ENS header not found in trace headers."); } if( hdrExists( "TRACENO" ) != 1 ){ exErrFatal("TRACENO header not found in trace headers."); } if( hdrExists( "SEQNO" ) != 1 ){ exErrFatal("SEQNO header not found in trace headers."); } /* Initialize for execution phase, TRUE and FALSE */ /* are #define'd in cpromax.h, TRUE = 1, FALSE = 0 */ parms->nstored = 0; parms->ndumped = 0; parms->dumping = FALSE; /* Set the number of words that need to be saved for re-entrancy. *len_sav = NPARMS (parms);

*/

/* set the tool type */ *itooltype = ICOMPLEX; }

/********************************************************************* * * Description: * Execution routine for transform * * Input/output arguments: * trace - array of trace samples * ithdr - trace header (as integer) * rthdr - trace header (as floating [point) * **********************************************************************/

void exec_transform_(float *trace, int *ithdr, float *rthdr) { GlobalRuntime *gr = globalRuntime; int indx;


/* No action needed for cleanup phase */ if( gr->cleanup ) return; if( parms->dumping == FALSE ){ /* .. Filling the buffer before the transform */ /* .. Increment the number of traces stored */ parms->nstored++; if ( parms->nstored == 1 ){ /* ..... Get buffers to store input traces and trace headers. */ /* ..... Note that error checking for null returned pointers is */ /* ..... done within ealloc2. */ parms->storedTrcs = _memAlloc2Float( parms->numsmp_old, parms->maxdtr_old, "transform.c", 129 ); parms->storedHdrs = _memAlloc2Float( gr->nth, parms->maxdtr_old, "transform.c", 132 ); } /* .. Store the current trace and header */ indx = parms->nstored - 1; vMov( trace, 1, parms->storedTrcs[indx], 1, parms->numsmp_old ); vMov( ithdr, 1, parms->storedHdrs[indx], 1, gr->nth ); /*... If this is the end of the ensemble it is time to process. */ if ( ithdr[ hdrIndex("END_ENS") ] == LASTTR ){ /* ...... Call the actual work routine that transforms traces */ parms->transTrcs = transform_work( parms->storedTrcs, parms->nstored, gr->numsmp, parms->numsmp_old ); /* ...... free the memory that stored the original input traces */ _memFree2Float( parms->storedTrcs, "transform.c", 150 ); /* ...... Time to start flushing the buffer */ parms->dumping = TRUE; } else{ /* ...... We are still loading traces until we get an entire ensemble. */ /* ...... Set the mode to not output a trace on this call and get */ /* ...... another trace on the next call to exec_transform. */ exFillMode(); return; } } /* If control reaches here, we are dumping the buffer */ /* Increment the number of traces dumped */ parms->ndumped++; /* Copy a header from storage (same one every time for this exercise) */ vMov( parms->storedHdrs[0], 1, rthdr, 1, gr->nth ); /* Nullify the trace number */ ithdr[hdrIndex("TRACENO")] = INULL;


/* Set the sequence-number-in-ensemble */ ithdr[hdrIndex("SEQNO")] = parms->ndumped; /* Copy a "trace" from storage */ indx = parms->ndumped - 1; vMov( parms->transTrcs[indx], 1, trace, 1, gr->numsmp ); if( parms->ndumped == parms->numsmp_old){ /* .. We are finished dumping */ /* .. Reset related variables */ parms->dumping = FALSE; parms->ndumped = 0; parms->nstored = 0; /* .. Free the memory so that another process can use it */ _memFree2Float( parms->transTrcs, "transform", 189 ); _memFree2Float( parms->storedHdrs, "transform,c", 190 );

/* .. This is the last trace in the ensemble */ ithdr[hdrIndex("END_ENS")] = LASTTR; /* .. We are in pipe mode momentarily, output the current trace and */ /* .. tell the trace executive to give exec_tranform a new trace on the */ /* .. next call */ exPipeMode(); } else{ /* .. We still have more traces to dump */ ithdr[hdrIndex("END_ENS")] = NLAST; /* .. We are still in flush mode, give the current trace to the trace */ /* .. executive and exec_transform doesn't want a new trace on the next */ /* .. since it still has traces to output. */ exFlushMode(); } }

float **transform_work( float **storedTrcs, int nstored, int outTrcLen, int numOutTrcs ) /**************************************************************** /* /* input: /* storedTrcs - the input traces /* nstored - number of stored traces == number out samples /* outTrcLen - the length in samples of the output trace /* nstored ipsort = ICDP; /* /* /* /* /* /* /*

.. .. .. .. .. .. ..

set the primary sort trace header index to cdp */ IMPORTANT NOTE, the standard header structure contains */ header index values that are appropriate for FORTRAN, a */ result of the trace executive being in FORTRAN. When you */ set gr->ipkey, use FORTRAN-appropriate values. When you */ need C header indicies, use the hdrIndex function or */ subtract 1 from the standard header value in stdHdr. */ gr->ipkey = stdHdr->icdp;

/* .. set the secondary key sort index to OFFSET */ gr->iskey = stdHdr->ioffset; } else{ /* .. add SIN, SOURCE, CHAN, and FFID to the header */ hdrAddStd("SIN"); hdrAddStd("SOURCE"); hdrAddStd("CHAN"); hdrAddStd("FFID"); /* .. set the primary sort flag to source */ gr->ipsort = ISIN; /* .. set the primery sort index to source */ gr->ipkey = stdHdr->isource; /* .. set the secondary sort index to channel */ gr->iskey = stdHdr->ichan; } /* set the domain to normal space-time */ gr->idomain = ITX;


/* it is impossible for the geometry to match or the trace number */ /* to be valid */ gr->igeom_match = FALSE; gr->itrno_valid = FALSE; /* /* /* /* /* /* /* /* /* /* /* /*

here are the gr->nth gr->mode gr->iounit gr->idate gr->npriorf gr->cleanup gr->ierror

globalRuntime variables that you do not set: */ - the number of 4-byte words in the trace header - the run mode (INTER, IBACKG, IBATCH) - the I/O unit for output diagnostics - the current date (seconds since 00:00:00 GMT, Jan 1, 1970) - number of prior processing flows - flag indicating the system is in cleaup mode - flag indicating the system is running under an error condition and trying to cleanup gr->init_only - flag indicating that only initialization phase is being executed */

/* initialize some values for exec phase */ currEns = 1; currTrcInEns = 1; firstVisit = TRUE; /* set the number of words that need to be saved for re-entrancy */ *len_sav = NPARMS(parms); /* set the tool type to simple, (one trace in one trace out). */ *itooltype = INPUT; /* set the external parameters */ parms->nens = nens parms->nfreqs = nfreqs parms->freqs = freqs parms->currEns = currEns parms->currTrcInEns = currTrcInEns parms->firstVisit = firstVisit

; ; ; ; ; ;

} /*-------------------------------------------------------------------*/ /* exec_sine_wave /* /* execution routine for ProMAX module sine_wave /* /* /*-------------------------------------------------------------------*/ void exec_sine_wave_( float* trace, float* rthdr, int* ithdr ) { /* declare int int float int int logical

local versions nens = nfreqs = *freqs = currEns = currTrcInEns = firstVisit =


of external variables */ parms->nens ; parms->nfreqs ; parms->freqs ; parms->currEns ; parms->currTrcInEns ; parms->firstVisit ;


/* declare local variables */ logical noDataFound; int iErr; /* set a pointer to the globalRuntime structure */ GlobalRuntime *gr = globalRuntime; /* see if we are in cleanup phase */ if( gr->cleanup ){ free( parms->freqs ); return; } /* fill the current trace and header buffers */ noDataFound = sineWaveNextTr( trace, ithdr, rthdr, freqs ); if( parms->firstVisit == TRUE ){ /* .. we only want to do this once */ parms->firstVisit = FALSE; if( noDataFound == TRUE ){ exErrFatal("Unable to create any traces."); } } if( noDataFound == TRUE ){ /* .. we have no data to output, this is analogous to hitting the end */ /* .. of a disk file or the end of a tape */ exQuitMode(); } else{ /* .. we have a trace to give to the executive */ exFlushMode(); } } /*-------------------------------------------------------------------*/ /* sineWaveNextTr /* /* subroutine to overlay sine waves on a trace /* input: /* freqs - array of frequencies to add to the sine wave /* /* output: /* trace - the trace on which sine waves are written /* ithdr - integer trace header array /* rthdr - float trace header array /* returns a logical (int) flag TRUE if trace could not be written to /* FALSE if normal completion /* /*-------------------------------------------------------------------*/ static int sineWaveNextTr( float *trace, int *ithdr, float *rthdr, float *freqs ) { /* local variables */


int i; int iErr; float offset; /* global variables */ GlobalRuntime *gr = globalRuntime; if( parms->currEns > parms->nens ){ /* .. we have no more data to output, this is equivalent to */ /* .. failing to read a trace from tape or disk */ return(TRUE); } /* if control reaches here we need to build a trace */ /* clear the input data trace */ vClr( trace, 1, gr->numsmp ); /* add the sine_wave data to the trace */ for( i = 0; i < parms->nfreqs; i++ ){ sineWaveAdd( freqs[i], gr->samprat, gr->numsmp, trace ); } /* initialize the values of the minimum standard header values */ initStdHdr( ithdr, rthdr ); /* take care of other header entries */ if( gr->ipsort == ISIN ){ /* .. inputting shots */ /* .. the source index number should be null until geom is assigned */ ithdr[ hdrIndex("SIN") ] = INULL; /* .. set the live source number */ ithdr[ hdrIndex("SOURCE") ] = parms->currEns; /* .. set the channel number */ ithdr[ hdrIndex("CHAN") ] = parms->currTrcInEns; /* .. set the field file id number, FFID */ ithdr[ hdrIndex("FFID") ] = parms->currEns; } else{ /* .. inputting CDPs */ /* .. set the CDP bin number */ ithdr[ hdrIndex("CDP") ] = parms->currEns; /* .. set the offset and aoffset, these values don't make much sense */ offset = (float)(parms->currTrcInEns); rthdr[ hdrIndex("OFFSET") ] = offset; rthdr[ hdrIndex("OFFSET") ] = (float)fabs( (double)offset ); } /* set the sequence number in the ensemble */ ithdr[ hdrIndex("SEQNO") ] = parms->currTrcInEns; /* set the sequential trace number (null until geometry is assigned) */


ithdr[ hdrIndex("TRACENO")] = INULL; if( parms->currTrcInEns == gr->maxdtr ){ /* .. this is the last trace in the ensemble, set end of ens flag */ ithdr[ hdrIndex("END_ENS") ] = LASTTR; /* .. reset the trace-in-ensemble counter, it gets incremented below */ parms->currTrcInEns = 0; /* .. increment the ensemble number for next time */ parms->currEns += 1; } else{ /* .. this is not the last trace in the ensembel */ ithdr[ hdrIndex("END_ENS") ] = NLAST; } /* increment the trace in ensemble counter for the next trace */ parms->currTrcInEns += 1; return( FALSE ); }

/*-------------------------------------------------------------------*/ /* sineWaveAdd /* /* subroutine to add a sine wave to a trace /* input: /* freq - frequency of sine wave to add /* sampInt - float sample interval of trace in ms /* numsmp - number of samples in trace /* output: /* trace - the trace /* /*-------------------------------------------------------------------*/ static void sineWaveAdd( float freq, float sampInt, int numsmp, float *trace ) { /* local variables */ int i; double angle, twoPi, time; twoPi = asin(1.0)*4.0; for( i = 0; i < numsmp; i++ ){ time = (float)i * sampInt/1000.; angle = freq*time*twoPi; trace[i] += (float)sin( angle ); } }


Appendix: Disk Iteration Examples

This appendix demonstrates the disk iteration capabilities of ProMAX tools. The following examples present a prototype for a surface-consistent amplitude correction program; the FORTRAN example is presented first, followed by a C example. The critical subroutine in a disk iteration program is EX_SET_DISKITER() in FORTRAN and ex_set_diskiter_() in C. Note that the C program simply calls the FORTRAN routine directly; since there is only one calling argument, this is not difficult to do. A condensed sketch of the two-pass pattern follows the topic list below.

Topics covered in this appendix:

➲ sc_amp.menu
➲ sc_amp.inc
➲ sc_amp.f
➲ disk_iter.menu
➲ disk_iter.c
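The fragment below is a minimal, hypothetical sketch of the two-pass control flow that the DISKITER header drives. The tool names, analyze_trace(), and apply_correction() are placeholders invented for illustration only; ex_set_diskiter_(), hdrIndex(), exFillMode(), and exPipeMode() are the same calls used by disk_iter.c later in this appendix.

/* Condensed sketch of a two-pass (disk iteration) tool.                  */
#include "cpromax.h"
#include "cglobal.h"

static void analyze_trace( float *trace )    { (void)trace; } /* pass 1 work, placeholder */
static void apply_correction( float *trace ) { (void)trace; } /* pass 2 work, placeholder */

void init_two_pass_( int *len_sav, int *itooltype )
{
   int numIters = 2;               /* pass 1: analyze, pass 2: apply       */
   ex_set_diskiter_( &numIters );  /* FORTRAN routine, so pass by address  */

   /* ... the usual init work (*len_sav, *itooltype = ICOMPLEX, etc.) ...  */
   (void)len_sav; (void)itooltype;
}

void exec_two_pass_( float *trace, int *ithdr, float *rthdr )
{
   (void)rthdr;
   if( ithdr[ hdrIndex("DISKITER") ] == 1 ){
      analyze_trace( trace );      /* first pass: accumulate statistics     */
      exFillMode();                /* consume the trace, output nothing yet */
   }
   else{
      apply_correction( trace );   /* second pass: apply what was learned   */
      exPipeMode();                /* output the corrected trace            */
   }
}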


sc_amp.menu

'( name: SC_AMP
   label: "Sc Amp"
   value_tab: 35
   exec_data: ("SC_AMP"
      ("GENERAL"
         ("dummy" implicit: nil)
      )
   )
)


sc_amp.inc

C------------------------------------------------------------------------------
C Include file for SC_AMP
C------------------------------------------------------------------------------
      IMPLICIT NONE
#include "global.inc"

      COMMON /SAVED_PARMS/ SAVE1z, NSHOTS, NRECS, IX_SHOT_SUM,
     &       IX_SHOT_NORM, IX_REC_SUM, IX_REC_NORM, IX_TR, PROCESSING
      INTEGER LENSAVED, NSHOTS, NRECS, IX_SHOT_SUM, IX_SHOT_NORM,
     &        IX_REC_SUM, IX_REC_NORM, IX_TR
      LOGICAL PROCESSING
      DATA LENSAVED /9/


sc_amp.f

C ..... This is a working model for a surface consistent gain module.  It
C ..... makes no attempt at true separation of source and receiver.

SUBROUTINE INIT_SC_AMP( LEN_SAV, ITOOLTYPE ) #include "sc_amp.inc" #include "header.inc" INTEGER LEN_SAV, ITOOLTYPE, IERR, IOK_HDR C ..... Ask for two iterations through the data, one the analyze the data C ..... and one to process the data CALL EX_SET_DISKITER( 2 ) C ..... Make sure that the needed geometry parameters are valid IF ( MAXSINz .EQ. INULLpz .OR. MINSINz .EQ. INULLpz & .OR. INCSINz .EQ. INULLpz .OR. MAXSLOCz .EQ. INULLpz & .OR. MINSLOCz .EQ. INULLpz .OR. INCSLOCz .EQ. INULLpz ) & CALL EX_ERR_FATAL( 'Geometry parameters are null' ) C ..... Reserve a buffer to store the sum and normalization factor for every C ..... shot and receiver NSHOTS = (MAXSINz-MINSINz) / INCSINz + 1 NRECS = (MAXSLOCz-MINSLOCz) / INCSLOCz + 1 CALL MEM_RESBUFF( NSHOTS, IX_SHOT_SUM, IERR ) CALL MEM_RESBUFF( NSHOTS, IX_SHOT_NORM, IERR ) CALL MEM_RESBUFF( NRECS, IX_REC_SUM, IERR ) CALL MEM_RESBUFF( NRECS, IX_REC_NORM, IERR ) C ..... Get a work buffer to hold one trace CALL MEM_RESBUFF( NUMSMPz, IX_TR, IERR ) C ..... Check for needed header entries IF ( IOK_HDR(ISINz) .NE. 1 .OR. IOK_HDR(IREC_SLOCz) .NE. 1 ) & CALL EX_ERR_FATAL( 'SIN or REC_SLOC not found in header' ) IF ( IOK_HDR(IDISKITERz) .NE. 1 ) CALL EX_ERR_FATAL( & 'DISKITER not found in header (not using Disk Data Input?)' ) C ..... Initialize for execution phase PROCESSING = .FALSE. C ..... Set the number of words for re-entrancy and the tool type LEN_SAV = LENSAVED ITOOLTYPE = ICOMPLEXpz RETURN END

SUBROUTINE EXEC_SC_AMP( TRACE, ITHDR, RTHDR ) #include "sc_amp.inc" #include "header.inc"


#include "mem.inc" INTEGER ITHDR(NTHz), ISHOT, IREC REAL TRACE(NUMSMPz), RTHDR(NTHz), SCALAR C ..... No action required in cleanup phase IF ( CLEANUPz ) RETURN IF ( ITHDR(ISINz) .GE. MINSINz & .AND. ITHDR(ISINz) .LE. MAXSINz & .AND. ITHDR(IREC_SLOCz) .GE. MINSLOCz & .AND. ITHDR(IREC_SLOCz) .LE. MAXSLOCz ) THEN C ......... This is a valid trace C ......... Compute the sequential shot and receiver mem offset ISHOT = (ITHDR(ISINz)-MINSINz) / INCSINz + 1 IREC = (ITHDR(IREC_SLOCz)-MINSLOCz) / INCSLOCz + 1 IF ( ITHDR(IDISKITERz) .EQ. 1 ) THEN C ............. Still analyzing the data C ............. Call the routine to accumulate the statistics CALL SC_AMP_ACCUM( ISHOT, IREC, TRACE, RSPACEz(IX_TR), & NUMSMPz, RSPACEz(IX_SHOT_SUM), & RSPACEz(IX_SHOT_NORM), RSPACEz(IX_REC_SUM), & RSPACEz(IX_REC_NORM) ) ELSE C ............. Processing the data IF ( .NOT. PROCESSING ) THEN C ................. Just started processing PROCESSING = .TRUE. C ................. Normalize the values CALL SC_AMP_NORM( NSHOTS, NRECS, & RSPACEz(IX_SHOT_SUM), RSPACEz(IX_SHOT_NORM), & RSPACEz(IX_REC_SUM), RSPACEz(IX_REC_NORM) ) ENDIF C ............. Apply the scalar to this trace SCALAR = ( RSPACEz(IX_SHOT_SUM+ISHOT-1) & + RSPACEz(IX_REC_SUM+IREC-1) ) / 2.0 CALL VSDIV( TRACE, 1, SCALAR, TRACE, 1, NUMSMPz ) ENDIF ENDIF IF ( ITHDR(IDISKITERz) .EQ. 1 ) THEN C ......... Still analyzing the data, so act as a black hole CALL EX_FILLMODE ELSE CALL EX_PIPEMODE ENDIF RETURN END

&

SUBROUTINE SC_AMP_ACCUM( ISHOT, IREC, TRACE, TR_WORK, NUMSMP, SHOT_SUM, SHOT_NORM, REC_SUM, REC_NORM )

C ..... Routine to accumulate the statistics


&


IMPLICIT NONE INTEGER ISHOT, IREC, NUMSMP REAL TRACE(NUMSMP), TR_WORK(NUMSMP), SHOT_SUM(*), SHOT_NORM(*), REC_SUM(*), REC_NORM(*), SUM, AVEAMP

C ..... Convert the trace to absolute values CALL VABS( TRACE, 1, TR_WORK, 1, NUMSMP ) C ..... Take the sum CALL SVE( TR_WORK, 1, SUM, NUMSMP ) C ..... Normalize to the number of samples (ignore hard zeros) AVEAMP = SUM / FLOAT(NUMSMP) C ..... Add this value to the respective shot and receiver SHOT_SUM(ISHOT) = SHOT_SUM(ISHOT) + AVEAMP SHOT_NORM(ISHOT) = SHOT_NORM(ISHOT) + 1.0 REC_SUM(IREC) = REC_SUM(IREC) + AVEAMP REC_NORM(IREC) = REC_NORM(IREC) + 1.0 RETURN END

&

SUBROUTINE SC_AMP_NORM( NSHOTS, NRECS, SHOT_SUM, SHOT_NORM, REC_SUM, REC_NORM )

C ..... Routine to normalize the values

&

100

110

IMPLICIT NONE INTEGER NSHOTS, NRECS, I REAL SHOT_SUM(NSHOTS), SHOT_NORM(NSHOTS), REC_SUM(NRECS), REC_NORM(NRECS) DO 100 I=1,NSHOTS IF ( SHOT_NORM(I) .GT. 0.0 ) THEN SHOT_SUM(I) = SHOT_SUM(I) / SHOT_NORM(I) ELSE SHOT_SUM(I) = 1.0 ENDIF CONTINUE DO 110 I=1,NRECS IF ( REC_NORM(I) .GT. 0.0 ) THEN REC_SUM(I) = REC_SUM(I) / REC_NORM(I) ELSE REC_SUM(I) = 1.0 ENDIF CONTINUE RETURN END


disk_iter.menu

'( name: DISK_ITER
   label: "Disk Iteration Demo"
   value_tab: 50
   exec_data: ("DISK_ITER"
      ("GENERAL"
         ("DUMMY" implicit: 0)
      )
   )
)


disk_iter.c

/*--------------------------------------------------------------------*/
/* disk_iter
/* A demo program for iterating over a dataset and scaling the
/* shot records by the power of the shot record.
/*
/*--------------------------------------------------------------------*/

/* include promax interface, globals, error codes, etc */
#include "cpromax.h"
#include "cglobal.h"

/* define saved parameters */
BEGINPARMS
   float *scalVals;     /* scalar values, one per shot */
   int    hdrIndexSIN;
ENDPARMS(parms)

/*-------------------------------------------------------------------*/
/*
/* description:
/*    initialization routine for disk_iter
/*
/* output arguments:
/*    len_sav   - length of the saved common block
/*    itooltype - type of tool
/*
/*-------------------------------------------------------------------*/
void init_disk_iter_( int *len_sav, int *itooltype )
{
   int nShots, numIters, i;

   /* set the global pointers */
   GlobalRuntime *gr = globalRuntime;
   GlobalGeom    *gg = globalGeom;

   /* make sure the data are sorted by source */
   if( gr->ipsort != ISIN ){
      exErrFatal("Data must be sorted by SIN for this program to operate.");
   }

   /* allocate and initialize enough memory to store shot scalar values */
   nShots = gg->maxsin - gg->minsin + 1;
   if( nShots < 1 ){
      exErrFatal("Invalid shot range in the geometry database.");
   }
   parms->scalVals = (float*)malloc( nShots * sizeof(float));
   if( parms->scalVals == NULL ){
      exErrFatal("Memory allocation error, %d bytes requested.",
                 nShots * sizeof(float));
   }
   for( i = 0; i < nShots; i++ ){
      parms->scalVals[i] = 0.0;
   }

   /* set the number of disk iterations, note that this is a call to a */
   /* FORTRAN routine so the address of numIters has to be passed.     */
   numIters = 2;
   ex_set_diskiter_( &numIters );

   /* get the trace header index of SIN */
   if( hdrExists("SIN") ){
      parms->hdrIndexSIN = hdrIndex("SIN");
   }
   else{
      exErrFatal("The header SIN does not exist in the dataset."
                 "  Assign geometry to the data.");
   }

   /* set the number of words that need to be saved */
   *len_sav = NPARMS(parms);

   /* set the tool type */
   *itooltype = ICOMPLEX;
}

/*-------------------------------------------------------------------*/
/*
/* description:
/*    execution routine for disk_iter
/*
/* input/output arguments:
/*    trace - data trace
/*    ithdr - int header array
/*    rthdr - float header array
/*
/*-------------------------------------------------------------------*/
void exec_disk_iter_( float *trace, int *ithdr, float *rthdr)
{
   int   i, iShot;
   float val;

   /* set the global pointers */
   GlobalRuntime *gr = globalRuntime;
   GlobalGeom    *gg = globalGeom;

   /* index of this trace's shot within the scalVals array */
   iShot = ithdr[ parms->hdrIndexSIN ] - gg->minsin;

   if( ithdr[ hdrIndex("DISKITER") ] == 1 ){
      /* .. this is the first pass of the data */
      /* .. sum the squared amplitudes for this shot */
      if( ithdr[ parms->hdrIndexSIN ] >= gg->minsin &&
          ithdr[ parms->hdrIndexSIN ] <= gg->maxsin ){
         for( i = 0; i < gr->numsmp; i++ ){
            parms->scalVals[iShot] += trace[i]*trace[i];
         }
      }
      /* .. we are just getting info from traces, not passing them on yet */
      exFillMode();
   }
   else{
      /* .. this is the second pass of the data */
      /* .. scale the trace */
      val = parms->scalVals[iShot];
      if( val != 0.0 ){
         for( i = 0; i < gr->numsmp; i++ ){
            trace[i] /= val;
         }
      }
      /* .. put this trace back into the flow and receive one on the next call */
      exPipeMode();
   }
}


Appendix: Stand-alone Tool Examples

This appendix provides examples of stand-alone programs. The first group of programs shows how to read data from a ProMAX disk dataset; it includes two FORTRAN examples (prestack.f and poststack.f) and one C example (poststack.c). Note that the FORTRAN routines require a small C program, or wrapper, to pass the command-line arguments through to the FORTRAN entry point; the C program does not require a wrapper, since command-line arguments can be passed to it directly. The wrapper is shown in isolation after the topic list below. The last program (vel_io.f) demonstrates how to input and output velocities to and from the ProMAX database.

Topics covered in this appendix:

➲ prestack.menu
➲ prestack.f
➲ Makefile_prestack
➲ poststack.f
➲ Makefile_poststack
➲ poststack.c
➲ vel_io.f
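For reference, here is the wrapper in isolation. The same code appears as a comment block at the top of prestack.f and poststack.f below; prestack_ stands for whichever FORTRAN entry point the stand-alone tool exports.

/* Minimal C wrapper for a stand-alone FORTRAN tool (compare the comment   */
/* block at the top of prestack.f).  It forwards the packet-file argument  */
/* from the command line to the FORTRAN subroutine PRESTACK( NARGS, CINPUT ). */
void main( argc, argv )
int argc;
char **argv;
{
   prestack_( &argc, argv[1] );
}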


prestack.menu

'( name: DOMAIN_READ
   label: "Domain Read*"
   value_tab: 47

   parameter: LABEL
      text: "Select trace data file"
      type: function:
      type_desc: (dataset_list datasets)
      value: "no entry"
      selected_item: "* No Datasets Found *"
      mouse_text: "Use Mouse Button 1 to select a trace data file description from the datasets menu."

   parameter: DOMAIN
      text: "Domain of the data"
      type: pop_choose:
      type_desc: ( ("CDP" 1 "CDP domain." )
                   ("SIN" 2 "Source domain.")
                   ("SRF" 3 "Surface location domain." ) )
      mouse_text: "Use B1 to pop up a menu to select the domain."

   exec_data: ("DOMAIN_READ"
      ("SPECIAL"
         ("TOOLTYPE" implicit: "STAND_ALONE")
         ("PATHNAME" implicit: "/home/dave/exer/stand_alone/prestack.exe")
      )
      ("GENERAL"
         ("LABEL"  implicit: (value 'LABEL))
         ("DOMAIN" implicit: (value 'DOMAIN))
      )
   )
)


prestack.f C C C C C C C C C C C C C C

This is the C "wrapper" that passes command line args to the FORTRAN stand alone program. This produces the main executable file that is referenced in the menu under the "special" parameter section. void main( argc, argv ) int argc; char **argv; { prestack_( &argc, argv[1] ); } ..... ..... ..... .....

This is a sample program that shows how a stand-alone program can use the database to grab traces without searching. Traces are selected from disk in any order by accessing the trace file using the orginal trace number. SUBROUTINE PRESTACK( NARGS, CINPUT )

IMPLICIT NONE #include "global.inc" #include "mem.inc" INTEGER NARGS, ID_PACKET, IERR, NCHARS, ID_DATASET, IREAD_ONLY, & NDFILES, ISWAPENDS, ITRTOTAL, NUMSMP, NTH, MAXDTR, MAXATR, & IPSORT, IPKEY, ISKEY, IDTYP, IDOMAIN, NTRACES, IGEOM_MATCH, & ITRNO_VALID, NPRIORF, IFORMAT, LENGTH, IH_PKEY, & IX_TRACE, IX_THDR, MAXFOLD, NENS, & INPUT_DOMAIN REAL SAMPRAT, SOFTRLP, GEORLSP CHARACTER CINPUT(*), CAREA(16), CLINE(16) CHARACTER CPACKET_FILE*128, CLABEL*8, CDESC*32, CHDRNAM*8, & CENS_FLAG*3 IF ( NARGS .LT. 2 ) THEN C ......... Nothing was entered on the command line CALL U_ERR_FATAL( 'Packet file must be specified' ) ENDIF C ..... Load the character ARRAY into a character STRING. C ..... understand the difference. CALL U_CARR2STR( CINPUT, CPACKET_FILE, 128 ) C ..... Open CALL & IF ( & C C C C

..... ..... ..... .....

Be sure that you

the parameter packet file, get back the ID, area, and line PKT_OPEN_LOAD( CPACKET_FILE, 'DOMAIN_READ ', ID_PACKET, CAREA, CLINE ) ID_PACKET .EQ. 0 ) CALL U_ERR_FATAL( 'Unable to open packet file' )

Initialize the database for this area and line. Note that CAREA and CLINE are character arrays, not character strings. This call set globals geom. from LIN OPF and directory path for database routines. CALL DB_INIT_LINE( CAREA, CLINE, IERR ) IF ( IERR .NE. 0 ) THEN CALL REPORT_DB_ERR( IERR ) CALL U_ERR_FATAL( 'Unable to initialize the database' )


ENDIF C ..... Get the label for the dataset from the packet file. Parameters C are input using standalone version of EX_GET_PARM CALL U_CGETPARM( ID_PACKET, 'LABEL ', 1, CLABEL, NCHARS ) C ..... Get the requested domain: 1=CDP, 2=SIN, 3=SRF CALL U_GETPARM( ID_PACKET, 'DOMAIN ', 1, INPUT_DOMAIN ) IF ( INPUT_DOMAIN .EQ. 1 ) THEN CENS_FLAG = 'CDP' ELSEIF ( INPUT_DOMAIN .EQ. 2 ) THEN CENS_FLAG = 'SIN' ELSEIF ( INPUT_DOMAIN .EQ. 3 ) THEN CENS_FLAG = 'SRF' ENDIF C ..... Open the dataset (status=readonly) IREAD_ONLY = 1 CALL DISKIO_OPEN( CLABEL, IREAD_ONLY, ID_DATASET, IERR ) IF ( IERR .NE. 0 ) THEN CALL U_ERR_FATAL( & 'Unable to open the trace dataset ' // CLABEL ) ENDIF C ..... Get misc. info on the dataset. This is where runtime globals C ..... are set. CALL DISKIO_INFO( ID_DATASET, NDFILES, ISWAPENDS, & ITRTOTAL, SAMPRAT, NUMSMP, NTH, MAXDTR, & MAXATR, IPSORT, IPKEY, ISKEY, IDTYP, IDOMAIN, & NTRACES, IGEOM_MATCH, ITRNO_VALID, NPRIORF, SOFTRLP, & GEORLSP ) C C C C C

..... ..... ..... ..... .....

Set up the trace headers (this will cause a bunch of calls to HDR_ADD, creating all of the header entries that were found in the dataset). Complete header description info (list of names, format, length, etc) is now available using the same in-line executive header subroutines. CALL DISKIO_THDR_SETUP( ID_DATASET, IERR )

C ..... Now we can get header information. We are after the index of C ..... the primary key in question. IF ( CENS_FLAG .EQ. 'CDP' ) THEN CHDRNAM = 'CDP ' ELSEIF ( CENS_FLAG .EQ. 'SIN' ) THEN CHDRNAM = 'SIN ' ELSEIF ( CENS_FLAG .EQ. 'SRF' ) THEN CHDRNAM = 'REC_SLOC' ENDIF CALL HDR_NAMINFO( CHDRNAM, CDESC, LENGTH, IFORMAT, & IH_PKEY, IERR ) IF ( IERR .NE. 0 ) CALL U_ERR_FATAL( & 'Primary key not found in header' ) C ..... Allocate buffers for a trace and a trace header. C ..... of memory routines as in executive. CALL MEM_RESBUFF( NUMSMP, IX_TRACE, IERR ) CALL MEM_RESBUFF( NTH, IX_THDR, IERR )

Same use

C ..... Determine the actual number of ensembles and the maximum fold.


C ..... Note that all of these variables were set by DB_INIT_LINE. You C ..... should check to see if they are null (geometry not loaded). IF ( CENS_FLAG .EQ. 'CDP' ) THEN NENS = ( MAXCDPz - MINCDPz ) / INCCDPz + 1 MAXFOLD = NTRCDPz ELSEIF ( CENS_FLAG .EQ. 'SIN' ) THEN NENS = ( MAXSINz - MINSINz ) / INCSINz + 1 MAXFOLD = MAXCPSz ELSEIF ( CENS_FLAG .EQ. 'SRF' ) THEN NENS = ( MAXSLOCz - MINSLOCz ) / INCSLOCz + 1 MAXFOLD = MAXTPRz ENDIF

& &

IF ( NENS .LT. 1 ) CALL U_ERR_FATAL( 'Less than 1 ensemble exists' ) IF ( MAXFOLD .LT. 1 ) CALL U_ERR_FATAL( 'Maximum fold is less than 1' )

C ..... Call CALL & &

the routine that does the actual work. STAND_ALONE_WORK( ISPACEz(IX_TRACE), ISPACEz(IX_THDR), ISPACEz(IX_THDR), NUMSMP, NTH, NENS, IH_PKEY, ID_DATASET, CENS_FLAG )

C ..... Close the packet file CALL PKT_FILCLOSE( ID_PACKET, IERR ) C ..... Close the dataset CALL DISKIO_CLOSE( ID_DATASET ) C ..... Report normal completion CALL U_COMP_NORMAL END

&

SUBROUTINE STAND_ALONE_WORK( TRACE, ITHDR, RTHDR, NUMSMP, NTH, NENS, IH_PKEY, ID_DATASET, CENS_FLAG ) IMPLICIT NONE

#include "global.inc"

& &

INTEGER NUMSMP, NTH, NENS, IFOUND_VAL, ITRACENO, ITHDR(NTH), IFOLD, IERR, J, I, IH_PKEY, ITOKEN1, ID_DATASET, IENS, ITOKEN2 REAL RTHDR(NTH), TRACE(NUMSMP) CHARACTER CENS_FLAG*3, CS_DOMAIN*8, CPKEYNAM*8

C ..... Assign the secondary key domain (not called 'FOLD' in SIN domain) C ..... Also assign the primary key name IF ( CENS_FLAG .EQ. 'CDP' ) THEN CS_DOMAIN = 'FOLD ' CPKEYNAM = 'CDP ' ELSEIF ( CENS_FLAG .EQ. 'SRF' ) THEN CS_DOMAIN = 'FOLD ' CPKEYNAM = 'SRF ' ELSEIF ( CENS_FLAG .EQ. 'SIN' ) THEN


CS_DOMAIN = 'NCHANS ' CPKEYNAM = 'SIN ' ENDIF C ..... Loop over the ensembles DO 110 J=1,NENS C ......... Determine which ensemble this is IF ( CENS_FLAG .EQ. 'CDP' ) THEN IENS = MINCDPz + (J-1) * INCCDPz ELSEIF ( CENS_FLAG .EQ. 'SIN' ) THEN IENS = MINSINz + (J-1) * INCSINz ELSEIF ( CENS_FLAG .EQ. 'SRF' ) THEN IENS = MINSLOCz + (J-1) * INCSLOCz ENDIF C ......... Get the fold for this ensemble. Use the ensemble number to examine C ......... the appropriate ordered parameter file and retrieve the fold. CALL DB_ENSEMBLE_MAP( ITOKEN1, CPKEYNAM, CPKEYNAM, & CS_DOMAIN, IENS, IFOLD, IFOUND_VAL, IERR ) C ......... Note that in some situations this might not be a fatal error. IF ( IFOUND_VAL .NE. 1 .OR. IERR .NE. 0 ) THEN WRITE (*,*) CPKEYNAM, J, IFOUND_VAL, IERR CALL U_ERR_FATAL( 'Error getting FOLD' ) ENDIF WRITE (*,*) C ......... Read every trace in the ensemble. C ......... the fold for this ensemble. DO 100 I=1,IFOLD

Note that we are looping over

C ............. Given the primary key and sequence number within ensemble, C ............. get the trace number. Again the order parameter files are C ............. the vehicle for extracting the all important original trace # CALL DB_TRACE_MAP( ITOKEN2, CPKEYNAM, 'sequence', & 'traceno ', IENS, I, ITRACENO, IFOUND_VAL, IERR ) C ............. Note that in some situations this might not be a fatal error IF ( IFOUND_VAL .NE. 1 .OR. IERR .NE. 0 ) THEN WRITE (*,*) CPKEYNAM, I, IFOUND_VAL, IERR CALL U_ERR_FATAL( 'Error getting TRACENO' ) ENDIF C C C C

............. ............. ............. ............. &

This reads in a trace and headers given the original trace number. ITRACENO can be replaced with a negative integer to simply access the traces by sequential order (ie -143 -> gets the 143rd trace). CALL DISKIO_GET_TR( ID_DATASET, ITRACENO, ITHDR, TRACE, IERR )

C ............. These had BETTER be the same IF ( IENS .NE. ITHDR(IH_PKEY) ) THEN WRITE (*,*) I, J, IENS, ITHDR(IH_PKEY) CALL U_ERR_FATAL( 'IENS != ITHDR(IH_PKEY)' ) ENDIF

            WRITE (*,*) 'IENS= ', IENS, ' ITHDR= ',
     &                  ITHDR(IH_PKEY)
  100    CONTINUE
  110 CONTINUE
      RETURN
      END


Makefile_prestack

##############################################################################
# Makefile for ProMAX standalone program
##############################################################################

# Program name
name = prestack

# Object files - preface any product-specific files with "$(product)/"
objs = prestack_main.o prestack.o

# Libraries - to be searched before and after the standard ProMAX libraries
libsbefore =
libsafter =

# Standard rules to make ProMAX programs
include maxprog.make

# be quiet
.SILENT:


poststack.f C C C C C C C C C C

This is the C "wrapper" that passes command line args to the FORTRAN stand alone program. This produces the main executable file that is referenced in the menu under the "special" parameter section. void main( argc, argv ) int argc; char **argv; { poststack_( &argc, argv[1] ); }

C ..... This is a sample program that shows how a stand-alone program can C ..... read post-stack traces (or otherwise access data without using C ..... TRACENO as a key) SUBROUTINE POSTSTACK( NARGS, CINPUT ) IMPLICIT NONE #include "global.inc" #include "mem.inc" INTEGER NARGS, ID_PACKET, IERR, NCHARS, ID_DATASET, IREAD_ONLY, & NDFILES, ISWAPENDS, ITRTOTAL, NUMSMP, NTH, MAXDTR, MAXATR, & IPSORT, IPKEY, ISKEY, IDTYP, IDOMAIN, NTRACES, IGEOM_MATCH, & ITRNO_VALID, NPRIORF, IFORMAT, LENGTH, IH_CDP, & IX_TRACE, IX_THDR REAL SAMPRAT, SOFTRLP, GEORLSP CHARACTER CINPUT(*), CAREA(16), CLINE(16) CHARACTER CPACKET_FILE*128, CLABEL*8, CDESC*32 IF ( NARGS .LT. 2 ) THEN C ......... Nothing was entered on the command line CALL U_ERR_FATAL( 'Packet file must be specified' ) ENDIF C ..... Load the character ARRAY into a character STRING. C ..... understand the difference. CALL U_CARR2STR( CINPUT, CPACKET_FILE, 128 )

Be sure that you

C ..... Open the parameter packet file, get back the ID, area, and line C ..... STAND_ALONE was the name specified in the menu file. CALL PKT_OPEN_LOAD( CPACKET_FILE, 'POSTSTACK_READ ', ID_PACKET, & CAREA, CLINE ) IF ( ID_PACKET .EQ. 0 ) CALL U_ERR_FATAL( & 'Unable to open packet file' ) C ..... Initialize the database for this area and line. Note that CAREA C ..... and CLINE are character arrays, not character strings. CALL DB_INIT_LINE( CAREA, CLINE, IERR ) IF ( IERR .NE. 0 ) THEN CALL REPORT_DB_ERR( IERR ) CALL U_ERR_FATAL( 'Unable to initialize the database' ) ENDIF C ..... Get the label for the dataset from the packet file. C ..... the packet ID is an input arg.


Note that the


CALL U_CGETPARM( ID_PACKET, 'LABEL


', 1, CLABEL, NCHARS )

C ..... Open the dataset, status=readonly IREAD_ONLY = 1 CALL DISKIO_OPEN( CLABEL, IREAD_ONLY, ID_DATASET, IERR ) IF ( IERR .NE. 0 ) THEN CALL U_ERR_FATAL( & 'Unable to open the trace dataset ' // CLABEL ) ENDIF C ..... Get misc. info on the dataset. CALL DISKIO_INFO( ID_DATASET, NDFILES, ISWAPENDS, & ITRTOTAL, SAMPRAT, NUMSMP, NTH, MAXDTR, & MAXATR, IPSORT, IPKEY, ISKEY, IDTYP, IDOMAIN, & NTRACES, IGEOM_MATCH, ITRNO_VALID, NPRIORF, SOFTRLP, & GEORLSP ) C C C C C

..... ..... ..... ..... .....

Set up the trace headers (this will cause a bunch of calls to HDR_ADD, creating all of the header entries that were found in the dataset). Complete header description info (list of names, format, length, etc) is now available using the same in-line executive header subroutines. CALL DISKIO_THDR_SETUP( ID_DATASET, IERR )

C ..... Get the index of CDP CALL HDR_NAMINFO( 'CDP ', CDESC, LENGTH, IFORMAT, & IH_CDP, IERR ) IF ( IERR .NE. 0 ) CALL U_ERR_FATAL( & 'Primary key not found in header' ) C ..... Allocate buffers for a trace and a trace header CALL MEM_RESBUFF( NUMSMP, IX_TRACE, IERR ) CALL MEM_RESBUFF( NTH, IX_THDR, IERR ) C ..... Call the routine that does the actual work. CALL STAND_ALONE_WORK( ISPACEz(IX_TRACE), ISPACEz(IX_THDR), & IH_CDP, ID_DATASET, ITRTOTAL ) C ..... Close the packet file CALL PKT_FILCLOSE( ID_PACKET, IERR ) C ..... Close the dataset CALL DISKIO_CLOSE( ID_DATASET ) C ..... Report normal completion CALL U_COMP_NORMAL END

&

SUBROUTINE STAND_ALONE_WORK( TRACE, ITHDR, IH_CDP, ID_DATASET, ITRTOTAL )

IMPLICIT NONE #include "global.inc" INTEGER ITHDR(*) INTEGER IERR, J, IH_CDP, ID_DATASET, ITRTOTAL


REAL TRACE(*) DO 100 J=1,ITRTOTAL C ......... This reads in a trace, given the relative position within the C ......... file. CALL DISKIO_GET_TR( ID_DATASET, -J, ITHDR, TRACE, IERR ) IF ( IERR .NE. 0 ) THEN WRITE (*,*) '*** ERROR *** ', J, IERR ELSE WRITE (*,*) 'CDP= ', ITHDR(IH_CDP) ENDIF 100

CONTINUE RETURN END


Makefile_poststack

##############################################################################
# Makefile for ProMAX standalone program
##############################################################################

# Program name
name = poststack

# Object files - preface any product-specific files with "$(product)/"
# The following line is for the FORTRAN above
objs = poststack_main.o poststack.o
# The following line is for the C code example below
#objs = poststack.o

# Libraries - to be searched before and after the standard ProMAX libraries
libsbefore =
libsafter =

# Standard rules to make ProMAX programs
include maxprog.make

# be quiet
.SILENT:


poststack.c /* /* /* /* /* /*

----------------------------------------------------------------*/ poststack() This is a stand-alone ProMAX module for demonstration of a stand alone program that reads in data traces and headers. ----------------------------------------------------------------*/

#include "cglobal.h" #include "cpromax.h" #include

void main( int argc, char **argv ) { /* declare local variables */ void *tblPntr; char char int int int

cpacket_file[129], carea[17], cline[17]; *dsetLabel; read_only; id_dataset, errCode, ih_cdp; i_128 = 128;

/* variables realated to the dataset */ int ndfiles, iswapends; int trtotal, numsmp, nth, maxdtr; int maxatr, ipsort, ipkey, iskey, idtyp, idomain; int ntraces, igeom_match, itrno_valid, npriorf; float samprat, softrlp, georlsp; /* pointers to memory to hold a data trace and its header */ float *trace; int *ihdr; /* function used in the main program */ void stand_alone_work( float *trace, int *ithdr, int ih_cdp, int id_dataset, int itrtotal ); /* set the global pointers */ GlobalRuntime *gr = globalRuntime; GlobalGeom *gg = globalGeom; GlobalMisc *gm = globalMisc; /* Put argv[1] into a string called cpacket_file, this is a FORTRAN call */ u_carr2str_( argv[1], cpacket_file, &i_128, strlen(argv[1]) ); /* null terminate the string at the end */ cpacket_file[128] = '\0'; /* Open the parameter packet file and initialize the database */ /* This gives access to opf files, not actually used in this program. /* Note that the character string must be padded to 16 characters. */


*/


/* The character string "STAND_ALONE " matches the tool name in the menu. */ initStandAlone(argc, argv, "STAND_ALONE ");

/* Get the 8-character label for the dataset from the packet file. */ uParGetString( "LABEL", &dsetLabel ); /* Open the dataset, status=readonly, note we are calling a FORTRAN routine read_only = 1; diskio_open_( dsetLabel, &read_only, &id_dataset, &errCode );

*/

if( errCode != 0 ){ uErrFatal("Unable to open the trace dataset %s", id_dataset ); } /* Get misc. info about the dataset */ diskio_info_( &id_dataset, &ndfiles, &iswapends, &trtotal, &samprat, &numsmp, &nth, &maxdtr, &maxatr, &ipsort, &ipkey, &iskey, &idtyp, &idomain, &ntraces, &igeom_match, &itrno_valid, &npriorf, &softrlp, &georlsp );

/* /* /* /* /*

Set up the trace headers (this will cause a bunch of calls to HDR_ADD, creating all of the header entries that were found in the dataset). Complete header description info (list of names, format, length, etc) is now available using the same in-line executive header subroutines. diskio_thdr_setup_( &id_dataset, &errCode ); if( errCode != 0 ){ uErrFatal("Error setting up trace headers for dataset."); }

*/ */ */ */ */

/* Get the index of CDP */ if( hdrExists("CDP") ){ ih_cdp = hdrIndex("CDP"); } else{ uErrFatal("Primary key header CDP does not exist in the data ."); } /* Allocate buffers for a trace and a trace header */ trace = (float*)malloc( numsmp * sizeof(float)); ihdr = (int*)malloc( nth * sizeof(float)); if( trace == NULL || ihdr == NULL ){ uErrFatal("Error allocating memory for trace and header."); } /* Call the routine that does the actual work */ stand_alone_work( trace, ihdr, ih_cdp, id_dataset, trtotal ); /* Close the dataset */ diskio_close_( &id_dataset ); /* notify the system of a normal completion */ u_comp_normal_(); }


void stand_alone_work( float *trace, int *ithdr, int ih_cdp,
                       int id_dataset, int itrtotal )
{
   /* local variables */
   int errCode, j, trNum;

   /* loop over the dataset trace numbers, they range from 1 to itrtotal. */
   for( j = 1; j <= itrtotal; j++ ){