SCRC Data Capture and Analysis Software FAQ

This FAQ is compiled by Gilles Detillieux of the SCRC, based on questions posed at various times, mostly via e-mail, or sometimes in anticipation of questions likely to come up. Questions (and answers!) are greatly appreciated. Please send questions and/or answers to the SCRC.

Frequently Asked Questions

1. General

1.1. What is the name of the software package?
1.2. I think I found a bug. What should I do?
1.3. I have a great idea for the software!
1.4. I sent mail to Gilles, but I never got a response!
1.5. My question isn’t answered here. Where should I go for help?


2. Getting the Software

2.1. What is the latest version of neuro?
2.2. How much does the software cost?
2.3. Can I get the source code for the software?
2.4. What systems can run this software?
2.5. Can’t I run this software under Windows?
2.6. Where can I find the documentation?
2.7. What programs are included in the analysis software package?
2.8. What programs can be added to the analysis software package?
2.9. How can I write my own analysis scripts?


3. Capture

3.1. How many channels can I capture at once?
3.2. What A/D boards and systems are supported for capture?


4. Analysis Techniques

4.1. How do I do …?
4.2. How do I integrate spike activity from ENGs/EMGs?
4.3. How do I use the raster program to make cell firing activity raster plots?
4.4. How do I make polar plots?
4.5. How do I reframe using a cycle trigger, rather than a spike trigger?
4.6. How do I make a PSTH in analysis?
4.7. How do I measure mean waveform amplitude?
4.8. How do I measure area under curve for waveform bursts?
4.9. How do I plot stepping frequency over time?


5. Import/Export

5.1. How do I get the waveforms out in ASCII?
5.2. How do I import data from Axoscope, pClamp, or Spike2?
5.3. Why do I get segmentation fault errors when using axon2run?
5.4. How do I import data from other programs?
5.5. How do I print from the Mac version of the analysis software?
5.6. How do I export my analysis graph data as CSV?


6. Troubleshooting

6.1. In analysis, I get the warning “Parameter file xxx.prm will not make use of run file xxx.frm”. What can I do to fix this?
6.2. In calibrate, cap or chanmon, why do I get the error “netcap status check failed connecting stream socket”?
6.3. In calibrate, cap or chanmon, why do I get the error “netcap status check failed Forbidden: A/D device cannot be accessed”?

Answers

1. General

1.1. What is the name of the software package?

The official name of our software package is SCRC Data Capture and Analysis Software, but it’s also unofficially known as the neuro package. Either name will do.

1.2. I think I found a bug. What should I do?

Your best bet is to e-mail Gilles directly (see contact info at the bottom of this page). Please try to include as much information as possible, including the version of the package you are running (see question 2.1), the operating system you have on your computer, the program that failed, any error messages you saw displayed, and anything else that might be helpful in reproducing the problem.

1.3. I have a great idea for the software!

Great! Development of this software continues through suggestions and improvements from users. If you have an idea (or even better, a tip or script to contribute), please send it to us. We may decide to share it on this site so others can use it.

1.4. I sent mail to Gilles, but I never got a response!

While Gilles does his level best to answer all his e-mail, especially from users of the neuro package, he is often swamped, so messages may at times slip through the cracks. If you don’t get a reply within 2 or 3 days, try sending a reminder. Gilles has returned to only doing development on a part-time, contract basis, so his time to work on the software is now quite limited. We have also had to phase out the support mailing list we used to have.

1.5. My question isn’t answered here. Where should I go for help?

Before you go anywhere else, think of other ways of phrasing your question. It may be very similar to other questions answered here. While we try to phrase the entries in the FAQ closely to the most common questions, we can’t get them all! The next place to check is the documentation itself. In particular, take a look at the list of analysis parameters. You should also try a search of our web site, which will include the contents of our online documentation, tutorials, and help files. Finally, if you’ve exhausted all the online documentation, then contact Gilles for help (or ask your supervisor to contact him).

2. Getting the Software

2.1. What’s the latest version of neuro?

The latest pre-built release is 20190220, for Linux or Windows (Cygwin). Mac OS X/macOS is no longer supported. Any supported system can be updated to the latest release on request. The version number is simply the date on which the source code was packaged up, before being built into binary packages for installation.

To find out what version you’re running, it’s usually just a matter of running the “rpm” command on Red Hat Linux to query the “neuro” or “neuro-cap” package. The neuro-cap package is installed on data capture and analysis systems, while the neuro package is for analysis-only installations. The command “rpm -q neuro neuro-cap” will tell you what is installed. For non-Red Hat systems, you can just look at the date of the ChangeLog file that comes with the package, e.g.: “ls -l /usr/neuro/doc/ChangeLog” as this is the last file updated before the package is built. If you don’t have a ChangeLog file, then your installation is from July 2000 or earlier. The date of /usr/neuro/bin/analysis should give you a rough idea of the release date.

2.2. How much does the software cost?

The SCRC Data Capture and Analysis Software package is no longer being supported or sold.

2.3. Can I get the source code for the software?

Yes, for existing installations, full source code has always been included with the SCRC software. You can usually find it under the /usr/neuro/src directory on any system on which the software is installed.

2.4. What systems can run this software?

While a number of systems have supported this software in the past, we currently recommend Intel Core i5/i7 or compatible PCs running Red Hat Enterprise Linux or CentOS, for data capture as well as analysis. The software also runs under Cygwin on Windows 10 systems (see below).

2.5. Can’t I run this software under Windows?

Until the summer of 2009, the answer was no. Because the software was written to run on UNIX-compatible operating systems, like QNX, Linux and Mac OS X (for the analysis portion), porting it to run directly on a Microsoft Windows system would have been rather difficult. But an emulation environment called Cygwin/X makes that task considerably easier, and has become reliable enough that we now support it. So the analysis portion of the software can now run under Windows 10 or 7. Additionally, using some form of virtualization technique, you can run a Linux system as a virtual machine right on your Windows system. One such virtualization technique, built into 64-bit versions of Windows 10’s Creators Update (1703 or 1709), is the Windows Subsystem for Linux (WSL), which can run the SCRC software in a virtualized Ubuntu Linux bash shell environment.

By means of our networked capture server for National Instruments USB DAQ devices under Windows, both the Cygwin/X and WSL installations of the SCRC analysis software can also do data capture under Windows.

Finally, because the software runs on the X Window System on Linux or UNIX compatible systems, it is possible to run the software remotely on any of these systems on the network, while displaying on any other system that supports the X Window System for display. As software is available for Microsoft Windows to turn a Windows PC into an X Window System terminal, you can run the software over the network from any Windows-based PC, using remote X11 display. If you already have a Linux system running our capture and analysis software at your site, then this may be the preferable way of running analysis from your Windows desktops, as it keeps your data centralized on your Linux server system.

2.6. Where can I find the documentation?

The documentation for the most recent release we have installed is posted at scrc.sites.umanitoba.ca/wp/our-research/scrc-data-capture-and-analysis-software/. In all releases, much of the documentation is included in the /usr/neuro/doc subdirectory of the source distribution, so you have access to the documentation for your installed version. On Red Hat Linux installations, the RPM package will also install the manual pages in the appropriate directories so they can be accessed with the man command, e.g.: “man appendrun“.

2.7. What programs are included in the analysis software package?

There are two ways to determine the answer to this question. The first, and likely most helpful one, is to have a look at the listing of SCRC Data Capture and Analysis Manual Pages on our web site, to see what’s normally available in the current release. This will tell you not only what the commands are, but what they do, and will link to documentation for all of these commands.

The second way is to get a definitive list of all the commands installed on your system right now. You can do this by looking in the /usr/neuro/bin directory on any system on which the software is installed, e.g.: “ls -l /usr/neuro/bin“. If the command you’re looking for isn’t listed there, chances are it’s an add-on command or script as described below.

2.8. What programs can be added to the analysis software package?

The analysis script archive on our web site is a collection of locally developed or user-contributed analysis scripts, which can be freely downloaded and installed on your system. In some cases, scripts on our web site are already included in the package, but the version on our site may be newer, with added features. If in doubt, look in the /usr/neuro/bin directory, as described above, to see which commands are included in the analysis software package on your system. If the script you’re looking for isn’t listed there, or its size differs from that listed in the analysis script archive, then you should probably download and install the needed script.

To obtain the scripts from our web site, download the complete ZIP file at https://scrc.sites.umanitoba.ca/wp-content/uploads/sites/18/2024/07/analysis_scripts.zip, then browse the ZIP archive and extract a copy of any script you want. Install the extracted scripts in /usr/local/bin or /usr/neuro/bin on your analysis system, and make each one executable with the command “chmod +x filename”, where filename is the full pathname of the installed script, e.g. /usr/local/bin/gensspp. This will enable the script to be run as a command on your system. You will need to be logged in as “root”, or raise yourself to administrator status with the “su” command, in order to install scripts in either of these two directories. The choice between the two directories is yours, but you should be consistent, to avoid losing track of what you’ve installed.

While it might be a bit of a bother to have to install your own add-on commands, it allows for quicker updates than what you can get strictly from the packages of distributed software. Having access to our script archive gives you updates to the latest features and user contributions as they happen, without having to wait for them to be packaged in the next analysis software release.
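As a concrete sketch of those install steps (gensspp is just the example script name here, and the scratch directory stands in for /usr/local, which you would use, as root, for a real install):

```shell
# Walk-through of the install steps above, using gensspp as the
# example script name and a scratch prefix so the steps can be tried
# safely; for a real install, do the same cp/chmod as root into
# /usr/local/bin or /usr/neuro/bin.
prefix=scratch-prefix                        # stand-in for /usr/local
mkdir -p "$prefix/bin"
printf '#!/bin/sh\necho demo\n' > gensspp    # stand-in for the downloaded script
cp gensspp "$prefix/bin/gensspp"
chmod +x "$prefix/bin/gensspp"
"$prefix/bin/gensspp"                        # now runs as a command, printing "demo"
```
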

2.9. How can I write my own analysis scripts?

You will need to get familiar with basic Unix/Linux commands, as well as the basics of writing shell scripts. There are a number of helpful tutorials online to get you started. Of course, they don’t deal with how to use the SCRC analysis software within a shell script, but they’ll cover the basics of control structures (loops, if-then-else), variables, basic commands, I/O redirection, etc.

The next thing to do is to learn by example. Look at some of the simpler scripts in our analysis script archive and see how they function. You may find something there that you can use as a starting point for your own analysis script.
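To give a feel for the shape of such a script, here is a minimal skeleton; the echo is only a placeholder for whatever SCRC command you would actually run on each file (burstareas, getwfdata, etc.), and the .frm handling is just one possible convention:

```shell
# Bare-bones skeleton for an analysis script: loop over run files
# given as arguments and run one processing step on each.
process_runs() {
    for runfile in "$@"; do
        base=${runfile%.frm}         # strip a .frm extension, if present
        echo "processing $base"      # replace with the real analysis step
    done
}

process_runs walk1.frm walk2.frm     # prints "processing walk1" then "processing walk2"
```
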

3. Capture

3.1. How many channels can I capture at once?

This is largely dictated by your hardware and operating system version, as well as a number of capture parameter settings. There are no hard and fast limits on the number, apart from the fact that the only supported A/D cards for a long time had been 16-channel cards, and the software had a limit of 16 traces and 16 waveforms for a given run file. For the Linux-based data capture systems, we have tried to preserve reliable performance capturing 16 channels at 20 kHz per channel.

Until 2006, the only supported A/D cards had also been 12-bit cards. Support for 16-bit cards has since been added, as has support for cards with up to 64 channels (since 2011).
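As a rough back-of-envelope sketch of the throughput that implies (assuming each sample occupies 2 bytes, which is our assumption for 16-bit data, not a figure from the capture software itself):

```shell
# Hypothetical throughput estimate: 16 channels, 20000 samples/s
# per channel, 2 bytes per sample.
echo "$((16 * 20000 * 2)) bytes/s"    # prints 640000 bytes/s (~640 KB/s)
```
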

3.2. What A/D boards and systems are supported for capture?

In addition to our long-standing support for UEI PowerDAQ A/D boards under Linux, we have added a few new options to the mix: National Instruments PCI-6251 or other NI M-series multifunction boards (supported under Linux using the Comedi driver package), or NI USB-6210 or other NI USB-based multifunction DAQ devices (supported under Windows using the NI DAQmx driver package). Other combinations, which we have not yet tested, but which should also work with the software as it stands now, are the following: any multifunction DAQ board supported by the Comedi drivers under Linux, or any National Instruments DAQ board or USB device under Windows.

4. Analysis Techniques

4.1. How do I do …?

If you’re still learning the software, the best way to learn various analysis techniques is to look at the on-line tutorials on our web site. In time, we will be expanding these to include more techniques. We will also be describing specific analysis techniques briefly below, as the need arises, for techniques not covered by the tutorials. For techniques not in the tutorials or the FAQ below, we recommend you browse through the Analysis User’s Manual, or the list of Analysis Methods in the analysis help facility, to find something close to what you need. Then, explore the analysis parameters for that method, to see if you can get it to produce the type of graph you want. See also question 1.5.

4.2. How do I integrate spike activity from ENGs/EMGs?

What we often refer to as “integrating” in electrophysiology is really not integration at all, but simply rectifying and filtering, in order to obtain the linear envelope of a signal. True numerical integration is the inverse of differentiation, in which successive samples are summed up. This summing continues over the whole duration of the signal, such that at any point in time, the value of the integrated signal is the area under the curve of the non-integrated signal from time 0 up to the current point in time. This technique, in and of itself, is not generally used in processing raw ENG or EMG signals.

A variant of this numerical integration is to perform it on time slices of the signal, where you integrate for some short amount of time, after which you zero out the sum and start over integrating the next time slice. This technique, which can be performed in software or hardware, is sometimes used in electrophysiology, as it produces a signal that can approximate the envelope of the signal being integrated.

The more common technique, and the one which we use in our labs, involves full-wave rectification of the signal followed by low-pass filtering. It is often referred to as integration, even though that term isn’t exactly correct, because it somewhat approximates the time-sliced integration technique. This is because the low-pass filter accumulates charge from the signal (like summing it up) and slowly bleeds it off (like resetting the sum, only gradually rather than at fixed time increments). Again, this technique can be performed in hardware or software, and we use both methods in our labs. Some labs are equipped with hardware “integrators” (rectifier/filter units) which can process the raw ENGs into signals that give the ENG envelope, which can then be captured at a lower sampling rate than raw ENGs. More commonly, we will capture the raw ENGs and rectify and filter them in the analysis program.

The trick to this approach is to find the appropriate cutoff frequency for the filter, which will vary depending on the source of your signal. The cutoff frequency must be low enough to filter out individual spikes, but not so low as to attenuate the pattern of bursts of spikes. Our analysis program’s Maint/Filter section can perform the rectification and filtering in one operation. Tutorial 13 and tutorial 14 on our web site give some pointers on how we do this in the analysis software.
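As a toy illustration of the rectify-then-filter idea (the sample values are made up, and a 3-point moving average is only a crude stand-in for the analysis program’s real low-pass filter):

```shell
# Toy numbers standing in for raw ENG samples, one value per line
cat > eng.txt <<'EOF'
1
-4
2
-2
3
EOF
# Full-wave rectify (absolute value), then smooth with a 3-point
# moving average as a crude stand-in for a proper low-pass filter.
awk '{ v = ($1 < 0) ? -$1 : $1       # rectify
       buf[NR % 3] = v               # keep the last 3 rectified samples
       if (NR >= 3)
           printf "%g\n", (buf[0] + buf[1] + buf[2]) / 3
     }' eng.txt > envelope.txt
```

The values in envelope.txt then approximate the envelope of the toy signal; on real data you would tune the filter cutoff as described above.
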

4.3. How do I use the raster program to make cell firing activity raster plots?

Unfortunately, the term “raster” suffers from overload, and means different things in different contexts, which leads to confusion. Even in electrophysiology it has two different meanings:

a) a “waterfall” plot of triggered sweeps of data, plotted front to back with an offset and optional hidden line removal; and

b) a graph of action potential positions, with each line in the graph representing a different cycle, and with each dot marked on the line indicating the action potential position within that cycle.

The raster program in our software suite produces the first type of raster plot, not the second. You can produce the second type within the analysis program, using the graph of action potential position vs cycle (or the sorted version of this). For this analysis, it helps to turn off the Normalization option, and turn on the Display cycle activity option, so you can see the varying cycle lengths and how the spikes line up with them.

You can also produce a waterfall plot of waveform spikes, rather than just spike positions, by converting the waveform to triggered sweeps, which can then be displayed in the raster program. You do this by setting up a cycle triggered average, with a single bin per cycle but with a window duration as long as the longest cycle, so that you can get each full cycle as a triggered sweep. This cycle-triggered reframing technique is described in a little more detail in question 4.5 below. You then set the Preview average data option and Bins-save the preview (raw) data. See Tutorial 9 for an introduction to reframing and the raster program.

4.4. How do I make polar plots?

In the general case, a polar plot is essentially just an X-Y graph wrapped around in a circle, with the X coordinate representing the angle and the Y coordinate representing the radial distance from the centre. So, when you say you want a polar plot of your data, the question becomes what are the data for the X and Y axes of this graph? What these data are will determine the best approach to producing the graph, as there are essentially 3 different approaches to making polar plots in the SCRC software, and one of them can be applied to any cyclical graph in the analysis program.

Most of the polar plots made with the SCRC analysis software are generated by the gensspp script from our analysis script archive. This script can plot the times of onset of activity on one waveform (usually a rectified and filtered ENG) on a cycle defined by the activity on another, or spike (action potential) onset times on a cycle. You will need to install this script on your system (see question 2.8), then follow Tutorial 13 for the procedure to use for making these polar plots, including the work you need to do to prepare the data for analysis (e.g. rectifying and filtering raw ENGs and marking up cycles). Tutorial 3 also explains the procedure for marking up cycles, if you need to go into more detail on the ins and outs of that technique, and the 5th and 6th sections of Tutorial 11 show how you can put the resulting HPGL plot files into final figures.

There are also some polar plotting capabilities right in the analysis program, as well as in our genplot program. A search for “polar plot” on our web site should find all the relevant documentation. The analysis program can turn any normalized cycle graph into a polar plot by setting the Cycles on graph and Normalization parameters. The genplot program can plot data from an ASCII text file into a cartesian or polar graph, and is the engine under the hood of our gensspp script.

4.5. How do I reframe using a cycle trigger, rather than a spike trigger?

The Maint/Reframe operation only uses a spike trigger, so you have to use a somewhat roundabout technique to reframe using a cycle trigger. You do this by setting up a cycle triggered average, with a single bin per cycle. You will need to set the window duration to an appropriate length. For example, you can set it to be as long as the longest cycle, so that you get each full cycle as a triggered sweep. You then set the Preview average data option, and when you perform a Bins-save operation, you can tell the analysis program to save the preview (raw) data. The end result is similar to reframing, but the resulting run file will only have the frame (trace) data, and no copies of the waveforms, just as when you reframe with the “Without-W.F.” selection.

4.6. How do I make a PSTH in analysis?

A peri-stimulus time histogram, or PSTH, is also known as a cross-correlation. That latter term is what the analysis program uses. It can be performed using the W.F. spike cross-correlation histogram analysis. The process is described in detail in Tutorial 16 on our web site.

4.7. How do I measure mean waveform amplitude?

That depends a lot on what part of a signal you want to measure, and whether you want a relative or absolute amplitude. Though many analysis methods involve the measure of amplitude, one of the most versatile for measuring waveform amplitude is the Averaged W.F. amplitude vs step cycle. With step cycles set, you can use this method to overlay all cycles to calculate an averaged curve representing the cycle. The top titles of this analysis will include a number reported as “Amplitude”, which is the peak-to-peak amplitude in the averaged waveform, i.e., the difference between the minimum and the maximum average data points calculated in all bins in the graph. As it’s a difference, this number represents a relative amplitude, independent of the absolute displacement (or DC shift) of the waveform. Note that as you reduce the # bins- graph parameter for this analysis, the amplitude reported at the top of the graph will tend to decrease as well, as the peak & trough of the cycle get smoothed out and reduced by averaging. It’s important to choose a number of bins that will still accurately represent a typical cycle’s shape in order to get a meaningful amplitude measurement in this way. When this analysis is carried out on an intracellular recording of a motoneuron (which is DC-coupled, i.e. not high-pass filtered) the reported amplitude is known as the locomotor drive potential or L.D.P.

If you wish to obtain a single absolute mean amplitude for a region of the waveform, rather than a peak-to-peak or relative amplitude, you can use this same analysis and set the # bins- graph parameter to 1, and set the Start of run and End of run to the range you wish to measure. Set Cycle W.F. # to -1 to treat the entire range to be analysed as a single cycle. This will plot a single point on the graph, representing the mean amplitude as an absolute level. You can then use Bins-save to output the numerical value for that level. Of course, any amplitude measurements are only as accurate as the calibration for the channel from which the signal was recorded, so be sure to properly calibrate the channel before recording, or correct the calibration for the waveform afterward.
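If you ever want to double-check such a mean level outside the analysis program, the calculation is easy to reproduce on exported data. Here is a sketch on a made-up two-column CSV (time,mV); a real Bins-save or getwfdata export may have a different layout, so check which column holds the millivolt values first:

```shell
# Toy two-column CSV standing in for exported data
cat > level.csv <<'EOF'
0,10.0
1,10.4
2,9.6
EOF
# Mean of the second (mV) column
awk -F, '{ sum += $2; n++ } END { printf "mean=%g mV\n", sum / n }' level.csv
# prints: mean=10 mV
```
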

4.8. How do I measure area under curve for waveform bursts?

With any averaged graph in the analysis program, you can set the Show areas under curves option to get this area calculated and shown at the top of the display. This is most commonly used, and most useful, with the Averaged W.F. amplitude vs step cycle graph. Note that because the calculation is done on the displayed graph, the scaling of the graph matters – particularly the lower bound of the Y axis. So it is important to turn off Auto scale and explicitly set the Y-axis lower bound in order to get consistent results. The analysis program calculates for each bin the area from the mean value to the graph’s baseline, whatever that baseline appears as on the displayed graph, and it sums these up for all bins.
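The per-bin sum described above can be mimicked on exported bin data. Here is a sketch with made-up bin means, a made-up bin width of 10 ms, and a baseline of 0 mV (none of these numbers come from the analysis program):

```shell
# Toy bin means (bin#,mV) standing in for Bins-save output
cat > bins.csv <<'EOF'
0,1.0
1,3.0
2,2.0
EOF
# Rectangle-rule sum: (mean - baseline) * bin width, summed over bins
awk -F, -v width=10 -v base=0 \
    '{ area += ($2 - base) * width } END { printf "area=%g\n", area }' bins.csv
# prints: area=60
```

This also makes it obvious why the baseline (the graph’s Y-axis lower bound) matters: shifting it shifts every bin’s contribution.
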

If you’d rather get areas for each burst in a run, you may be better off using the burstareas program or the burstareaplot script from our script archive. See the burstareas manual page for more information on that program, and run “burstareaplot --help” to get more information on the latter script.

4.9. How do I plot stepping frequency over time?

While this graph can’t be produced directly in the analysis program, there are a couple of ways it can be done. One way is to use the W.F. activity burst duration vs cycle duration analysis method, set up to output burst durations as start to start, and to output burst positions on the X axis. With the Cycle W.F. # set to -1 to get the whole run as a single cycle, and Normalization turned off, you’ll get the start time of each burst on the X axis. You can then Bins-save the data and import it into a spreadsheet, where you can invert the Y values to get frequencies, then plot the frequency vs time.

An easier method might be to use the burstareas program, to output burst start times and cycle durations. You can then convert the times from ms to seconds, and invert the cycle durations (while also converting from ms to s, or KHz to Hz), using a handy UNIX tool called “awk”. Here is an example of a command that will do both of these steps:


burstareas -c 0 -d SC runfile | awk -F, '{print $1 / 1000 "," 1000 / $2}' > timefreq.csv

The redirect at the end of that command will save the output in a .csv file. The awk command above does the conversion from ms to s by dividing ms by 1000 in the first column, and does the frequency conversion by dividing into 1000 (equivalent to 1 / (time in s)) in the second column. Finally, you can plot that file using genplot:


genplot -xftimefreq.csv -xc1 -yftimefreq.csv -yc2 -xhTime -yhFrequency > timefreq.plt

See the burstareas and genplot manual pages for more information on these two programs.

5. Import/Export

5.1. How do I get the waveforms out in ASCII?

There are a number of different methods of exporting data from the analysis program. Check out Tutorial 11 for an introduction to exporting data. Also, you can get a raw dump of an entire waveform file, as A/D levels, using the dumpwf script from our script archive, or you can use the newer getwfdata program to dump several waveforms as a CSV file of millivolt values, downsampled to a frequency of your choice.
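Once you have such a CSV, further processing is straightforward with standard tools. For example, here is a sketch that finds the peak-to-peak amplitude of a made-up two-column (time_ms,mV) export; a real getwfdata export may carry more columns:

```shell
# Toy CSV standing in for a getwfdata export
cat > wf.csv <<'EOF'
0,-1.5
1,2.0
2,4.5
3,0.5
EOF
# Track the running min and max of column 2, then report the range
awk -F, 'NR == 1 { min = max = $2 }
         { if ($2 < min) min = $2
           if ($2 > max) max = $2 }
         END { printf "p-p=%g mV\n", max - min }' wf.csv
# prints: p-p=6 mV
```
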

5.2. How do I import data from Axoscope, pClamp, or Spike2?

You can use either axon2run, to convert an ABF format file, or atf2run to convert an ATF format file. The usage of atf2run is similar to that of axon2run, except for the expected input file format. As Axon Instruments has changed its ABF file format since axon2run was written (using Axon’s File Support Pack for DOS), some versions of this program may have difficulties converting newer files from their software, as detailed in question 5.3 below. To handle the new ABF2 format files, produced by Axoscope or pClamp software version 10, you should use the latest version of axon2run (Nov. 1, 2017 and up).

For CED’s Spike2 .smr file (SON-format), you can use the new smr2run conversion program. It was added to the Nov. 1, 2017 release of the SCRC analysis software. If you don’t have it yet, it can also be downloaded from our script archive.

5.3. Why do I get segmentation fault errors when using axon2run?

The axon2run program was written with Axon Instruments’ older DOS-based file support pack, which handled the ABF (Axon binary file) format in use in Axotape and other programs as of the mid-nineties. They have since changed the ABF file format, so axon2run has had problems with some newer files that make use of the extensions to their file format. This is highly dependent on which version of axon2run you are using. We strongly recommend upgrading to the November 1, 2017 version of the SCRC analysis software to have the latest updates to axon2run: it now handles ABF2 files, as well as providing output data at the full 16-bit resolution.

One important point to note is that there are two types of ABF files: Binary Integer and Binary Floating Point. There are also two versions of ABF files: version 1 (produced by version 9 or earlier of pClamp, Axoscope, or Axotape on DOS), and version 2 (produced by version 10 of pClamp or Axoscope). Pre-Nov. 1, 2017 versions of axon2run could not convert ABF2 files. Pre-May 17, 2007 versions of axon2run could not convert Binary Floating Point ABF1 files. If you’re still trying to work with an older version of axon2run, be sure to save your files using Binary Integer, as Binary Floating Point will not work in old versions (pre-2007) of axon2run, and select ABF v1.8, not ABF2. Files which have been analyzed in pCLAMP are saved in floating point by default. If you are experiencing problems with ABF floating point data files, you may wish to upgrade your SCRC analysis software to the latest release (Nov. 1, 2017 & up).

It’s possible that axon2run will encounter problems with some ABF files, even if you’re running the latest version, and that’s certainly possible with the older versions. In those cases, you should convert them to ATF by opening them in Axoscope or whatever program you used, and saving them in the Axon Text File format. You can then convert the text files with atf2run. Note also that in Axoscope and Clampex version 10, they have changed to a new ABF2 file format which is completely incompatible with the File Support Pack we’ve used, so anything but the latest axon2run will not recognize these files. The latest version was developed using incomplete documentation on the new ABF2 format, using the AxonIO input functions of the open-source Neo analysis software as a guideline. This was tested with many ABF2 files, and we’re confident that it’s working reliably, but if you encounter problems (particularly if Molecular Devices changes the format again), then you may need to fall back on atf2run or older ABF versions.

If you use version 10 of Axon Instruments software, and don’t have the latest version of the SCRC analysis software, you must save your ABF files in version 9 format (i.e. ABF v1.8). These version 9 format files can be ABF floating point data, which recent versions of axon2run can convert directly. For older versions of axon2run, you’ll need to reopen the version 9 file in Axoscope 9, and save as binary integer, or save as ABF v1.8 binary integer data if your version 10 software allows it. In version 10.1 and later of Axoscope, you can directly “Save As” or “Export” to the older version binary integer ABF file type: for Save As, select ABF 1.8 (integer), and for Export, select pCLAMP9 ABF (integer). The initial version 10 (10.0) didn’t have this option, but you should be able to download an update from Molecular Devices.

Finally, the pCLAMP 9.0 manual also has this to say, which is relevant for axon2run and likely as well for any other third-party program that can read ABF files: “In order for Clampex 9 binary integer data files to be recognized by software packages compatible with pCLAMP 6 binary data files, it may be necessary to record the data with the Clampex Program Option configured to ‘Ensure data files and protocols are compatible with pCLAMP 6’, and to change the data file’s extension from ‘abf’ to ‘dat’.” Older versions of axon2run did require the .dat extension, but current versions allow either. However, check to see if this option has been set. Note: this option also exists in Axoscope 9’s Program Options, so you should set it there too, if using Axoscope to capture data. However, this option has no effect in Axoscope or Clampex version 10. (We have not determined yet if this is fixed in version 10.1.)
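The extension change mentioned in that quote is just a file rename, which can be done in bulk from the shell (the file names below are made up, and cp is used rather than mv so the originals are kept):

```shell
# Stand-in .abf files; real ones would come from Clampex/Axoscope
printf demo > cell1.abf
printf demo > cell2.abf
# Give each .abf file a .dat copy so old axon2run versions accept it
for f in *.abf; do
    cp "$f" "${f%.abf}.dat"
done
```
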

5.4. How do I import data from other programs?

In addition to the axon2run and smr2run programs mentioned above, we have a number of other programs for importing data from other sources. The bin2run and dat2run programs were written to deal with legacy data files at the SCRC, but could be used as the starting point for other conversion programs. They both deal with averaged triggered sweeps, which go into a .frm file, so this is the sort of data for which they’d best be adapted. You can find the source files for these under the /usr/neuro/src/bin2run directory on any system on which the software is installed. Also in that directory is a script called pp2run, which had been used at one point for importing data from Peak Performance software.

Our script archive also has a few more recent scripts, nsm2run and raw2run, which convert data from custom formats we’ve had to deal with. These scripts make use of the dsepr program to convert interleaved 16-bit binary numbers into our run file format. They could be adapted to other similar formats by changing the code that parses the custom header. There is also a Perl script, sptxt2run, which converts files from Spike2’s exported text file format, for cases where smr2run doesn’t work for you, or where you only want to convert certain exported waveforms.

For dealing with other programs that have an ASCII export capability, you may have better luck with the asc2run program. We’ve used it for importing data from the GENESIS neural simulator, but it is flexible and generic enough to be suitable for many ASCII export formats.

Finally, you can download and use Synaptosoft Inc.’s ABF Utility to convert other file formats, such as Igor Text or Binary files, into ABF files. These can then be converted by axon2run. The ABF Utility can be downloaded separately, or as part of a demo of their Mini Analysis program. This utility runs on Windows systems, so you’ll then need to transfer the ABF files to your analysis system via FTP or some other means before running axon2run on them.

5.5. How do I print from the Mac version of the analysis software?

There are a few wrinkles to printing support on the Mac, depending on which version of Mac OS X you’re running.

  • It’s set up by default for a PostScript printer, and should print to the user’s currently selected default printer.
  • For OS X 10.2, printing to non-PostScript printers is a bit tricky, as you then have to use selectlp to pick a different printer type from the very limited set of printers the software supports directly. Effectively, only PostScript and PCL printers are supported by the software, as the other printer drivers are for more specialized or obsolete devices. We haven’t expanded this list because a) we tend to use PostScript printers almost exclusively now, and b) on Linux systems (at least Red Hat ones), and newer OS X releases, the printing system emulates PostScript for any non-PostScript printer, so from the analysis software’s perspective it’s almost always printing to a PostScript printer.
  • For OS X 10.3 and up, the printing system should emulate a PostScript printer when printing to non-PS printers, as most current Linux systems do.
  • We’ve had reports that every so often, printing on 10.3 simply stops working until you restart the system. We haven’t been able to reproduce this problem on 10.2, nor isolate the cause on 10.3.

If you are unable to print directly from the analysis software to your printer, we recommend plotting to files, and then using hpgl2pdf to convert them to PDF files, which you should then be able to open and print in the Preview application on the Mac, or in Acrobat Reader on any other system. For hpgl2pdf to work on OS X 10.2, you need to install the ghostscript package on your Mac. It’s available from a number of different sources, but probably the easiest way is to download the “ESP Ghostscript” (espgs) package from the gimp-print site, then edit your .profile file to add /usr/local/bin to your PATH environment variable (e.g.: export PATH="$PATH:/usr/local/bin").
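As a minimal sketch, the .profile addition and the conversion step might look like this. The plot file names are hypothetical, and the exact hpgl2pdf invocation should be checked against its own documentation:

```shell
# Add /usr/local/bin to the search path so the ESP Ghostscript
# binaries can be found (put this line in your .profile):
export PATH="$PATH:/usr/local/bin"
# Then convert a plot file and open the result, e.g.:
# hpgl2pdf cell1.plt > cell1.pdf   # hypothetical file names and syntax
# open cell1.pdf                   # opens the PDF in Preview on the Mac
```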

5.6. How do I export my analysis graph data as CSV?

Any of the analysis methods that produce a graph (selected using Analysis/Graph/…) can export the coordinates of their plotted data using the Bins-save operation from the main menu of analysis. By default, it will export just the Y axis coordinates, but that can be changed by setting the Number list format parameter (Set/Disp-opt/Num-format) to

x, y\n

Then when you use Bins-save to save the data, the coordinate pairs will be output as comma-separated values, one per line. This is described in more detail in the help for the Number list format parameter and the Analysis documentation on Bins-save (scroll down to the part with the heading “Saving ASCII data values in a graph”). Note that the output file will also contain a few header and footer lines; these won’t usually pose a problem for programs that import .csv files, but if they do, you can edit them out.
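If the extra header and footer lines do cause trouble, they can be filtered out with a one-line grep. This is a minimal sketch: the file name and the exact header/footer wording are hypothetical and will differ in practice; the filter simply keeps lines that look like numeric “x, y” pairs:

```shell
# Hypothetical example of a Bins-save export: a couple of header/footer
# lines (exact wording will differ) surrounding the x, y coordinate pairs.
cat > cell1-export.txt <<'EOF'
Run file: /home/exp/aug9/cell1
0.00, 1.25
0.01, 1.50
End of saved bins
EOF
# Keep only the lines that look like "number, number" pairs
grep -E '^ *-?[0-9][0-9.eE+-]* *, *-?[0-9][0-9.eE+-]* *$' cell1-export.txt > cell1.csv
cat cell1.csv
```

The cleaned cell1.csv can then be imported directly by spreadsheet or plotting programs.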

6. Troubleshooting

6.1. In analysis, I get the warning “Parameter file xxx.prm will not make use of run file xxx.frm”. What can I do to fix this?

There are a few possible causes of that warning message, all stemming from the fact that one of the analysis parameters in the xxx.prm file is actually the full pathname to the corresponding run file. This was done to allow flexibility in saving multiple parameter files for a given run. Rather than only allowing a single parameter file, with the same name as the run file, the parameter file stores the run file name as one of the parameters. Unfortunately, that flexibility has given rise to situations where the parameter and run file names get out of sync with each other, or with what the user would expect the correspondence to be between the two, so that warning was added to alert the user of this and allow an opportunity to clear up the ambiguity.

Most commonly, the warning results from copying the data files from one directory to another, and then analyzing the data in the new directory. Normally, you can put files in another directory, and analysis will find the run file anyway. It looks first in the original directory, then in the directory where the parameter file is. However, if you have two copies of the data on your system, one in the original location, and one in a new location, the parameter file copied to the new location still points to the run file in the original location – in this case, analysis gives the warning because it will use the run file in the original location instead of the new copy in the same new location as the parameter file you just told it to use.

To make that clearer, here is an example: Let’s say you have a file called cell1.frm, in /home/exp/aug9, and you analyze it. Your parameters will be saved in /home/exp/aug9/cell1.prm. One of the parameters in cell1.prm is the run file name, /home/exp/aug9/cell1.

Now, if you copy all of the aug9 directory to /data/2011/aug9, and then you run analysis /data/2011/aug9/cell1, one of two things can happen:

  1. if the original copy in /home/exp/aug9 is gone, then analysis will look there first, see the file isn’t there, and will look instead in /data/2011/aug9, where the parameter file is, and use the run file there without any warnings.
  2. if the original copy is still there in /home/exp/aug9/cell1.frm, then analysis will look there, find it, and use that run file, even though you’re using a parameter file in /data/2011/aug9 – this is because the run file parameter in that .prm file still contains the full name of the original location.

In this second case, you get a warning, because this is a situation that can confuse the user – while you’re tempted to think you’re looking at the data in the new location, you’re actually looking at data in the old location, but only using analysis parameters in the new location. In fact, if you modify waveform parameters at this point, they will be saved in the old location, not the new one! Only the analysis parameters are in the new location. The fix is to do a Set/File operation, and update the path name for the run file location to give the new location – or just remove the directory names from the run file name if it’s in the current directory.

Another case where you’ll get that warning is if you run something like: analysis cell1, and then Set/File to cell2. In this case it gives the warning because you will be making cell1.prm point to cell2.frm, rather than cell1.frm. Set/File changes the “Run file” analysis parameter, but if you do that in a way that causes an ambiguity – such that the parameter file name implies it should refer to a different run file than the parameter is actually set to – then you get the warning. This warning will happen whenever the ambiguity is detected in a Load, Keep or Set/File operation.

In the case of Set/File, the problem usually stems from using the operation for the wrong reason: to switch from analyzing one file to another, you should normally use the Load operation instead. Set/File is for transferring a set of analysis parameters from one run file to another, but then you should be careful to do a Keep operation to save the new parameters and new run file name into a new parameter file, rather than into the parameter file for the previous run. For example, rather than having cell1.prm saved with the “Run file” pointing to cell2.frm, make sure you save the new parameters as cell2.prm. If you want multiple parameter files for a single run file, make sure none of the parameter file names conflict with existing run file names; e.g. cell3-ar.prm, cell3-atf.prm and cell3-agwvr.prm can all refer to cell3.frm without triggering the warning.

Finally, another case where this problem happens is if you change file using the Load operation, and analysis detects that you have unsaved parameters and asks you if you want to save the modified parameters, but when it asks for the name of the parameter file to Keep, you instead type the name of the parameter (or run) file that you want to Load. Again in this situation, the “Run file” parameter will point to a different run than the parameter file name implies it should, because you’re telling the analysis program, inadvertently, that it should clobber the parameter file you want to load with the parameters you wanted to save for the previous run.

In all these cases, the fix is the same: pay close attention to which parameter file you’re currently using, and which run file it really ought to be using as its “Run file” analysis parameter. Then use Set/File to correct the run file name and path, then Keep the updated parameters. If you do that, it should fix things so you don’t get the warning any more.

To avoid this warning causing problems with shell scripts that run analysis, and might run into parameter files where the “Run file” parameter is out of sync in this way, there is a simple fix to dismiss the warning without the warning prompt consuming keystrokes meant as commands in the script’s input sequence. Just put a space right at the start of the input to analysis, to clear away the warning before any commands are read. If there’s no warning, the extra space has no effect.
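A minimal sketch of the trick, assuming a keystroke-driven input sequence; the “Q” here is a hypothetical placeholder, not a documented analysis command, and stands in for whatever keystrokes your script actually sends:

```shell
# The leading space dismisses the run-file warning if it appears,
# and is harmless if it doesn't. "Q" is a hypothetical placeholder
# for the script's real command keystrokes.
input=' Q'
printf '%s\n' "$input"
# In a real script you would pipe this into the program, e.g.:
# printf '%s\n' "$input" | analysis cell1
```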

6.2. In calibrate, cap or chanmon, why do I get the error “netcap status check failed connecting stream socket”?

This is usually a sign that the networked data capture server (NCAPD) is not running. Check to make sure it’s started up, and if not, try restarting it. (How to start this server varies depending on the system type: on Linux it should start up automatically when the system restarts, but on Windows systems it runs as a user application so you must remember to start it manually before capturing.)

Another possibility is a network port number mismatch or conflict. This indicates a configuration problem which must be rectified, as it would otherwise cause capture to fail consistently. Make sure the port number used by the server matches the port number after the “:” in the URL in the .adboard file in your home directory, or in /etc/sysconfig/adboard on Linux systems if you don’t have a .adboard file. Also make sure no other TCP/IP network service is already using that port number. Port 40880, the usual default, is normally a safe choice that isn’t used by anything else; the port can be changed by command line options in the Windows shortcut to the ncapd .exe file, or in the init script on Linux systems.
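To check whether the port is free or already in use, you can inspect the listening sockets with standard tools. A sketch assuming the default port of 40880 and a netstat that reports listening TCP sockets:

```shell
# Look for a listener on the ncapd port; netstat output formats vary,
# so this pattern matches both "host:40880" and "host.40880" styles.
port=40880
if netstat -an 2>/dev/null | grep -q "[.:]$port .*LISTEN"; then
    status="in use"
else
    status="free"
fi
echo "port $port appears to be $status"
```

If the port is in use but capture still fails, check whether the listener is ncapd itself or some other service, and compare the number against the one after the “:” in your .adboard URL.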

6.3. In calibrate, cap or chanmon, why do I get the error “netcap status check failed Forbidden: A/D device cannot be accessed”?

This is usually a sign that the A/D device isn’t connected. For a PCI board, make sure it is installed properly in the PC. For a USB device, ensure it is properly plugged in to a working USB port, and that the device driver software can detect it. It can also happen if the device is already in use by another data capture process: check to see if cap or chanmon is running in another terminal window.

Another cause is a configuration error, where the bd= parameter in the URL in the .adboard or /etc/sysconfig/adboard file refers to an incorrect device number: in most cases this should be bd=1 for a single A/D device, or the parameter may be omitted.

It may also be that the device driver or library has gotten into a state in which it can no longer communicate with the device, so a reboot may help. Some USB devices can be susceptible to this problem when the USB device is unplugged and replugged after being used, especially while the ncapd server is still running.


See also: SCRC Software On-line Documentation

Revised July 7, 2024.

Copyright © 2024, G. R. Detillieux, Spinal Cord Research Centre, University of Manitoba. All Rights Reserved.