Proceedings. Mathematical, Physical, and Engineering Sciences
2017 Mar 15;473(2199):20160828. doi: 10.1098/rspa.2016.0828

Comment on ‘Are some people suffering as a result of increasing mass exposure of the public to ultrasound in air?’

T. G. Leighton
PMCID: PMC5378247  PMID: 28413349

Abstract

A number of queries regarding the paper ‘Are some people suffering as a result of increasing mass exposure of the public to ultrasound in air?’ (Leighton 2016 Proc. R. Soc. A 472, 20150624 (doi:10.1098/rspa.2015.0624)) have been sent in from readers, almost all based around some or all of a small set of questions. These can be grouped into issues of engineering, human factors and timeliness. Those issues (represented by the most typical wording used in queries) and my responses are summarized in this comment.

Keywords: ultrasound, ultrasonics, hearing, bioeffects, guidelines, audiology

1. Introduction

This comment is written in response to enquiries and comments asking for clarification on specific issues raised in [1], and for assistance in navigating a long paper without an index. Since publication, the paper has been cited both to open up [2,3] and to close down [4] discussion of the safety of ultrasound in air. The level of detail in these responses tries to reflect the range of disciplines from which the enquiries arise. The questions fall into three broad topics: engineering (§2), human factors (§3) and timeliness (§4).

2. Engineering

(a). How can ultrasound in air affect humans when more than 99% of the energy is reflected at the skin?

Such estimates are based on a simple calculation that treats both the air and the soft tissue as fluids and assumes that the ultrasonic wave propagates perpendicularly to the skin. This calculation shows that more than 99% of the incident energy is reflected off the skin and back into air [5]. However, the model behind such calculations is independent of frequency, and so the same conclusion would apply equally at the frequencies at which speech is heard (§3a(i)). Hearing is nevertheless possible at those frequencies because sensory perception in air in complex creatures has favoured the evolution of ears: organs that couple airborne sound into the body far more effectively than the skin does.
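The simple calculation referred to above can be sketched as follows. This is a minimal sketch: the impedance values are assumed round numbers typical of air and of water-like soft tissue, not figures taken from [5].

```python
# Energy reflection coefficient for a plane wave at normal incidence on a
# flat fluid/fluid interface: R = ((Z2 - Z1) / (Z2 + Z1))**2, where Z = rho*c
# is the specific acoustic impedance of each medium. Note that R contains no
# frequency term at all, which is the point made in the text.

def energy_reflectance(z1, z2):
    """Fraction of incident acoustic energy reflected at a fluid/fluid interface."""
    return ((z2 - z1) / (z2 + z1)) ** 2

# Representative impedances (assumed round-number values):
Z_AIR = 1.2 * 343          # ~412 rayl (density ~1.2 kg/m^3, c ~343 m/s)
Z_TISSUE = 1.5e6           # ~1.5 Mrayl, typical of water-like soft tissue

R = energy_reflectance(Z_AIR, Z_TISSUE)
print(f"Reflected fraction: {R:.4%}")   # ~99.9%, i.e. 'more than 99%'
```

The same number emerges at 1 kHz or 20 kHz, which is precisely why the model cannot distinguish ultrasound from audible speech.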

The argument that ultrasound in air cannot affect humans, because the vast majority of it is reflected by the skin, contains within it the unspoken and questionable assumption that the ears (and the associated hearing and balance systems) play no part in sensitivity to frequencies in air in excess of 17.8 kHz (see §3a(i)). The admittedly sparse data undermine this assumption, because evidence suggests that some individuals experience an auditory sensation at such frequencies [6,7]. The level of data available at this time does not allow us to relate an individual's acuity of hearing ultrasound in air to the potential to generate adverse effects, and we should beware any implicit assumption that maps from the one to the other [1, pp. 9, 18]. However, the lack of data, and of a testable mechanism by which adverse effects might occur, makes it premature to rule out any role by the ear in generating adverse effects from ultrasound in air. This is the rebuttal to any assertion that such adverse effects cannot occur because of the reflectivity of the skin.

As a postscript to the use of the simple calculation mentioned at the start of this section, it is in fact incorrect to characterize reflection and transmission at the ear by using a simple fluid/fluid flat-interface normal-incidence model (as demonstrated by the fact that the reflection coefficient predicted by such a model is frequency independent). This is because the propagation of energy in the middle ear, and inner ear, is not simply via longitudinal waves in a fluid, but via more complex wave motion [8]. Measurements of the reflection coefficient at the entrance to the ear canal show that it is highly dependent both on frequency and on the characteristics of the particular individual's ear. After averaging across individual adults, the mean power reflectance is estimated to vary from approximately 0.4 to 0.6 between 10 and 20 kHz [9], values confirmed up to 15 kHz by Farmer-Fedor & Rabbitt [10].

(b). How can ultrasound in air affect humans when the incident energy in ultrasonic fields is low?

The intensity in a beam of ultrasound in air having a sound pressure level (SPL) of 140 dB re 20 µPa has ‘approximately one-tenth of the intensity of ground-level daylight on a clear day’ [1, p. 37]. Such 140 dB re 20 µPa signals are far stronger than one would expect to find in approximately 20 kHz public exposures, as they exceed by 30 dB or more any of the current maximum permissible levels (MPLs) in fig. 3 of [1]. Given that such fields are then reflected by the skin (§2a), one would not expect the fields in public places measured in [1] to cause perceptible physical effects such as radiation pressure, heating, etc. (it was only in laboratory tests at levels in excess of 140 dB re 20 µPa that some authors reported mild to very unpleasant heating at fingers, skin clefts and nasal passages in humans [11–13], as reported in [1, p. 37]).
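The daylight comparison follows from the plane-wave relation between pressure and intensity. The sketch below assumes round-number values for the characteristic impedance of air and for clear-day solar irradiance; it is an order-of-magnitude check, not a reproduction of the calculation in [1].

```python
# Plane-wave intensity implied by a given SPL: I = p_rms**2 / (rho * c).
# Assumed round-number constants:
P_REF = 20e-6              # reference pressure, Pa (0 dB re 20 uPa)
RHO_C = 415.0              # characteristic impedance of air, rayl (assumed)
DAYLIGHT = 1000.0          # ground-level solar irradiance, W/m^2 (assumed)

def intensity_from_spl(spl_db):
    """Plane-wave intensity (W/m^2) for a sound pressure level in dB re 20 uPa."""
    p_rms = P_REF * 10 ** (spl_db / 20)
    return p_rms ** 2 / RHO_C

i_140 = intensity_from_spl(140.0)
# ~96 W/m^2, i.e. roughly one-tenth of the assumed daylight irradiance
print(f"140 dB re 20 uPa -> {i_140:.0f} W/m^2, {i_140 / DAYLIGHT:.2f} of daylight")
```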

Just as for §2a, the data on the possible adverse effects of ultrasound in air are currently too sparse to confirm or deny mass public exposure to ultrasound as the source of any adverse effect reported to date, let alone confirm a possible mechanism. Therefore, it is vital that we do not make judgements that rely on an assumed mechanism, especially if such assumptions are not explicitly stated (see [1, pp. 3, 10, 15, 18, 19, 24, 35, 36, 39, 40]). This is particularly so when, as here, the list of anecdotal symptoms shares much in common with those that have been anecdotally or medically attributed to other causes (e.g. anxiety). That list (nausea, dizziness, migraine, fatigue, tinnitus and ‘pressure in the ears’) need not require the physical damage on which are based arguments citing the energy entering the tissue (§3a(ii)). Indeed, the references in [1] tend to suggest that the reported adverse effects (headaches, nausea, etc.) disappear after the ultrasonic signal ceases, which is suggestive (but by no means conclusive) for the possibility of a mechanism where the response is not simply related to the absorbed energy, and certainly not one that relies on physical damage to tissue.

One issue that needs urgent attention is that, from our limited acoustic field measurements at sites of anecdotal distress, the third-octave SPLs of the fields which individuals report to us as disturbing are very often so low (the data are unconfirmed and so not yet reported) that, if the same third-octave SPLs were produced at lower frequencies by, say, broadband traffic noise, one would not expect nausea, dizziness, migraine, fatigue, tinnitus or ‘pressure in the ears’. The levels are well within the MPLs quoted in fig. 3 of [1]. This makes a physical damage mechanism unlikely and suggests either that the adverse effects will not be reproducible in a double-blind trial or that they are produced by some other mechanism (note the discussion regarding birdsong in §3a(ii)). Speculatively, among other candidate mechanisms (§3b(ii)), the tones emitted in public places could conceivably annoy or affect concentration and performance, and by raising anxiety generate other adverse effects in those who can hear them.

This raises an interesting further possibility, in asking how indirect a mechanism we must consider if we are concerned with human safety. Using my tablet computer, I have detected ultrasonic signals in air at around 20 kHz at airports and sports stadiums (I assume from public address voice alarm (PAVA) sources). These are both locations where we rely on adequate performance by dogs to mitigate the individual and societal hazards posed by bombs, drugs, etc. Extending the question posed in the title of this section, should we expand the range of mechanisms for adverse effects on humans to include the speculative possibility of underperformance by dogs, including guide dogs, in the presence of ultrasound in air? Should we consider whether or not there are adverse effects (including habitat denial) on protected species of birds, and on bats?

It must be stressed that these are hypothetical scenarios for which the data are lacking, and the boundaries for considering human adverse effects have not yet been set by regulatory bodies; they were therefore omitted from the formal journal article [1], but are introduced here in response to queries inviting speculation.

The conclusion is that any consideration of adverse effects from ultrasound in air should take account of mechanisms (such as the effects on sensory apparatus) that would probably require far lower energies to stimulate than would the mechanisms for physical damage that probably prompted this question.

(c). Ultrasonic safety is not an issue, because we have decades of experience on the safe use of ultrasound in fetal scanning

Conflating the guidelines for possible health effects of ultrasound that [1] reviews (in air, at around 20 kHz) with sonography (ultrasound in tissue at frequencies in excess of 1 MHz) is misleading. Indeed, [1, pp. 28, 37] explicitly warns against such conflation and explains that the mechanisms by which hazards occur during sonography are very unlikely to be those that would be at issue if ultrasound in air were indeed found to cause adverse effects at the levels to which the public are exposed. Guidelines for the safe use of ultrasound for fetal scanning [14] have as a basis a clear understanding of the mechanism by which adverse effects are generated.

However, there is one useful comparison to be made with sonography: during fetal scanning, we are unlikely ever to use signals outside of the range 1–10 MHz (possibly harnessing some energy up to 30 MHz). The signal cannot achieve the same goals if it uses significantly lower frequencies, because spatial resolution would be reduced; and because higher megahertz frequencies are more strongly absorbed, their use is restricted to shallower examinations (the skin, the eye, etc.). The pulse profile used in fetal scanning is also constrained by the need to process echoes to obtain specific information for the clinician. In contrast, with many current and proposed applications or incidental emissions of ultrasound in air, we are not so constrained in the frequencies and pulse profiles used: there are options available [15]. If we could undertake the research to understand what signals stimulate adverse effects, how these might be avoided, and how quickly acclimatization without damage occurs, then we might find ways of allowing technological progress to proceed without adverse effects. For example, the strong PAVA emission reported in [1] is an unintentional by-product of the monitoring system that ensures system integrity, and can be replaced (by shifting to a sufficiently high frequency, reducing amplitude, replacing tonal outputs by more acceptable ones or making use of a DC monitoring system as opposed to using approx. 20 kHz [16,17]). We need to move away from a position where the guidelines have an inadequate basis and are often improperly applied (e.g. occupational guidelines applied to public exposures): such a position generates a space that allows technology to be deployed without sufficient information being available to assess its potential for adverse effects (or lack of these), and without the options that might be deployed to ameliorate them.

(d). Smart phones are not suitable instruments for measuring ultrasonic fields

The 2016 paper [1] never quotes levels (in dB) measured on a smart phone and warns against such a practice. That paper also emphasizes the need to trace measurements back to primary standards [1, pp. 9, 33, 41, 43]. All dB levels in [1] were measured on two or three calibrated systems, sampled at 96 kHz and traceable back to primary standards [1, p. 4, appendix A and captions to figs 1 and 2]. Figure 1a of [1] was included to contribute to the citizen scientist mission: it shows the citizen scientist how to use a smart phone to detect a signal's presence (not its intensity) at around 19.2 kHz and masks the microphone for part of the time history to show that this is an acoustic signal. Citizen scientists responded to this, and to a demonstration video [18], by generating a database [19] of ultrasound in air taken by members of the public using smart phones and tablet computers (for demonstration only, there being no audit or control possible, as with many citizen scientist projects). Taking spectra in this way is within the capabilities of such devices, so long as the microphone/amplifier chain has sufficient frequency response and the highest frequency recorded (with an anti-aliasing filter in place) is no more than half the sampling rate (44.1 kHz being the sampling rate for audio format for compact disc; some smart phone apps sample at 48 kHz, and so sensibly do not display data above approx. 22 kHz).
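The distinction drawn above between detecting a tone's presence and measuring its level can be sketched in code. The fragment below synthesizes a 19.2 kHz tone at a 96 kHz sampling rate (the tone frequency and sampling rate follow the description of figure 1a and the calibrated measurements in [1]) and tests for its presence with the Goertzel algorithm; the detection method and threshold are the author's illustrative assumptions, not a procedure from [1]. The amplitude is arbitrary: absolute dB levels would require a calibration traceable to a primary standard, which this sketch deliberately omits.

```python
import math

# Goertzel algorithm: power of a single DFT bin, a cheap way to test for the
# presence of one known frequency without computing a full spectrum.
def goertzel_power(samples, fs, f_target):
    n = len(samples)
    k = round(n * f_target / fs)          # nearest DFT bin to the target
    w = 2 * math.pi * k / n
    coeff = 2 * math.cos(w)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

fs = 96_000                               # sampling rate, Hz (Nyquist 48 kHz)
tone = [math.sin(2 * math.pi * 19_200 * i / fs) for i in range(fs // 10)]

in_band = goertzel_power(tone, fs, 19_200)   # probe at the tone frequency
off_band = goertzel_power(tone, fs, 12_000)  # probe away from the tone
print(in_band > 100 * off_band)              # tone clearly present at 19.2 kHz
```

A 48 kHz device has a Nyquist limit of 24 kHz, so tones near 20 kHz remain representable; at 44.1 kHz the limit drops to 22.05 kHz, which is why such apps sensibly truncate their displays.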

Figure 1. Recordings of the acoustic field in a university computer room, at a range of 1 m from the loudspeaker that was the presumed main source, showing (a) the spectrogram and (b) the sound autospectral density. The colour scheme in (a), and the vertical axis in (b), share a common description ((dB re 20 µPa) per Hz, as shown by the arrows at the base of the figure). Further details are given in appendix B. The significance of the broken lines is described in the text.

During measurements of ultrasound that record SPLs in dB, taken using an appropriate microphone/preamplifier/filter/data acquisition chain, we found it important to undertake extra checks in addition to calibrating the microphone and preamplifier before the measurements. For a brief interval while the ultrasound is being recorded, the microphone is placed into a calibrator (also with a calibration traceable to a primary standard) and then removed, during data acquisition. This serves two purposes: first, it briefly masks the ultrasonic signal, proving that it is acoustic in origin and not electromagnetic pick-up; second, it provides a limited (i.e. at speech frequency as opposed to the ultrasonic frequency) back-up for the calibration, to double-check that no unreported gain has entered the data acquisition chain. It is also very important that data acquisition sessions be ‘bookended’ with a calibration tone. The microphone's range from, and orientation to, the source should be recorded, along with ambient metadata (enclosure size, occupancy, air temperature and humidity, etc., though some of these may need to be estimated).

3. Human factors

(a). Humans cannot hear ultrasound and so how can they be adversely affected?

This is perhaps the most common question asked and contains within it some complex hidden issues, as follows.

(i). The inherent assumption that an ultrasonic signal is inaudible

The assumption that, if a signal is classed as being ‘ultrasonic’, it is inaudible must be challenged by questioning what we mean when we use the label ‘ultrasonic’. The 2016 paper [1, pp. 3, 9, 10, 12--15, 17, 18, 28, 44] discusses at length the issue of how individuals may differ from the average in terms of the high-frequency limit of their hearing, and those arguments will not be repeated. Nor will this section reiterate cautioning against tacitly linking acuity with the potential for adverse effects in the absence of evidence [1, pp. 3, 9, 18, 24, 36].

Instead it will focus on the fact that every single MPL that has been set for signals ‘at 20 kHz’ [1, fig. 3], across many countries and organizations, has in fact regulated for signals down to approximately 17.8 kHz.

This remarkable statement bears re-reading. We have (inadvertently, it appears to me) already set the lower limit of the ultrasonic band at 17.8 kHz. Table 1 shows the pattern of the accepted centre, lower-limit and upper-limit frequencies of the third-octave bands that have for years been used in setting standards and guidelines in acoustics. It is unwise to forget that a given guideline, regulation or standard is framed in terms of third-octave bands: guidelines to levels ‘at 20 kHz’ actually regulate signals with frequencies as low as a tone just above 17.8 kHz. Table 1 shows how, within the 3.55–35.5 kHz range, there are two distinct regimes (the darkest grey background indicating ultrasonic frequencies, and the light grey background indicating the upper set of bands covered in standard audiometric testing, each regime of course extending to its adjacent bands not covered by this table). If these regimes are accepted, then the boxes with a white background show the borders between them.

Table 1. Nomenclature. [Table supplied as an image in the source: centre, lower-limit and upper-limit frequencies of the third-octave bands from 3.55 to 35.5 kHz, with background shading distinguishing the ultrasonic bands (darkest grey), the bands covered by standard audiometric testing (light grey), the intermediate bands (medium grey) and the border bands (white).]

While the practice of setting fixed frequency limits for the boundaries between regimes is an artifice, and does not reflect the variability between individuals in human response to acoustic waves, it serves the need of different regulatory bodies to cover the important fields without overlap. For example, the charter of the International Commission on Non-Ionizing Radiation Protection (ICNIRP) [20] has to distinguish ICNIRP's remit from bodies that cover other frequency ranges, just as it has to distinguish itself from bodies that cover protection from ionizing radiation. Such restrictions are necessary, although they create the unfortunate result that a 1% change in a tonal signal (such as the tones in figs 1 and 2 of [1]) might shift it into or out of the ultrasonic regime, if we recognize the need to use the upper and lower limits of the third-octave bands as the dividing lines between regimes. The common practice of colloquially referring to the centre frequencies as the dividing lines (e.g. such as occurs if a user refers to the lower limit of the ultrasonic range as being 20 kHz, yet the same user applies MPLs as illustrated in fig. 3 of [1]) creates a worse outcome, which is that some frequencies are not adopted as a priority by any regulatory authority.
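The 17.8 kHz figure quoted above follows from the exact base-10 midband frequencies used for fractional-octave bands (the convention of IEC 61260); the band nominally ‘at 20 kHz’ in fact spans approximately 17.8–22.4 kHz. A minimal sketch:

```python
# Exact base-10 third-octave band frequencies (IEC 61260 convention).
# `index` counts third-octave steps relative to the 1 kHz reference band.

def third_octave_band(index):
    """(lower, midband, upper) edge frequencies in Hz for band `index`."""
    f_mid = 1000.0 * 10 ** (index / 10)   # exact midband frequency
    half = 10 ** (1 / 20)                  # half-bandwidth ratio
    return f_mid / half, f_mid, f_mid * half

lo, mid, hi = third_octave_band(13)        # band nominally centred at 20 kHz
print(f"nominal 20 kHz band: {lo:.0f}-{hi:.0f} Hz (exact centre {mid:.0f} Hz)")

_, _, top8k = third_octave_band(9)         # band nominally centred at 8 kHz
print(f"top of the nominal 8 kHz band: {top8k:.0f} Hz")
```

The same arithmetic reproduces the other limits quoted in the text: the nominal 8 kHz band tops out near 8.91 kHz, and the nominal 20 kHz band near 22.4 kHz.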

To illustrate this, note that both light grey and medium grey backgrounds are used to cover the frequencies below 17.8 kHz in table 1. The light grey covers frequencies up to 8.91 kHz, the top of the third-octave band centred on 8 kHz, which is the highest band measured in standard audiometry. In a similar vein, specifications for classroom noise in the UK [21] only mandate noise levels against a calibrated signal up to the third-octave band centred on 8 kHz, because they specify use of a class 2 sound level meter which, at frequencies of 10 kHz or greater, has acceptance limits of +5 and −∞ dB (the specification in the third-octave band centred on 8 kHz being ±5 dB) [22]. These guidelines, and a range of others, focus on the frequencies perceived to be important for understanding speech in background noise, and progressively less attention is paid as one moves to higher frequencies. In common parlance, the phrase ‘up to 8 kHz’ is often used as shorthand for frequencies up to the third-octave band centred on 8 kHz, which includes frequencies up to 8.91 kHz.

The range from 8.91 to 17.8 kHz (indicated with a medium grey background in table 1) is indeed a ‘grey area’: it usually falls within the remit of the same bodies that also cover the frequencies used to understand speech (see below) but is often such a low priority that it receives scant attention, and could be in danger of becoming a lower priority to regulators. Its priority needs enhancing, because the upper regions of this range (12–17.8 kHz) are employed (by mosquito devices and any number of apps for free public download to phones and tablets [23]) to produce age-discriminating adverse reactions.

If one follows what appears to be the logical option of setting the lower limit for ultrasound at the boundary between third-octave bands, but one argues that this should be at 22.4 kHz rather than at 17.8 kHz, then this neglected ‘grey area’ is extended, and would cover most of the sources of ultrasound in air that have been measured in public to date. The logical conclusion is that the lower limit of the ultrasonic band should be 17.8 kHz.

This leaves a messy history which must be tidied up. The charter of ICNIRP states that it covers ‘acoustic fields with frequencies above 20 kHz (ultrasound)’ [20] and yet, in following the long history of setting guidelines for MPLs, it has followed the ubiquitous practice of using third-octave bands, and so has implicitly regulated down to 17.8 kHz. The public exposure signals measured in [1] were tonal, so that when using MPLs in third-octave bands, if one sets an MPL of X dB re 20 µPa ‘at 20 kHz’, one should in fact use data from the population taken at 17.8 kHz to identify X. If one instead uses tonal data at 20 kHz to find the MPL ‘at 20 kHz’, and the population requires a higher SPL (say, Y dB re 20 µPa) for a tonal signal at 20 kHz to induce adverse effects than for a tonal signal at 17.8 kHz, then, in simplified terms, the MPL ‘at 20 kHz’ will be Y−X dB too great. Given these two facts, there is no alternative but to redefine the lower limit of the ‘ultrasonic’ frequency band as 17.8 kHz. If we keep our tradition of measuring and regulating in third-octave bands (which has the drawbacks discussed in [1] for tonal signals), then a 17.8 kHz lower limit for the ultrasonic regime offers the possibility (currently unmet) of dovetailing with the neglected 8.91–17.8 kHz regime, which in turn dovetails with the terminology for the dominant frequencies used to understand speech. Cox & Moore [24] stated that ‘Average long-term RMS third-octave band speech spectra were generated for 30 male and 30 female talkers… It was concluded that a single spectrum could validly be used to represent both male and female speech in the frequency region important for hearing aid gain prescriptions: 250 Hz through 6300 Hz’, which, according to table 1, we must take care to interpret as meaning between the band limits of 224 Hz and 7.08 kHz, and not just between the centre frequencies of the third-octave bands.
This remains the dominant range although modern hearing aid usage often considers one or two third-octave bands above this, e.g. to help distinguish between sibilants.

The artifice associated with the division of MPLs into third-octave bands is nonetheless important, as demonstrated by figure 1 (for experimental details, see appendix B). Figure 1 shows an acoustic emission in a public place that extends in roughly equal measure across the third-octave bands centred at 16 and 20 kHz. In figure 1b, three vertical broken lines show the limits of the third-octave bands centred at 16 and 20 kHz, topped off by horizontal broken lines showing the level of a hypothetical white noise signal that would give the same third-octave SPL as that measured in the data (after the manner of [1]). The most significant components of the measured spectra are the two highest-amplitude peaks, which are separated by less than 10% of their frequencies, yet each falls into a different third-octave band. Clearly, for public exposure by such a source, it would be curious to say that the energy in one of these third-octave bands is the remit of one organization (say, ICNIRP for the 20 kHz band) and the energy in the other band is the remit of another organization; and it would be an encumbrance to suggest that neither organization can address the energy in the other's band, as this hinders the output of one source from being assessed as a whole for its adverse effects. However, that is a necessary consequence of the chartered remits that are required to govern such organizations. Similarly, for a device emitting a tonal signal to be able to take itself out of the remit of an organization by altering its frequency by an increment of 100 Hz is only acceptable if the other organization (the one responsible for the band into which the energy then falls) takes responsibility, and if the change in the allowed level (a change which can be as large as 35 dB for some guidelines; appendix A) is balanced by acknowledgement of the continuity that is seen in human responses (for individuals and populations).
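The band-splitting of two nearby tones can be illustrated numerically. The two peak frequencies below are assumed for illustration (chosen to be less than 10% apart, consistent with the description of figure 1, but not read from it):

```python
import math

# Index of the third-octave band (re 1 kHz, exact base-10 band edges)
# that contains a given frequency. Band `x` spans exact midband * 10**(-1/20)
# to exact midband * 10**(1/20), so the index is the rounded decade position.

def band_index(f_hz):
    """Third-octave band index (0 = the 1 kHz band) containing f_hz."""
    return math.floor(10 * math.log10(f_hz / 1000.0) + 0.5)

f1, f2 = 17_500.0, 19_000.0     # assumed peak frequencies, < 10% apart
print(band_index(f1), band_index(f2))   # 12 and 13: the 16 and 20 kHz bands
```

Although the two hypothetical tones differ by under 9%, one falls in the band nominally centred at 16 kHz and the other in the band nominally centred at 20 kHz, so each could fall under a different regulatory remit.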

In summary, therefore, the choice of 17.8 kHz as the lower-frequency limit for ultrasound does not close off the possibility of dovetailing with the longer tradition of setting MPLs for the lower frequencies (including speech frequencies), and of course keeps parity with the past guidelines for ultrasound in air listed in fig. 3 of [1]. Even if organizations must abide by frequency limits, researchers, policymakers and journals would be wise to be circumspect when a single frequency is used to define whether or not an acoustic wave is ultrasonic. This is because the ability to hear high frequencies depends on their amplitude, and on the hearing characteristics of the individual, so that a signal of a given high frequency that is inaudible might be made audible by increasing its amplitude. Because the division between sound and ultrasound is subject to such artificial decisions, and the link between hearing acuity and adverse effects is plagued by a sparsity of data, caution is recommended when this definition of ‘ultrasound’ is used to discuss maximum permissible levels.

(ii). Can humans be adversely affected by ultrasound in air?

Having addressed in the preceding section the meaning of the label ‘ultrasound’ in the title to §3a, the assertion ‘humans cannot hear ultrasound’ should be dropped, and the question reduces more properly to ‘Can humans be adversely affected by ultrasound in air?’ As we are discussing the home and public places, the higher levels used in the laboratory to produce heating [13] are not under discussion, as the levels measured in public places are significantly lower. The only intentional physical effect on humans proposed for home and public use, to the author's knowledge, is haptic feedback [25], which is in the early stages of commercialization. Even that works by coupling to the human sensory apparatus, and it is through such coupling that we should first address the issue of adverse effects. Even then, the absence of mass complaints in locations where sources were measured [1] suggests that most people are unaffected. Double-blind tests are required to find out whether a minority are affected, and, if they are, social studies are then required to determine the management of that minority. In the case of the possible generation of headaches, fatigue, nausea, tinnitus and similar adverse effects, two scenarios must be distinguished from one another: (i) the possibility of an individual being adversely affected by an acoustic wave whose frequency is too high for them to hear and (ii) the possibility of an individual being adversely affected by an acoustic wave whose frequency is too high for the majority of the population to hear, but which that individual can hear.

To the author's knowledge, there are no double-blind randomized data on scenario (i), and its existence is currently in question.

Much more is known about the ability of sound (despite its low energy and reflection from the skin; see §2a,b; [1, p. 37]) to induce a range of ‘subjective’ effects, and here the character of the sound matters: birdsong in a garden is generally perceived to be pleasant, but a tone at a constant frequency (taken from the average frequency content of the birdsong), at the same SPL as the birdsong, rapidly becomes annoying to most people. Indeed (introducing a first-hand anecdote), a co-worker tells me that he plays long tracks of birdsong in his garden to attract birds, but the repetition of this long-duration track has been noted by his wife, and annoys her! Of course, once annoyance or stress or anxiety from a stimulus has been induced, a wide range of other effects can follow, including headaches, etc.

The issue in this paper involves the exposure of humans to much higher frequencies than birdsong (the public signals measured in figs 1 and 2 of [1] were tones in the third-octave band centred at 20 kHz). The levels measured in public places to date are relatively low [1,3], and, if these are the cause of the anecdotal adverse effects, the mechanism is likely to be too complicated to relate simply to the absorbed energy. These public exposures are not particularly ‘loud’ in terms of the levels of noise many of the public tolerate at speech frequencies, and I suspect that many public sources would not be breaking the only guideline for public exposure (70 dB re 20 µPa at 20 kHz; 100 dB re 20 µPa for higher third-octave bands [26]) at the position of most ears. However, the tonal signals generated by the apps [23] mentioned in §3a(i) also do not appear to be exceeding their MPLs, and, in a sample population of mixed age range, some people will not hear the 14–18 kHz tones from these, and some will (without prompting) be too disturbed to continue their conversations or tasks.

The phenomenon of high-frequency signals (approx. 12.5–20 kHz) generating annoyance, stress, anxiety and, as a result of these, adverse effects such as headaches is one issue. The direct generation of pain is another. Speculatively, it may be that, as the frequency approaches the upper range of what an individual can hear, there is an increased propensity for pain if the signal is heard. This is because an individual will normally have, at any given frequency, a threshold dB SPL at which they can just hear a signal, and a higher dB SPL at which the same frequency causes pain. At the upper frequency range of a person's hearing, the ‘hearing threshold’ dB SPL increases rapidly with increasing frequency, and if the threshold for pain does not increase at the same rate the two will cross. If this is indeed the case, the usable dynamic range (within which a person can perceive a signal without pain) diminishes near the upper end of a person's hearing. Given that the upper limit for many people lies in the 12–18 kHz range, it has served us well that the environment has historically been relatively free of such signals. Some of these signal-free environments are now disappearing (figure 1).

Speculation of this sort cannot be adequately tested until:

— We have an agreed protocol for measuring such fields, and enough measurements to map what levels occur and where.

— We have an accumulated database of audiologically approved (even double-blind, if necessary) tests to see how high are the frequencies that individuals can hear, investigating differences between young children and adults, and testing statistical significance; and a similar database on adverse effects. Experimental design should take into account the difficulties with repeatability and reproducibility inherent in testing at these frequencies (see, for example, [1, fig. 6]).

— We have agreed frequency boundaries for bodies setting guidelines, which ensure that no frequencies are neglected.

Once this has been done, we can begin to understand what adverse effects can be reliably generated or ameliorated in individuals, and by what mechanisms. It is too early to be conclusive, and lack of data is the key issue—but the simple app demonstration outlined above shows that sounds which one person (let us call them for argument's sake the ‘older person/manager’) cannot hear can cause distress in another individual who can hear them, which has implications if this is in a situation controlled by the older person/manager.

However, such speculation is not the purpose of [1], which never attributes symptoms to an ultrasonic cause. It reviews publications that have done so, emphasizes that the evidence base is too slim to set guidelines [1, pp. 1, 6, 9, 10, 26, 27, 35, 40, appendix C] and calls for double-blind tests with appropriate attention to false positives [1, p. 41]. The 2016 paper [1] does not set out to establish either causation or correlation. Rather, it notes that we issue several new guidelines each decade, and asks why (if these guidelines are right) people complain. That leaves a number of questions:

— Are the people incorrect in identifying ultrasound as a cause of their adverse effects? The conclusion of this author is that it is impossible to know with the present data: a double-blind randomized controlled trial needs to be funded. If ultrasound were to prove to be the cause, then that would open up the question of whether this gave us options for mitigating these adverse effects: if they are produced because of a general anxiety about ultrasound, then probably not; but if we can find a threshold level for direct stimulation of adverse effects, then such levels can be considered in context by those setting guidelines.

— Is our reliance on current guidelines appropriate? The conclusion of this author is that it is not: current guidelines are based on an inadequate research base, occupational guidelines are being applied to public exposure, and the practice of copying old guidelines in order to issue new ones has been insufficiently critical.

— Are the instrumentation and procedures up to the task of assessing the effect of ultrasound on humans? The conclusion of this author is that they are probably not: there is inadequate appreciation of the complexities in transposing audio frequency procedures up to ultrasonic frequencies; and the international standards allow inaccuracy in sound level meters for ultrasonic fields (although individual manufacturers are, of course, free to apply more stringent rules to their own products, we have no data on this practice).

4. Conclusion: why spend time on this now?

It is a reasonable point that the list of symptoms (nausea, dizziness, migraine, fatigue, tinnitus and ‘pressure in the ears’) could be attributed to many possible causes, and none. It is also a reasonable point that, with scant resources and in the absence of double-blind evidence that ultrasound is causing these, we should focus our resources on health issues that are clearly a higher priority for the general population.

This being the case, an honest approach would be to stop the practice of copying old guidelines in order to issue new ones. We must then ask why bodies and nations feel the need to issue new guidelines: if the argument for that is good (and my interpretation of the evidence is that it is), then we must address public exposure, and base new guidelines on new data, which means resourcing the double-blind controlled trials that are cited (by their absence) above.

There is another reason for timeliness. Our attitudes to ultrasound in air in public places are informed by experience, but now new technologies are to be devised, supported by guidelines for which the evidence base is too slim, and where the application of occupational guidelines to public exposures is common practice. Many of these new technologies would place the source of sound close to the heads of members of the public. These include ultrasonic technology to replace current proximity sensors in mobile phones, allowing them to sense when closeness to the head permits the screen to be turned off [27]; ultrasonic spotlights for the home [28], including for use where a member of the household has suffered hearing loss [29]; and haptic feedback systems for the PC [30]. Such technology might, or might not, generate adverse effects—current guidelines and research do not allow us to make that assessment. The number of technologies exploiting 17.8 kHz and above in air is poised to proliferate. Now is the time to assess whether or not this is safe.

Acknowledgements

The author is grateful to Craig Dolder for insightful discussions, and for working with undergraduates Sarah Dennison and Michael Symmonds to provide the data of figure 1. The author is grateful for insightful input on the audiology topics from Ben Lineton, Mark Fletcher and Sian Lloyd Jones, and on signal processing from Paul White.

Appendix A

This paper argues that, if a lower-limit frequency for the definition of ultrasound is required, the most logical choice is to use the frequency at the lower limit of the third-octave band centred on 20 kHz. The problem with having such limits is that a small change in the frequency of a tone can take it from one third-octave band to another, thereby altering by a significant amount the maximum permitted level. This is particularly problematic with the lower-frequency limit of the third-octave band centred on 20 kHz, because its estimated value varies with the way it is calculated, and none of those calculations produce such a memorable, simple value as 20 kHz.

To calculate this value, one must first choose how to define an octave. Two methods have been in common use in the past, differing in whether they define the octave above frequency f as being 2f or 10^(3/10)f = 1.99526f (respectively termed the ‘base-2’ and ‘base-10’ methods).

The 1997 edition of the International Organization for Standardization standard [31] compares the third-octave ratios and states that: ‘Strictly, these two series are incompatible. However the base-two series may be accepted as a sufficient approximation to the base-ten series because of the fact that 2^(1/3) = 1.2599… is very nearly the same as 10^(1/10) = 1.2589. Practical considerations make some additional rounding desirable: Thus 500 Hz is listed instead of 501.187233… Hz.’ The latest edition (2014) of the filter standard [32] (which has been adopted as BS EN 61260-1:2014 and EN 61260-1:2014) recommends the use of base-10 for all new instruments (while accepting that the base-2 system is still in use for older instrumentation), removing the possibility of misreading the intention of older standards [33] with regard to the base used.

Once the base has been stated, in order to calculate the threshold frequency between two adjacent third-octave bands, one must decide whether to use the centre frequency of the band above the threshold, or the one below it. For example, following the base-2 approach, if the centre frequency of the band is fc, then the lower limit of the third-octave band is ideally given by fc/[(2^(1/2))^(1/3)] = fc/(2^(1/6)) (such that the base of the third-octave band centred at 20 kHz is 17817.97 Hz). However, the upper limit of the third-octave band is given by fc[(2^(1/2))^(1/3)] = fc(2^(1/6)), such that the top of the band centred on 16 kHz is 17959.39 Hz. The International Organization for Standardization [31] states that the calculated values are defined as the exact values rounded to five significant figures, and in general the standards allow some flexibility and discretion in the choice of these frequencies.
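The band-edge arithmetic above is easily checked. The following sketch (Python; not from the original paper, which contains no code) reproduces the 17817.97 Hz and 17959.39 Hz figures for the base-2 convention, and shows the slightly different edge produced by the base-10 convention:

```python
def third_octave_edges(fc, base10=False):
    """Lower and upper edges of the third-octave band centred on fc (Hz).

    base10=False uses the base-2 octave (ratio 2); base10=True uses the
    base-10 octave (ratio 10**(3/10)). The half-bandwidth factor of a
    third-octave band is the band ratio raised to the power 1/6.
    """
    g = 10 ** 0.3 if base10 else 2.0
    half = g ** (1.0 / 6.0)
    return fc / half, fc * half

lo2, _ = third_octave_edges(20_000)         # base-2 lower edge of the 20 kHz band
_, hi2 = third_octave_edges(16_000)         # base-2 upper edge of the 16 kHz band
lo10, _ = third_octave_edges(20_000, True)  # base-10 lower edge of the 20 kHz band
```

Here `lo2` comes out at about 17817.97 Hz and `hi2` at about 17959.39 Hz, matching the values in the text; the base-10 lower edge `lo10` differs from `lo2` by only a few hertz, which is why the two series are treated as approximately interchangeable.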

Some authors choose to use 17 959.39 Hz [34], whereas other sources use 17.8 kHz [35], a difference of around 0.9%. If the signal is tonal, with a quality factor exceeding 100, then uncertainties of this magnitude cannot be tolerated, because, by allowing uncertainty as to whether the emission falls in the third-octave band centred on 16 kHz or the one centred on 20 kHz, they incur large uncertainties in the maximum permissible level (MPL) (e.g. a difference between these two bands of 35 dB [36] or 25 dB [37–40]; see [1, table 1]). Hence this paper recommends using 17.8 kHz as both the lower limit of the third-octave band centred on 20 kHz and the lower limit of ultrasound.

Similarly, the centre and third-octave boundary frequencies of table 1 are chosen as acceptable compromises. However, if we are discussing tonal signals at frequencies close to the borders between third-octave bands, or if the output of a single device is sufficiently wideband (figure 1) to place some of its continuum output in the jurisdiction of a body whose charter is to consider only ultrasonic frequencies [20] and some in the jurisdiction of a body considering only non-ultrasonic frequencies, then the expeditious appeal of defined frequency boundaries must be tempered by good judgement as to whether appealing to them aids or hinders safety.

Appendix B

Figure 1 shows recordings taken using a calibrated system in a computer room (measuring 10 m × 25 m × 7 m) in a university, 1 m away from the supposed source (see below), at head height. There were eight speakers in the room, four on each side, spaced approximately 5 m apart. The room contained several long desks with PCs on them, and a wall-mounted monitor screen (which was on standby), above which was a loudspeaker (which was assumed to be the source, though this could not be verified under the permissions given to make the recording; note that in such a room there could potentially be a range of such sources). The tones could be detected throughout the room. The recordings were made using a PCB 377B02 microphone, PCB 426E01 preamplifier, PCB 480E09 ICP® sensor signal conditioner and a Fostex FR-2LE handheld field memory recorder. Calibration measurements were taken immediately before each measurement of a source using a B&K type 4231 calibrator. The microphone calibration was traceable back to primary standards for VHF/US ranges at the National Physical Laboratory (NPL) in the UK, and the Danish Fundamental Metrology A/S (DFM) in Denmark. The calibration of the microphones was checked by NPL against a reference microphone (IEC type WS3), which had been calibrated up to 200 kHz at DFM using a primary free-field calibration method. See the Data accessibility section for details of how the raw data and metadata can be downloaded.

Four different SPL values (in dB re 20 µPa) were calculated from the same time-series dataset. Calculating the SPL in a third-octave window centred around 20 kHz gives an SPL of 32 dB re 20 µPa. If the third-octave window is repositioned to be centred around the frequency that produced the highest peak in the power spectral density (here, 18.8 kHz), the SPL is 34 dB re 20 µPa. (An SPL based on the energy contained only at that peak frequency gives a level of 25 dB re 20 µPa, but this method is not recommended, as the result depends on the size of the fast Fourier transform.) If the SPL is based on the energy contained within frequency limits 1 kHz to either side of this 18.8 kHz peak, the SPL is 31 dB re 20 µPa.
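As a rough illustration of how such band-limited SPLs can be computed from a recorded pressure time series, one can integrate a window-corrected power spectral density over the chosen frequency limits. This is a generic sketch in Python/NumPy under stated assumptions (calibrated pressure in pascals, Hann window), not the analysis code used for figure 1:

```python
import numpy as np

P_REF = 20e-6  # reference pressure in Pa (20 µPa)

def band_spl(p, fs, f_lo, f_hi):
    """SPL (dB re 20 µPa) of the energy within [f_lo, f_hi] Hz.

    p is a calibrated pressure time series in Pa, fs the sample rate in Hz.
    """
    n = len(p)
    w = np.hanning(n)
    spec = np.fft.rfft(p * w)
    # One-sided power spectral density, corrected for the window's power loss
    psd = (np.abs(spec) ** 2) / (fs * np.sum(w ** 2))
    psd[1:-1] *= 2  # fold negative frequencies (excluding DC and Nyquist)
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    mean_square = np.sum(psd[band]) * (freqs[1] - freqs[0])
    return 10 * np.log10(mean_square / P_REF ** 2)

# Synthetic check: a 1 Pa amplitude tone at 18.8 kHz has an rms pressure of
# 1/sqrt(2) Pa, i.e. roughly 91 dB re 20 µPa within a band around the peak.
fs = 96_000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 18_800 * t)
```

The ±1 kHz window around the spectral peak described above corresponds to calling `band_spl(p, fs, 17_800, 19_800)`; the dependence on FFT size that the text warns about arises only if the SPL is taken from a single spectral bin rather than integrated over a band.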

Data accessibility

The data supporting this study are openly available from the University of Southampton repository at http://doi.org/10.5258/SOTON/405739.

Competing interests

The author has no competing interests.

Funding

Part contribution to T.G.L.'s time spent on this study was made by the Colt Foundation (RP015002) and by the EMPIR programme co-financed by the Participating States and from the European Union's Horizon 2020 research and innovation programme under the EARSII project (15HLT03).

References


Articles from Proceedings. Mathematical, Physical, and Engineering Sciences are provided here courtesy of The Royal Society
