Friday, 4 November 2016

Et tu Brute?

Will the new MacBook Pro models drop the optical audio output?

http://www.macrumors.com/2016/11/04/new-macbook-pro-models-lack-optical-audio-out/

Thursday, 3 November 2016

The Milk Snatcher

With the US Presidential election unfolding in a way which has the rest of the world mostly recoiling somewhere between disbelief, horror, and pity, I thought it would be a choice time to write a political column.  I don’t believe I have anything helpful to say about either Trump or Clinton, so I’ll focus my column on another political leader who inspired an almost comparable combination of adulation and loathing – Margaret Thatcher.  And just to be clear here, I don’t think that the subject of this column has any practical parallels with the present American situation other than the emotions and divisions that were aroused, although the Thatcherism ‘debate’ mostly arose after her election rather than before it.  On the other hand, a close look at the political situation of Great Britain in the ’70s and ’80s might form a useful point of contrast – maybe dramatically so – with the issues driving the political dialogue in America today.  So grab a cup of coffee and a cookie; there’s a lot to cover!

I will try to avoid applying a political slant to my piece, although it is a fair argument to suggest that any piece on a political subject can hardly avoid being slanted one way or the other.  And there are those who will always insist that a piece that is not slanted their way is by definition slanted the other way.  Nonetheless I’ll try to straddle the middle ground as fairly as possible.  I will say, though, that I lived in England throughout most of the years in question, leaving for Canada in 1988.  During that time I never once voted for Margaret Thatcher, and if I had my time again I doubt that would change much – not that I had any great affection for the alternatives.  But I mention this to give context to the fact that even though I was not politically aligned with her, I always had the greatest admiration for her as a politician, and still do to this day.

Margaret Thatcher was a politician driven by deeply held and carefully considered social principles, something we normally associate with radicals from the left driven by ideals, rather than conservatives from the right driven by self-interest.  At a time and place in which Socialist and Tory governments alike had tended to address the problems of the day by employing varying degrees of pragmatism and consensus, Margaret Thatcher stood out as someone who wanted to tear down the walls and rebuild the edifice.  But what really made Margaret Thatcher so unique, not only from the perspective of analyzing what brought her to power, but also (from hindsight) of how she wielded that power, was the fact that she made no effort to hide her agenda.  She said exactly what she wanted to do, and then set about doing exactly that.  And it is a surprise to many with only a passing view of the legend of the Iron Lady, that she accomplished most of what she did by bulldozing her agenda past a largely unconvinced – even obstructive – cabinet drawn mostly from the ranks of senior old-guard Tories.  Thatcher did not see the Prime Ministership as an end in itself, a high office whose retention became job #1.  For her it was more a necessary requirement to achieving her political goals.  She was not a person whose political positions were carefully selected to best serve her personal ambitions.  Unlike those who followed her.

By the late 1960’s, the core of British industry operated as government-owned monopolies.  Mining, Steel, Shipbuilding, Gas, Electricity, Telecommunications, Transportation, Aviation, and, from 1975, most of the automobile industry, were all nationalized.  These industries, together with the National Health Service, the Civil Service and the Education system, formed the labour-intensive core of the British economy.  By 1968 a combination of stagnating productivity and increasing competition from overseas meant that inflation had started to take hold, and this resulted in accelerating wage demands.  These in turn drove further price increases for the goods and services being produced, a runaway situation known as a wage/price spiral.  With the greater part of the economy being nationalized, the government didn’t really have separate levers with which it could seek to control prices and wages.  The two were inextricably linked.  However, at that time, most governments took the view that inflation could be countered by a combination of price and wage controls.  But with the high rate of inflation, the required wage controls proved to be incendiary, and the trade unions were strongly opposed.

The trade union movement was highly organized on a national level and was extremely powerful.  Most of the unions were deeply influenced by far left-leaning – even communist – ideologies, and were well aware that they had the power to bring down the government.  And indeed, in 1974 a strike by the Coal Miners’ union did bring down Ted Heath’s Conservative government and ushered in a more union-friendly Labour administration.  But all this accomplished was to put off today’s problems until tomorrow.  While Labour would quell the unrest by meeting the unions’ wage demands, they had no mechanism available to prevent the resultant price rises which would feed further inflation and thereby erode the wage gains.  With hindsight it was clear that something had to give, but at the time few saw it in such stark terms.  When, in 1979, the unions rose again to exert their power, it was their own Labour government that they brought to its knees – and this time there was nobody to the political left of the government to step in and bail them out.  What they got instead was Margaret Thatcher.

The other major political issue of the day concerned Defence policy, and in particular the Cold War.  At that time, the Western policy towards the Soviet Union was known as “Détente”.  In broad-brush terms, this involved ongoing negotiations between East and West to scale back their respective nuclear threat capabilities.  This ‘thawing’ of relationships was widely seen as being progressive and mutually beneficial.  Even so, Margaret Thatcher was very concerned that while the West, with its more transparent political structures, would by and large follow through with its commitments, the opaque and intransigent Soviets would lie and obfuscate to their great advantage.  Furthermore, even if both parties did in fact scale back their nuclear arsenals to any substantial degree, this would only serve to expose the West to the Soviets’ considerable superiority in terms of conventional warfare.  By contrast, within the Labour Party, not only was there enthusiastic support for détente-driven nuclear arms reduction, there was even a core movement in favour of unilateral nuclear disarmament, something which Thatcher felt was intolerably reckless.

Mrs Thatcher assumed leadership of the Conservative Party in 1975, following Ted Heath’s humiliation at the hands of the Miners’ union.  Between then and her election victory in 1979, she developed the doctrines which were loosely to become known as Thatcherism.  These were broadly as follows.

First, the notion that the only levers available to a government to control inflation were prices and incomes policies was clearly wrong and wasn’t working.  In its place would be a monetarist policy where inflation would be brought under control by strategically managing the money supply.  This would require ruthless cuts in the amount of money fed by the government to any inefficient nationalized industries that proved unable to manage their own internal cost structures.  In other words, most of them.

Second, the entire portfolio of nationalized industry would be sold to the private sector, a process which, under Thatcher, would come to be known as Privatization.  Those that were still in good enough shape would generate considerable interest in the marketplace.  However, those that did not could no longer expect to be propped up by the government.  Privatization had the additional advantage that the cashflow received from these sales could be used to plug the holes in the money supply strategy that was needed to bring down inflation.  Another core aspect of the privatization process was that Thatcher felt strongly that these national assets should be sold not to institutional investors, but to the ordinary citizen.  [And indeed, mechanisms were put in place to ensure that private small investors could jump to the front of the queue when it came to the disposal of these assets.  They would do so in droves.]  Related to this politically was a policy to allow Council House tenants (a widespread, indeed dominant, form of social housing) to buy their homes at favourable pricing from their local Councils.  [Again, they would do so in droves.]

Third, the unions had to be decisively beaten and their power severely curtailed, although this would only come to the fore during her second term beginning in 1983.  There would be three major thrusts to her strategy to accomplish this.  As a first step, she would introduce legislation to replace the status quo, under which union members who broke the law were prosecuted only as individuals, which had the effect of insulating union leaders from the consequences of their policies and actions.  Instead, unions whose members broke certain laws could have their funds sequestered by the government.  These funds were in many cases quite substantial, and this would prove to be a crucial policy tool.  [Interestingly, the Heath government of 1970-74 had provisionally introduced such a policy, but fell when the striking Miners called their bluff and the government backed down.]  Next, she would introduce legislation to require any vote on strike action to be carried out by secret ballot.  This was massively resisted by the union leadership, who knew only too well that coercion and intimidation were powerful tools which only worked when voters could not avoid disclosing which way they voted.  Finally, she would introduce legislation to outlaw secondary picketing.  This was a practice where a union could arrange for members of one employer’s workforce to picket outside a different business’s premises.  Furthermore, the unions were not averse to employing outsiders - and even thugs - to add to the numbers of picketing workers, and it often became impossible to tell who was and was not a legitimate striking worker.  Violence was often employed.

Finally, a strong stance would be taken towards the Soviet Union.  She felt firmly that the only way to ensure peace was to ensure that the Soviets genuinely respected the capabilities - both offensive and defensive - of the West.  On the other hand, the best way to actively engage with and defeat the Soviets was economically.  Thatcher was convinced that the socialist system was fundamentally weak, and would be ultimately unable to sustain the economic growth that capitalist policies would drive in the West.  In this she found a resolute ally in Ronald Reagan, who held office throughout most of her premiership.

Margaret Thatcher was elected Prime Minister in 1979.  She came to power largely because the country had lost faith in the ability of the previous Labour administration to manage serious conflicts with its own labour movement, and was willing to try something different.  Thatcher never hid what she planned to do, and was quite determined that – come what may – she would deliver on her manifesto.  But I don’t think the country as a whole fully appreciated what that would amount to.  At least not in 1979.  In fact, until the Falklands War appeared unannounced out of left field, most observers believed that she would not have survived her first re-election, such was the impact of the bitter medicine that her new monetarist economic policy prescribed straight off the bat.  And if the opposition Labour Party had not been so thoroughly derailed by the radicals on their own left wing, most notably in the area of Defence, they would have been well placed to take immediate advantage.  [Indeed, the political self-immolation of the Labour Party during her entire premiership was arguably the most significant factor in her ability to hold onto power, and I don’t do justice to that in this already lengthy column.]  But Thatcher was indeed re-elected, and the full spectrum of Thatcherism was to follow.

Thatcher did, by and large, accomplish everything she set out to do.  She privatized the profitable nationalized industries and starved the irrecoverable ones to death by turning off the spigot that fed them with regular cash handouts.  She conquered the problem of endemic inflation, and oversaw a significant economic recovery.  She emphatically defeated the trade unions, and even outlasted the Soviet Union.  In many ways her legacy is a magnificent one – Thatcherism totally reshaped the Great Britain that she left behind.  But in many other ways it is not.  Her economic policies put many millions of people out of work, with no prospects whatsoever of finding another job.  Whole communities were effectively devastated, and there was little evidence of her much-vaunted economic recovery in large parts of the country – most notably those that suffered most from the loss of traditional industries.  To a significant degree, the economic recovery was enjoyed mostly by the haves, and the have-nots didn’t get much of a look-in – a situation that some may see reflected in many aspects of today’s economy.

It is a fair question to ask whether and how such a cost can ever be justified, one to which there are widely diverging, but equally valid, viewpoints.  Thatcherism undoubtedly caused terrible misery for a huge number of citizens who rightly looked to their government to protect them from such things.  Many, many people still hate Thatcher with such a passion that when she died in 2013 they celebrated the occasion with unseemly joy.  Ding-dong, the Wicked Witch is dead.  She was that divisive.  But the virulently anti-Thatcher elements have a difficult argument to make when they imply that, given the depths to which the country had sunk in 1979, and the increasingly radical policy positions of the Labour Party in response, it would have ended up any better off under a decade of Labour administration.

I don’t have enough space here to provide a thorough treatment of my chosen subject, and there are many significant elements of the Thatcher story that I don’t even begin to touch upon.  I’m already at 2,621 words!  A half-decent treatment would result in a multi-volume book that I could stand on to clean out my gutters.  Whatever your (or my) personal stance towards Margaret Thatcher’s politics, she was an icon for a number of things that I wish were more in evidence in today’s political environment.  First, she actually stood for something, and sought political leadership so that she could make those things happen, rather than cynically cherry-picking hot-button issues as vehicles to deliver her to high office.  Second, she placed a high premium on communicating those things to the public.  She genuinely felt that if only the public fully understood what she wanted to do, and why, they would be fully behind her.  She hated the thought of misleading the electorate, or of failing to get her message across.  Third, she had the incredible strength of character, intellect, and personal toughness required to drive those policies to an effective implementation.  It really doesn’t matter if a political leader has a great vision if they don’t have the leadership ability to actually deliver it in government.  Although she exhibited some significant holes with regard to all three of those attributes, it is hard these days to find a political leader who displays unambiguous strengths in more than one of them.  Frankly, most of them seem to have none of those strengths at all.

Finally, I’ll leave you with this thought.  Don’t you think it is quite remarkable that I can write all of the above without having to frame it in the context of her gender?

Wednesday, 2 November 2016

iTunes - “Up Next”

Those of you who have used the iTunes “Up Next” feature may find yourself frustrated by its limited behaviour.

If you initiate playback of an album, playback starts at the first track and proceeds through the album.  If you click on the “Up Next” drop-down button you will see the remaining tracks in the album listed.  To add music to the “Up Next” playlist, you would right-click on it and select “Play Next”.  This works regardless of whether the selection comprises a single track, a selection of multiple tracks, an album, or even a selection of multiple albums.  The problem, though, is that this has the effect of inserting the added track(s) immediately after the currently playing track.  This may not be what you want.  It’s certainly not what I normally want.  Often, what you really want is for the tracks to be added to the end of the “Up Next” playlist.  Unfortunately, iTunes 12 does not seem to offer that option.


Except that it does.  It is just very well hidden, in the sense that I don’t find it to be intuitively obvious.  What you have to do is drag the selection of tracks and/or albums that you want to add and drop them on the rectangular area at the top of the iTunes window that contains the track progress bar.  When you mouse-over that area immediately before dropping, you will see that the whole rectangle acquires a blue border around its edge.  When you drop the tracks, they will be added to the end of the “Up Next” playlist.  I find this useful to know, although I’m pretty sure most people would have preferred having it as an option in the right-click menu.
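For the programmatically inclined, the difference between the two behaviours boils down to two different insertion points into a play queue.  Here is a minimal sketch in Python; to be clear, this is my own toy model, not actual iTunes internals, and all the names in it are invented for illustration:

```python
from collections import deque

class UpNextQueue:
    """Toy model of an 'Up Next' play queue (not actual iTunes internals)."""

    def __init__(self, tracks):
        self.queue = deque(tracks)

    def play_next(self, tracks):
        # "Play Next": insert immediately after the currently playing
        # track, i.e. at the front of the pending queue.
        for track in reversed(tracks):
            self.queue.appendleft(track)

    def add_to_end(self, tracks):
        # The hidden drag-and-drop behaviour: append to the end.
        self.queue.extend(tracks)

q = UpNextQueue(["A2", "A3"])   # remaining tracks of the current album
q.play_next(["B1", "B2"])       # jumps the queue, ahead of A2 and A3
q.add_to_end(["C1"])            # goes to the back of the queue
print(list(q.queue))            # ['B1', 'B2', 'A2', 'A3', 'C1']
```

Seen this way, it is odd that iTunes exposes the front-insertion behaviour in the context menu but hides the arguably more natural append-to-end behaviour behind a drag-and-drop gesture.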

Tuesday, 1 November 2016

iTunes 12.5.3

I have been using the new iTunes 12.5.3 for most of the day and it has been working just fine for me. BitPerfect users can install this update with confidence.

It might be too early to proclaim this from the rooftops, but the early indications are that this update may have finally gone some way to addressing the Gapless Playback problem that has plagued BitPerfect since day one. Those of you who have read this post on Gapless Playback will know that at the end of every track BitPerfect queues up the next track by pre-loading it into memory, but iTunes often provides the wrong information when it identifies what the next track is. The result is that BitPerfect queues up the wrong track, resulting in a glitch when BitPerfect discovers what has happened and switches to the right track. Well, the indications are that Apple has made some changes to iTunes in this area, and as a result it is identifying the correct track most (if not all) of the time. Stay tuned; as the picture gets clearer I will post a more detailed assessment.
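The pre-load-and-verify scheme described above can be sketched in a few lines of Python. This is purely my own illustrative model of the logic as I have described it, not BitPerfect’s actual code, and every name in it is hypothetical:

```python
class GaplessPlayer:
    """Illustrative model of pre-load-and-verify gapless playback.
    Not BitPerfect's actual implementation."""

    def __init__(self, load):
        self.load = load            # function: track id -> audio buffer
        self.preloaded_id = None
        self.preloaded_buf = None

    def preload(self, predicted_next):
        # Near the end of the current track, pre-load whatever the host
        # player reports as the next track, so the handover can be gapless.
        self.preloaded_id = predicted_next
        self.preloaded_buf = self.load(predicted_next)

    def start_next(self, actual_next):
        # At the track boundary, verify the prediction.  If it was right,
        # hand over the pre-loaded buffer with no gap.  If it was wrong,
        # the correct track must be loaded late: the audible glitch.
        if actual_next == self.preloaded_id:
            return self.preloaded_buf, True     # seamless handover
        return self.load(actual_next), False    # glitch: late load

fake_library = {"track1": b"pcm1", "track2": b"pcm2", "track3": b"pcm3"}
player = GaplessPlayer(fake_library.__getitem__)

player.preload("track2")                  # host says track2 is next
buf, seamless = player.start_next("track2")
assert seamless                           # correct prediction: gapless

player.preload("track2")                  # host misreports the next track
buf, seamless = player.start_next("track3")
assert not seamless                       # wrong prediction: glitch
```

If Apple has indeed fixed the reporting side, the second case should simply stop occurring, which matches the behaviour I have been observing.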

Friday, 28 October 2016

iTunes 12.5.2

I have been using the latest v12.5.2 update to iTunes and everything seems to be working just fine. BitPerfect users can proceed with this update with confidence.

Wednesday, 5 October 2016

XLD & Sierra

Those of you who use XLD to convert FLAC (and other audio file formats) to Apple Lossless format for use with iTunes may encounter unexpected difficulties when using it after upgrading OS X to macOS Sierra.  The solution is to download the latest version from:


Tuesday, 4 October 2016

Dies Irae from Verdi's Requiem.

For your viewing pleasure on YouTube, a fine all-Italian quartet of soloists, although none of the other three can quite match the power of the impressive soprano Erika Grimaldi. London Symphony Orchestra, conducted by Gianandrea Noseda. The mezzo was Daniela Barcellona, the tenor Francesco Meli, and the bass Michele Pertusi. Recorded live on 18 September 2016 in the Barbican Centre (where good orchestras go to die), London. Most enjoyable with the aid of a good pair of headphones (I used Audioquest Nighthawks).

Look for this to be released on the LSO Live label at some point.

Monday, 3 October 2016

de Vriend’s Beethoven Cycle

I wrote a while back about how the Mahler Symphony Cycle has more or less replaced the Beethoven Cycle as the reference standard against which modern conductors and orchestras seek to measure themselves.  One of the problems is that there is almost a saturation in the Beethoven repertoire. It's been done so many times that there is less and less room for someone to make a new statement, or showcase a personal approach. Another issue may be that Beethoven’s canvas can be considered more limited and more limiting than Mahler’s, although that is an argument that tends to find more traction outside of professional music circles than within.

On the other hand, Beethoven’s relatively more rigid and formalized approach can be used to great advantage to emphasize subtle points of interpretation, particularly in the context of a complete cycle, in much the same way that a Black & White photograph often opens a window to a greater appreciation of composition and character than its colour counterpart.  There is also the practical issue that it is possible, if one is of a mind to do so, to audition a 5-hour Beethoven cycle over the course of a leisurely afternoon, something that would be out of the question with a 13-hour Mahler cycle.

These days, for a conductor embarking upon a new recording of the Beethoven cycle, the vast legacy of Beethoven Symphony recordings that are already out there must surely loom dauntingly.  I recall reading one reviewer’s assertion that there are over 400 complete symphony cycles alone, something I find astonishing.  So, whatever your vision might be, there is a pretty good chance that somebody, somewhere, sometime, has already done something similar.  Then there are the great reference cycles to be considered - what can possibly be constructively added to what the likes of Karajan, Toscanini, Klemperer, Böhm, and so forth, have already laid down?

Over the last three or four decades we have also been treated to the HIP (“Historically Informed Performance”) movement, which seeks to pay homage to the fact that musical instruments in Beethoven’s time were constructed differently, and hence sounded different, compared to contemporary practice.   It, in effect, poses the question “What would these pieces have sounded like at the time they were originally created?”, the unspoken subtext being that whatever it was should most accurately reflect the composer’s intentions.  It is a very valid question from an academic perspective, and makes for a fiery philosophical discussion.

In any case, none of this seems to have put any sort of brake on the continuing output of recorded Beethoven cycles, which continue to emerge.   And it should be noted that some of them have been very highly praised.  Harnoncourt, Chailly, Jansons, and Krivine have all produced well-received cycles during the last decade although I haven’t actually heard them all (or, in the case of Krivine, even heard of him!).  The cycle I am going to report on here is from another conductor who, until I happened upon this cycle, also occupied a place on my ‘never-heard-of-him’ list.  Jan Willem de Vriend.  Do we call him “de Vriend” or just “Vriend”?  I don’t know, but either way I’m already getting pretty fed up with the way my spell-checker keeps changing him to “Friend”, so apologies in advance if any of those escape my final proof-reading.  Here, Vriend conducts The Netherlands Symphony Orchestra.

Carlos Kleiber’s 1975 recording of symphonies 5 and 7 with the Vienna Philharmonic stands out - and in my view stands head and shoulders above all others - as a landmark interpretation.  In many ways, it established a new school of thought regarding Beethoven interpretation, but it would take more space than I have here to do that notion justice.  Where, for example, Karajan’s superb 1962 cycle emphasizes phrasing, tonality, and an earnest sense of reverence, Kleiber’s 5th has a lighter, smiling face, and opens our eyes (ears?) to the importance of the tight rhythmic elements of the composition, something with which modern jazz musicians would feel an immediate kinship.  Vriend’s new Beethoven cycle is very much of the Kleiber school, which, I suppose, is one reason I like it so much, since Kleiber, being possessed of a famously difficult personality, did not go on to record a complete cycle.

“Precision” is the first word that comes to mind when listening to the de Vriend cycle.  It’s what in Rock Music circles we refer to as tight.  And Vriend would, surely, have been a drummer.  Every phrase and passage, every instrument, is carefully delineated, so that we get to hear deeply into the music.  The phrasing is light and airy, but tightly controlled.  Tempi give the impression of being on the brisk side, but a stopwatch shows this to be mostly illusory.  Above all else, there is a cohesion of purpose across the entire cycle, accomplished to a degree I have never previously heard.  Listening through the entire cycle in one sitting, as I have done several times, each symphony flows naturally into the next, like movements within a single vast work.  What comes across is a combination of conductor and orchestra very much on the same page - the one is very clearly buying quite enthusiastically what the other is selling.

Perhaps Vriend’s most remarkable accomplishment is the way he transforms Symphony No 1 from being a 'baby brother' symphony to a fully formed, mature work.  Once the slightly plodding introduction gives way, it really makes you sit up and take notice.  It is the closest thing you will ever come to hearing a previously undiscovered Beethoven symphony for the first time.  Has de Vriend played fast and loose with the orchestration?  There is a richness of tone and sureness of touch to the development that I haven’t previously associated with the Haydn-esque Symphonies 1 and 2.  I certainly didn’t detect any evidence of such liberties being taken with any of the other symphonies that I know much better.  Either way, as the closing bars of Symphony No 1 bray triumphantly out, your attention will surely have been captured, and you will probably find yourself staying in your listening chair as No 1 gives way to No 2, No 3, and so on.  I've lost count of the number of occasions in this cycle where, as a particular movement closes, I just want to do a fist-pump and shout "Yes!".

The famous 9th symphony was the first of the cycle that I actually heard, and it prompted me to get the rest of the cycle.  ‘Idiosyncratic’ was the word I wrote on my notepad.  It too had me sitting up from note one, although first time through it was more ‘interesting’ than gripping.  However, it served its purpose, and left me wanting to listen through again, having notched my expectations up accordingly.  The 800lb gorilla in the 9th symphony is the choice of tempi with which to conclude the final 30 seconds of the last movement.  It is quite possibly classical music’s finest and most satisfying climax.  My problem is that, for me at any rate, Karajan’s 1962 performance rules the roost, and any departure from his inspiring rendition just sounds jarring to me.  And de Vriend’s version DOES depart.  Not in a good way.  No fist-pump.  Big let-down.

Like I said, more than anything else, what de Vriend has accomplished here is the most coherent Beethoven cycle I have yet heard.  It is not perfect, though.  While his performance of the 1st Symphony may conceivably be the finest on record, none of the other symphonies will likely make anybody’s personal ‘best of’ list.  But this whole coherence thing is not to be under-rated.  It has a magnetic personality of its own.  More than with any other symphony cycle I own, listening to any one of these symphonies makes me want to listen to another, and another, and another.  As a cycle, I have always had a soft spot for Karajan's 1962 go-round, but playing it now, I find myself hearing it as a curation of nine separate symphonies, rather than as a collective statement.  What Jan Willem de Vriend has accomplished with this cycle deserves great credit.  My feeling is that, as it continues to grow on me as a cycle - and it really does continue to grow on me - it will rise considerably in stature.  I just wish the ninth didn’t wrap up so disappointingly!

One last thing to be said about this cycle.  It was recorded by Northstar Recording in Holland.  This group is making what are quite possibly the finest classical recordings in the world today.  Given that the quality of classical music recording in general is today at an extraordinarily high level across the board, these could quite possibly be the finest classical recordings ever.  Take advantage while you get the chance.  Here I listened in DSD64.  I also have some of their other recordings in their native DXD (24-bit 352.8kHz PCM) format. [What with Channel Classics also being Dutch, there must be something in the dunes and dykes over there.]   It is SUCH a bonus when great music and great recordings come together.

Saturday, 1 October 2016

Jenna Mammina - Close Your Eyes

All too often, as audiophiles, we are torn between listening to the music that we like to listen to because of its musical qualities, and music that we appreciate for its sonic qualities.  Some of our favourite albums are - let’s face it - just not that well recorded.  This is brought into even sharper focus when we listen to older recordings - I have examples going back to the 1950’s - that have been remastered recently under circumstances where sound quality is secondary to absolutely nothing.  The recordings I am talking about are all - virtually without exception - major commercially successful recordings.  In some cases (a good example here might be The Doors’ self-titled 1967 debut) they are even colossal musical landmarks.  But today, if anything, contemporary recordings seem to be getting worse, even as recording technology supposedly improves.

When it comes to new releases, the gulf between commercial and specialist recordings in terms of sound quality is widening by leaps and bounds.  The best specialist recordings are getting progressively better, while mainstream commercial recordings are getting progressively worse.  [The one ray of hope is in Classical music, where the sound quality of commercial recordings is getting to be staggeringly good pretty much across the board.]  The trouble with the specialist recording industry is that there is a bit of a disconnect between the artists and music that they offer, and the tastes and desires of the wider buying public.  Most of this is down to simple economics.  There is no money in the music industry, which seems an odd thing to say with Kanye West and Taylor Swift flying overhead in their private jets.  But it’s true.  Just as there’s no crying in baseball, there’s no money in music.

So as audiophiles we can cue up our audiophile prizes - here’s a stunning recording of Joe Blow with his guitar whispering his honestly-crafted and heart-felt folk songs; there’s a Jani Doe with her heart-on-her-sleeve piano arrangements of out-of-copyright classics (listen - you can hear the tears rolling down her cheeks); I have a copy of Burt Qwonk’s incredible virtuoso performance on the [insert name of a bizarre instrument that looks like a guitar with four necks]; then there’s all these other REALLY INTERESTING albums.  No, wait, honestly…  Once you put your cynicism to one side, some of them are REALLY GOOD.


Sorry, I was getting kind of excited there.  No, they’re not.  None of them can be mentioned in the same breath as Kind Of Blue, Ziggy Stardust, The Doors, OK Computer, Couldn’t Stand The Weather, Random Access Memories, etc, … the stuff you really want to play when the booze has run out and your audiophile buddies have all gone home.


Wait a minute.  “Random Access Memories”??  I did NOT write that!


No.  What I’d want - what we ALL want - is an album of really good music, showcasing first rate material, serious-shit musicians, and a producer who won’t settle for sound that falls short of demonstration quality.  I’d want an album that I’d enjoy listening to even if it was only on my car stereo.  I’d want an album I’d find myself humming all the time.  I’d want an album I can play to friends without having their eyes roll.  In fact, they would like it so much they would ask me what it was.  And I’d tell them - it’s Jenna Mammina’s “Close Your Eyes”.


I downloaded Close Your Eyes from Cookie Marenco’s Blue Coast Records store, “Downloads Now!” (exclamation mark included).  If you don’t know who Cookie Marenco is, you’re either no audiophile, or you’re living under a rock.  She’s been in the music business since … a long time ago.  I imagine she must have recorded “Nobody Does It Better”, because nobody does, although she would have been about 3 at the time.  Now she has her fair share of “honestly-crafted and heart-felt folk singers”, and all kidding aside, some of them are seriously, seriously good, and her catalog, while limited, is as good as any in the no-compromises audiophile market.  But I don’t think she has any four-necked guitar virtuosos (actually, if there are any, Todd Garfinkel is probably recording them).


Occasionally, when the budget is there, Cookie will show you what the extra dollars can bring, whether that is the cost of talented session musicians, or the cost of the extra studio time required to assemble a multi-tracked recording.  And you should know that the caliber of session musicians Cookie can assemble includes folks who won’t even return your phone call ... in fact their agents won't even return your phone call.  Jenna Mammina is a talented singer who inhabits the middle-of-the-road pop scene most readily identified with Norah Jones.  Jones is a superstar whereas Mammina is not.  Such are the vagaries of the music business.  Listening to Close Your Eyes, you might wonder why.

Close Your Eyes is a sort of “Best Of” album, comprising tracks taken from different recordings Cookie has made for Jenna going back about ten years.  Mostly these are recorded to 2” analog tape, although a couple were recorded directly to DSD.  For “Close Your Eyes” the original tapes were remastered to DSD256 using the very latest Pyramix equipment.  The results are quite astonishing.  Most tracks comprise Jenna on vocals, backed by Bass, Drums, Keyboards, and assorted other instruments including Guitar, Soprano Sax, and Accordion.  Most of the instruments are recorded and mixed with a light touch, the whole album having a seriously laid-back feel, but the bass - OOH, THE BASS - is just spectacular.  I don’t mean Jaco Pastorius spectacular.  I mean absolutely flawless technique, a musical approach that doesn’t intrude, an instrument of the highest caliber, and a recording technique that captures it all perfectly.  You might argue that it is mixed about 6dB too high, but then maybe you just don’t appreciate tasty bass.


From the very first track you are enveloped by the music.  A laid-back take on Steely Dan’s “Dirty Work”, it immediately sets the tone for the album.  The arrangement is slick, highlighted by a soprano sax solo, and suits Jenna’s breathy vocal to a tee.  Immediately, you are aware that you are in the presence of serious musicians.  Next comes “Lotus Blossom”, an old track from the ’40s, brilliantly evoking a Parisian boulevard with a dash of accordion.  It all comes together so well.  “You Can Close Your Eyes” is a James Taylor song, with Jenna accompanying herself on piano.  As she invites you to close your eyes, there is little else that you want to do in that moment.


Next up, and quite possibly the best cover I have heard of it, is Elvis Costello’s “Watching The Detectives”.  I’m sure Costello himself would approve.  The vocal delivery manages to evoke a hint of hip-hop drawl which gives it a contemporary vibe.  Chris Isaak’s “Wicked Game” is the only track on the album that at first seems out of place.  Just Jenna with a simply plucked guitar accompaniment, but somehow I find myself thoroughly drawn into it.  I think it is all down to how artfully the vocal is delivered, and the empty sound of the guitar just catches the emotion perfectly.  “Running To Stand Still” from U2’s Joshua Tree album is probably the most ambitious track on the album.  But arena rock does not translate so well to the intimate cafe-lounge setting, and you find yourself waiting for a slow-building climax that simply isn’t delivered.


Dr. John’s “Pictures and Paintings” is offered as a straightforward jazz standard with piano trio, but segues into my favourite song on the album, Tom Waits’ “I Hope That I Don’t Fall In Love With You”.  Just Jenna accompanied by piano, a great song, sung with great feeling.  It’s odd that, on an album notable for its instrumental mixes, I should pick out the simplest one, but such is life.  “Don’t Let Me Be Lonely Tonight” is another James Taylor offering presented as a soulful jazz number.  Once again we have that delicious bass playing, the laid-back drum licks, and the keyboards doing their classic Hammond thing.  What’s not to love?  The album closes with “When I’m Called Home”, an Abbey Lincoln song, taken from an album Jenna did of Abbey Lincoln covers.  Abbey Lincoln? You might well ask.


So this is one seriously good album.  The songs are of a uniformly high standard, and quite frankly the album is easily as good as anything from the Norah Joneses of this world.  I have played it hard and often.  Even my wife nodded appreciatively, which doesn’t happen all that often.  In fact she asked me to turn it up, which NEVER happens.  And despite having all those strikes against it, it stands as an absolute reference when it comes to recording quality.  I love it.

Friday, 30 September 2016

AirPlay with Sierra.

If you read my previous post “AirPlay with El Capitan” you will know that Apple had introduced a significant change to the way the AirPlay subsystem was integrated with the rest of the audio subsystem. This in turn changed the way in which BitPerfect Users had to set up their systems in order to play through an AirPlay-connected audio device. Unfortunately, the way this was done, which was entirely laudable in terms of having been driven by the right ideas, was less successful in the execution. This has caused a lot of gnashing of BitPerfect Users’ teeth.

Well, with the next version of OS X (now re-branded ‘macOS’), called Sierra, we have what looks like a small but worthwhile update. The process of playing BitPerfect through AirPlay still starts by opening macOS’s ‘System Sounds’ (part of the ‘System Preferences’ suite of tools). Each audio device accessible via AirPlay still appears there as an independent audio output device of type ‘AirPlay’. But these devices only appear when they are powered on and detected by the AirPlay subsystem - in other words, if macOS can’t detect them, it won’t offer you the option of selecting them. If you want BitPerfect to play through a specific AirPlay device, you have to start by selecting it here.

As before, third party Apps such as BitPerfect are constrained to having to access the AirPlay Subsystem via its Standard Audio Interface. And you would think that if an AirPlay device such as “Joe’s AppleTV” was live on the system, and listed as an available device under ‘System Sounds’, then it would also be live and available under AirPlay’s Standard Audio Interface, which was the case prior to El Capitan. But no. If we wish to see which Standard Audio Interfaces are live and available we must open Audio MIDI Setup (normally located within the Applications - Utilities folder). All of the available Standard Audio Interfaces are listed in the left-hand panel. You can select one, and then configure it in the right-hand panel (although no such configuration is required for BitPerfect). However, most of the time, you won’t see an AirPlay device listed there, even though we can clearly see one or more AirPlay devices listed as being available and accessible under ‘System Sounds’. If a device is not listed in Audio MIDI Setup, then it will not be made available to BitPerfect, and it will not appear as a choice in BitPerfect’s list of Audio Output Devices, regardless of whether it appears in 'System Sounds'. So far, all this is the same as with El Capitan.

The procedure to get around this is relatively simple, but for the love of God I cannot fathom out why it is required at all. You still have to go into System Sounds and select the specific AirPlay device that you want BitPerfect to play to. Once you have done that - and this may take a few moments - the selected AirPlay device will magically appear in Audio MIDI Setup, and shortly after that will appear as one of BitPerfect’s available audio output devices. What has changed is that under El Capitan only “AirPlay” would appear as a Standard Audio Interface. Now, under Sierra, the specific AirPlay device you selected in System Sounds instead appears as its own personal Standard Audio Interface. This is - in my view at least - a good thing from the perspective of stability.

Under El Capitan I had been using Audio MIDI Setup as a regular part of the AirPlay setup process in order to confirm that the Standard Audio Interface had been created, because sometimes it just stubbornly refused to appear (for reasons that were never clear). Under Sierra, though, that aspect seems to be quite reliable now. So really, there is not much need any more to go into Audio MIDI Setup at all, other than for information or diagnostic purposes.

Finally, as before, you still need to go into iTunes and in its “Choose which speaker…” selector (the button with a ‘transmission beacon’ icon to the right of the iTunes volume slider) you need to be sure to choose “Computer” and NOT the AirPlay device that you actually want BitPerfect to play to.

What this procedure tells us is that macOS is only creating a Standard Audio Interface for an individual AirPlay device when it is selected in System Sounds. What I would like to see is for the Interface to be created automatically as soon as a device is detected and becomes live and available. There would then be separate Standard Audio Interfaces for each available AirPlay device. This would be a far better paradigm and wouldn’t require BitPerfect users to hop in and out of System Sounds.

There is one limitation of the Sierra system, though. Suppose you have multiple AirPlay devices, such as an AirPort Express and an AppleTV. You want to play BitPerfect to the AirPort Express, while at the same time streaming something else to the AppleTV. So you go into System Sounds and select the AirPort Express. This creates a Standard Audio Interface for the AirPort Express which BitPerfect can use. So, with BitPerfect playing nicely through the AirPort Express, suppose you go back to your Mac and in System Sounds you now select the AppleTV device. Unfortunately, it seems Sierra either cannot or simply will not allow multiple Standard Audio Interfaces to exist at the same time, and so it immediately closes the one for the AirPort Express, and opens a new one for the AppleTV. In effect, BitPerfect’s designated Audio Output Device suddenly disappears, and it simply stops playing.

Even though these shenanigans are a little irritating, I still feel that they are evidence that Apple’s AirPlay implementation is moving in the right direction. Quite clearly, the requirements of third party users such as BitPerfect are very much an afterthought, and we recognize that we are always going to have to feed off the scraps from the Master’s table. Nonetheless, my feeling is that this iteration of the BitPerfect/AirPlay experience is a step in the right direction. It may not quite knock the Yosemite implementation off its perch, but it is in all likelihood a solid #2. In fact if you prefer Sierra’s quirks to Yosemite’s, you might even rate it #1.

macOS Sierra :)

I now have a workaround in place for the Console Log issue I mentioned last week, plus I have been using AirPlay on Sierra with success, so I will post an update shortly on how to make that work.

In short, BitPerfect users can now feel comfortable upgrading to macOS Sierra if they wish to do so.

Wednesday, 21 September 2016

macOS Sierra

I have been using BitPerfect 3.1.1 with the latest version of OS X, “macOS Sierra”.  It comes with iTunes 12.5.1.21 as part of the installation.

The results are a bit of a mixed bag.  Functionally, BitPerfect seems to be working fine.  I haven’t come across any unusual behaviour yet.  However, I notice that the Console App has changed significantly with Sierra, and it appears that it is not displaying most of BitPerfect’s diagnostic messages.  This is going to make diagnosing problems very difficult.  For that reason, I recommend that BitPerfect users hold off from making this update for the time being, unless they have other, more pressing needs to do so.  We will dig deeper into this issue and report our findings in due course.

I also need to spend some time trying out the AirPlay subsystem under Sierra, and I haven’t been able to get around to that yet, so AirPlay users might want to avoid the update for the time being unless they are feeling particularly adventurous.  AirPlay users who are still on Yosemite in particular should stay where they are.  Yosemite is notably stable with AirPlay.  El Capitan is not, and Sierra might not be any better.

Wednesday, 14 September 2016

iTunes 12.5.1.21

I have been using the latest 12.5.1.21 iTunes update for 24 hours now without encountering any usability problems.  BitPerfect users should feel confident installing this update.

Tuesday, 6 September 2016

Announcing BitPerfect v3.1.1

Today we announce the release of v3.1.1 of BitPerfect.
This is primarily a maintenance release, addressing an issue where BitPerfect could crash under certain conditions with some specific DACs.

As usual, BitPerfect v3.1.1 is a free upgrade to all existing BitPerfect users.

Monday, 8 August 2016

iTunes 12.4.3.1

I have been using the latest iTunes update for a few days now and have not encountered any problems with it. BitPerfect users can download it with confidence.

Thursday, 28 July 2016

Take The Gauss Challenge!

All of the really clever stuff behind digital audio is just mathematics, pure and simple.  I’ve said before - and will say again - that mathematics is the purest of the pure sciences, and for that reason is often separated out from the rest of science.  The principal differentiator is that in the empirical sciences the ultimate standard of correctness is measured in terms of observed evidence, whereas in mathematics it lies in absolute proof.  Also - somewhat more esoteric in concept - scientific postulates usually start with a simple everyday observation, and proceed from there to both break it down into its component constructs and expand it into more elaborate implications.  By comparison, mathematics begins at ground zero.  It starts by defining the quantities ZERO and ONE, plus the concept of ADDITION, and builds upwards from there.
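That bootstrap from almost nothing can even be sketched in a few lines of code - a toy Peano-style construction, where the names `ZERO`, `succ`, `add`, and `to_int` are mine, purely for illustration:

```python
# A toy Peano-style construction: numbers built from ZERO and a successor
# function, with ADDITION defined recursively on top of them.
ZERO = ()

def succ(n):
    """The 'next' number: wrap the previous one."""
    return (n,)

ONE = succ(ZERO)

def add(a, b):
    """a + b, defined only in terms of ZERO and succ."""
    return a if b == ZERO else succ(add(a, b[0]))

def to_int(n):
    """Translate back to a familiar integer, for display only."""
    return 0 if n == ZERO else 1 + to_int(n[0])

print(to_int(add(succ(ONE), ONE)))  # 2 + 1 = 3
```

Everything else - subtraction, multiplication, the whole edifice - gets built upwards from definitions no richer than these.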

Mathematics, like Physics, also has the capacity to lead us down some mind-bending rabbit-holes.  One such example is Kurt Gödel’s famous Incompleteness Theorem.  Among other things, this theorem [a theorem is a statement which has been mathematically proven to be correct - as opposed to a conjecture, which is an unproven postulate] states that there are some things that are mathematically correct, but which are fundamentally incapable of being proven.  Furthermore, he showed that in general it is impossible to prove that a given unprovable statement is indeed unprovable.  But we don’t need to go there ….

I should observe that while I talk about mathematics in such glowing terms, I myself am a physicist.  I was equally qualified to study physics or mathematics at university (music too, for that matter), and wisely chose physics.  While at university I could keep up with my two mathematician friends for most of my first year, but by the second year it became obvious that what they were studying went way over my head.  Waaaaay over.  Good call.

One of the greatest mathematicians ever was a German named Carl Friedrich Gauss.  A famous true story is told of him, one against which you might be interested to try to measure yourself.  There is a nice, simple formula that gives you the sum of all integer numbers (1 + 2 + 3 + 4 + 5 +, … etc) up to any arbitrary number N.  That formula is N*(N+1)/2.  My challenge to you is to prove it.

The formula is popularly attributed to Gauss, although the story is usually told somewhat erroneously.  In 1787, when Gauss was a 10-year-old schoolboy, his class was asked by their teacher to add up all the whole numbers from 1 to 100.  This gave the teacher some spare time to take a nap.  However, after ten minutes Gauss woke him up with the correct answer - 5,050.  He couldn’t be bothered to do all the adding up, and so had derived the above formula on the spot.  I don’t know about you, but I had not even been introduced to algebra at age 10!
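I won’t spoil the challenge by giving the proof away, but the formula itself is easy to sanity-check numerically - a quick sketch (the function name is mine):

```python
# Sanity-check the closed-form sum N*(N+1)/2 against brute-force addition.
def gauss_sum(n: int) -> int:
    """Closed-form sum of 1 + 2 + ... + n."""
    return n * (n + 1) // 2

for n in (1, 2, 10, 100, 12345):
    assert gauss_sum(n) == sum(range(1, n + 1))

# Young Gauss's answer for N = 100:
print(gauss_sum(100))  # 5050
```

Of course, no amount of spot-checking constitutes a proof - which is rather the point of the challenge.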

In actual historical fact, the teacher - one Herr Buttner - was not looking to take a nap, but wanted to prepare his pupils’ mindset so that when he subsequently presented them with the formula they would better appreciate its worth and value.  A good teacher, I think!  Astonished by young Gauss’s precocious genius, he became his personal champion and was largely responsible for much of the young man’s early advancement.

The proof itself is in fact a deceptively simple one, which almost anyone could understand, and easily teach to others.

So I won’t bother to repeat it here :)

Wednesday, 20 July 2016

OS X 10.11.6 & iTunes 12.4.2.4

I have been using the latest combination of OS X 10.11.6 and iTunes 12.4.2.4 for a day or so now, and have encountered no problems.  BitPerfect users ought to be able to apply these two updates with confidence.

Tuesday, 12 July 2016

Your Competitors are your Best Friends

I want to tell you a true story, but I’m going to change everything that identifies the actual participants.  It’s about two companies who saw each other as their biggest competitors.  There was no love lost between them, not to mention countless lawsuits and counter-suits.  These were large and professional publicly listed companies, one with hundreds and the other with thousands of employees.  Professionally managed by people with more PhDs and MBAs than you can shake a stick at.  But still …

The global market for ‘Widgets’ is $20-50 billion per year and growing, and is served by several of the world’s largest international corporations.  Widgets are manufactured in colossal volumes on highly automated production lines which run 24/7.  The manufacturing process is a multi-stage line, with each stage comprising its own dedicated machine tool.  One of these stages is the ‘Widget Tuning’ stage where the performance of each individual ‘Widget’ is precisely ’tuned’ to very tight specifications.

Widget tuning machines are evaluated against three broad parameters - (i) the precision with which they can tune a Widget, (ii) the net throughput with which they can tune Widgets, and (iii) their overall cost of ownership.  The preferred technology for Widget tuning was known as OWT (Optical Widget Tuning), and there were two manufacturers which supplied OWT machines to the global Widget industry - Opticorp, and General Optics International (GOI).  The total worldwide market for OWT machines varied between $200 and $500 Million annually, and was typically shared 60:40 between Opticorp and GOI, with both vendors generating excellent margins on these product lines.

Widget manufacturers appreciated having two vendors for these expensive manufacturing tools, which sell at up to a million dollars apiece and require comprehensive and reliable support and maintenance.  Although the market is quite a large one, it isn’t really large enough to support much more than two vendors considering that the barriers to entry are very substantial.  What would typically happen was that, as Opticorp began to stretch their lead in market share, GOI would focus on new technology resulting in improved Widget tuning performance, and would gradually claw back market share.  Being the market leader, Opticorp would be more reluctant to risk investing in new technology, but would eventually be obliged to do so to avoid the risk of losing their dominant position.  And if neither vendor produced a compellingly differentiated product, the Widget manufacturers would start to pressure them on price.

Overall this ongoing competitive situation was good for the Widget manufacturers.  The Widget tuning process was an effective one for them, and they had two highly reliable vendors that they could play off one against the other to keep them both sharp.  But Optical Widget Tuning was not their only option.  A new technology called Electrical Widget Tuning (EWT) was waiting in the wings.  It had the potential to be at least as effective as OWT, but required other changes to be made to the overall design of the Widgets to accommodate it.  But so long as OWT remained viable there was no pressing need to abandon it.

So long as OWT remained viable …

Out of the blue, the CEO of GOI made a monumental strategic decision (the specifics of which are not germane to this discussion), and which put GOI deeply into debt.  No sooner was this announced than a global financial recession suddenly set in.  Within months it became evident that GOI was in desperate financial straits, and talk of possible bankruptcy was in the air.  And indeed, the following summer GOI went under.  Attempts to sell off their OWT business came to nought as at first they demanded too much for it, and later were forced to lay off so many of the key employees that there was no longer a critical mass that could form a viable acquisition.  There was great joy in the corridors of Opticorp as their bitter rival and only OWT competitor bit the dust and left the lucrative OWT market entirely to them.

In the boardrooms of the Widget manufacturers, though, things looked rather different.  With Opticorp now their sole supplier, they would have nothing with which to push back against price increases, and there would be limited incentive for Opticorp to invest in advancing their technology rather than pocketing cash.  Instead, they decided that they had no choice but to go all out to bring Electrical Widget Tuning to maturity as their preferred Widget tuning technology.

Opticorp insisted that they didn’t see this coming.  In internal meetings their product managers gave presentation after presentation showing how OWT met all Widget manufacturing requirements, how EWT would offer no advantage, and how a switch to EWT would be disruptive across the board.  In short, gentlemen, EWT was a load of hot air and would never happen.  But happen it did, despite Opticorp’s technical analysis being pretty well on the money.  What it utterly failed to take into account were the strategic perspectives.  Within two years Opticorp’s OWT sales had dropped by 75%, and within another 12 months they had evaporated completely.  Shortly thereafter, Opticorp’s CEO was shown the door.

The point of this post is not to show how Opticorp could or should have responded differently.  That is actually far from simple, and would form a much more elaborate case study.  Instead it is a reflection of how events which would lead to the demise of a highly profitable $250M business within three years were greeted by whoops of celebration, and not a hint of trepidation over how it might end up playing out.  And how a proper assessment of the situation failed to be undertaken through hubris and conceit.

As I said, this is a true story, I hope accurately portrayed, and it teaches a valuable business lesson.  It really doesn’t matter what size your business is - you need competition, and you have to understand why.  Competitors keep you honest.  Without competition for your business there is no incentive for you to reduce costs, increase efficiencies, and improve service.  There is no incentive for you to invest in making your product better.  And ultimately, there is no incentive for your customers to remain interested in it.


For better or for worse, consumers at all levels - whether consumers of shampoo or OWT manufacturing tools - want to have choices.  Sometimes it is because people just feel more comfortable when they have choices - but sometimes it is because a well-considered strategy demands alternatives.  Where there is no choice there is stagnation, such as is typically the case with things like public transport.  Having competition is what keeps any business fresh.  Your competitors may want to put you out of business, and you them, but in reality they are your Best Friend.  Embrace it!

Wednesday, 6 July 2016

“Tell Them We’ve Already Got One!”

Let me describe something I was very fortunate to be able to try one time, but which very few of us will get the opportunity to experience.  I am talking of entering an anechoic chamber.

An anechoic chamber is a room specially designed for the purpose of conducting carefully calibrated acoustic measurements.  In normal rooms, any sound generated anywhere within the room will travel rapidly to all other parts of the room by bouncing off the walls (including the ceilings and floors).  Therefore, if we attempt to measure the sound in a room we very quickly find that it is impossible to distinguish between sounds which originate directly from the source and those which have travelled via multiple bounces off the room boundaries.  This is important, because these multiple signal paths cause the signal to be reinforced, cancelled out, or anything in between, thereby rendering many forms of measurement entirely useless.
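To see why those multiple signal paths wreak such havoc, consider a toy model of one direct sound combined with a single equal-strength reflection arriving slightly later. The function and numbers here are purely illustrative, not a real room measurement:

```python
import math

def level_at(freq_hz, delay_s):
    """Relative amplitude when a direct sound combines with one equal-strength
    reflection arriving delay_s later (a toy comb-filter model)."""
    phase = 2 * math.pi * freq_hz * delay_s
    # Magnitude of the sum of two unit phasors separated by `phase`.
    return abs(2 * math.cos(phase / 2))

delay = 0.001  # reflected path is 1 ms longer (roughly 34 cm of extra travel)
print(round(level_at(1000, delay), 3))  # full-cycle offset: reinforced to 2.0
print(round(level_at(500, delay), 3))   # half-cycle offset: cancelled to 0.0
```

Even in this crude single-reflection model, the measured level at the microphone swings between double and zero depending purely on frequency - and a real room has thousands of such paths, which is exactly why meaningful measurements demand an anechoic environment.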

The solution is to create a room in which sound waves, when they hit one of the walls (or floors, or ceilings), are instantly and totally absorbed, with none of them reflected back into the room.  Such a room generates no echoes, and is therefore termed ‘anechoic’.  These are particularly useful for designing things like microphones and loudspeakers, and enable detailed and accurate measurements to be performed in a way that would be virtually impossible otherwise.  You’d think that every loudspeaker manufacturer would have one, but they don’t.  They all wish they did, but most of them can’t afford such a preposterously expensive luxury.  The best they can hope to do is rent time in somebody else’s (most likely in a university research centre, or some other such institution).

What is particularly instructive is to get somebody to step into an anechoic chamber for the first time, and ask them to sing a song or play an acoustic instrument.  You can bet your mortgage that they will stop singing or playing within less than a second.  What they hear are sounds so alien to them that they can’t help but stop abruptly.  It only works first time, because once you know what is going to happen you aren’t so taken aback.

The sound of a voice or an instrument in an anechoic chamber is so utterly unlike anything you have ever heard before that it just stops you dead in your tracks.  Same goes for a loudspeaker playing in an anechoic chamber.  It is a totally dry sound, devoid of all character, expression, depth, or life.  After stopping abruptly, the second thing you will do is lick your lips, because the sound is so dry, so arid, so utterly parched, that it seems to draw the moisture from every pore in your body.  It is a profoundly unnatural environment.

And yet, the sound of a voice or an instrument in an anechoic chamber is the most accurate representation of that sound.  That is precisely what that voice or instrument actually sounds like.  Only the sounds travelling directly from the source to the listener will reach the listener.  All other sounds will be totally absorbed as soon as they hit any of the walls.  This is as accurate as it gets.

Outside of the anechoic chamber, the sound you hear is the sound of that instrument playing in a given room.  The difference between what you heard inside the chamber and outside is the contribution of the room to the sound.  That contribution is colossal.  Indeed it is fundamental to how we perceive the sound.  The magnitude of the difference serves to ram home the point that everything we hear every day is the product of the various sound sources modified by the environments in which we both exist.  The same orchestra, for example, playing in two different concert halls often sounds like two different orchestras.

This is important to grasp, because it serves to illustrate the futility of one of the holy grails of the audio industry - or more precisely of many of the critics who presume to influence the industry as to what it should be doing.  This particular sacrament requires that the goal of a high-end audio system is to recreate the sound of the original instrument.  But the sound of the original instrument is the desiccated sound from the anechoic chamber, and that is not what people want to hear.  What they want to hear is the sound of the original instrument played in the original location, but they want to replay it in a different location.

That presents us with two separate philosophical problems.  First, how are we to know what the original performance actually did sound like in the original location?  Unless we were there at the time, we can’t.  Second, our loudspeakers are located in their own separate and different acoustic environment.  If ‘simply’ reproducing the musical instruments themselves in our own listening environment is challenging enough, it is a different challenge entirely to reproduce the audio environment of one room inside an entirely different room.  Just consider recording a violin in an anechoic chamber, and then trying to reproduce the sound of that anechoic chamber in your own listening room.  Take it from me, it is not possible to come even close.

So what is it we actually want from our systems?  I believe we just want to be convinced.  We listen to something and ask ourselves how convinced we are by the illusion that our system has created.  The best sound systems do recreate a good illusion of a complete acoustic space.  However, for most - if not all - of our recordings, we have no idea whether that space is the same as the one in which the recording was made.  But if we can be convinced by what we hear - transported into a listening experience - surely that is all we can realistically ask.  I have long ago stopped asking myself if the sound I was getting was ‘correct’.  There is no ‘correct’.  Nowadays I ask only whether - and to what degree - I am convinced.

I think this goes some way to explaining the pangs that most of us face as we periodically upgrade our sound systems.  Critics charge that we are never satisfied, so why bother in the first place.  And there is a lot of truth to that.  We buy a system, express our happiness with it, listen to it for a few years, and then upgrade it.  Rinse and repeat.  With each new system, not only are we satisfied that it is better than the old system, but suddenly the old system - to which we were formerly devoted - is now somehow inadequate and no longer lovable (other than through the distorted lens of nostalgia).  We cannot go backwards down the audio path and still retain the same sense of joy that powered us on the way up.  All this, of course, assumes that the upgrade path was always followed wisely and judiciously.

What is happening, I suggest, is that on each step up the upgrade chain we are re-setting the bar against which our system’s ability to ‘convince’ us is measured.  The whole point of a significant upgrade is to significantly enhance your system’s ability to convince you that it is better at recreating the original soundscape.  If it can pull that off, it will permanently re-set your bar.  It now takes an even greater level of fidelity to improve upon the trick of convincing us.  Once you’ve heard something, you can’t ‘unhear’ it.

When I was a young man just setting out in this hobby, most critical evaluation of audio systems - particularly loudspeakers - was focussed on the degree to which the sound took on identifiable tonal colourations.  And back in those days, colourations were indeed a dominant factor.  One product which I recall having a particular impact in the marketplace was the KEF R104aB loudspeaker, which was noted for having particularly low levels of colouration.  I used a pair once for a few weeks and confirmed that yes, they did indeed have a particularly uncoloured sound.  But at the end of my time with them I realized that while they were undeniably uncoloured, they didn’t float my boat any the more for it.

I wasn’t yet smart enough for the penny to drop, but shortly thereafter it did.  I have long since realized that for my own particular musical enjoyment, tonal colourations are not a major limiting factor.  I am more than willing to put up with them if they are the price I have to pay for the types of performance which do float my boat: imaging stability and soundstaging, dynamic range (both micro and macro), and what is dismissively called PRAT (Pace, Rhythm And Timing).  With all those requirements satisfied, I am willing to put up with tonal colourations that other people might find to be cause for criticism.  Having said that, though, major advances have been made in the elimination of tonal colourations since the good old ’70s.

So that’s where I plant my stake in the ground.  As far as tonality is concerned there are no absolutes.  Tonal colour is only partially provided by the instrument itself; it is dominated by the acoustics of the room.  So when it comes to judging sound reproduction there can be no such thing as Harry Pearson’s much-vaunted “Absolute Sound”.  There are no absolute points of reference other than an anechoic chamber, and nobody would want to listen to anything that sounded like that.  The most important milestone of any audiophile journey is when you finally understand what it is that YOU want out of your system - whatever that is - and take comfort in the knowledge that that is way more important than what some other audiophile wants out of his.

BTW, have any of you figured out the reference in this post’s title? :)

Wednesday, 22 June 2016

Breathtaking

Thanks to BitPerfect user Robin Wukits for sending me this link of a beautiful duet between a Soprano and a Cornetto. No, I didn't know what a Cornetto was either :)


https://www.youtube.com/watch?v=pnqvjNSvwSU&ab_channel=cornettissimo

Tuesday, 14 June 2016

Tube Rolling????

When I took delivery of my new PS Audio BHK300 Signature monoblock amplifiers, together with a PS Audio P10 Power Regenerator, it presented me with an immediate practical problem.  Those three units replaced the single chassis of my Classé CA-2300 power amplifier, which was installed on the bottom shelf of my “SolidSteel” equipment rack.  All three PS Audio units share the same chassis design, which makes each of them comparable in size (and weight) to the Classé unit.  So there was room for only one of the new trio in the SolidSteel rack.  It was determined that the P10 would go in the rack, while the BHK300s - which could maybe profit from being located nearer the loudspeakers - would have to find a place to sit on the floor.

This new arrangement immediately raised a question in my mind.  While the SolidSteel rack attempted to provide a solid mechanical ground for the P10 via its three spiked feet sitting in conical cups, sitting the BHK300s directly on the suspended wooden floor did not seem so smart.  Nonetheless that would have to suffice while I thought about how I could provide a better solution.

Forced to consider the situation from theoretical as well as practical considerations, I immediately wondered about the relative benefits of a mechanical ground vs an isolation platform.  The idea of a solid mechanical ground is that any vibrations in the product will be efficiently coupled out - just like any residual electrical signals in the equipment chassis will be efficiently coupled out to electrical ground via the ground wire in its power cord.  Such an approach - whether electrical or mechanical - requires that the ‘ground’ we are coupling to be a true ground.  Now, if my house were built directly on granite bedrock, which I exposed to form the floor of my listening room, a mechanical grounding approach could be ideal.  But my house isn’t like that.  I have a suspended wooden floor to which my speakers are coupled via their own ‘mechanical ground’ connection.  The speakers are therefore in all likelihood transmitting a substantial proportion of any mechanical energy generated within their cabinets into the wooden floor and energizing it.  The sound waves propagating back and forth around the room also energize the suspended floor, as do people walking about in the house.

Therefore, if I sit my amplifiers on a support table design which provides a solid ‘mechanical ground’ coupling to the floor, it seems that all this will do is potentially couple vibration from the floor up into the chassis of the amplifier just as efficiently as it would in the other direction.  If there were more vibrations in the amplifier than in the floor, then it might be ideal.  But I don’t think that is likely to be the case here.  So a ‘mechanical ground’ approach might actually cause more problems than it would solve.

The alternative, if the thinking is that the floor represents a source of vibrations from which the chassis of the BHK300s are to be protected, is an isolation system.  This is simply a mechanical system between the BHK300s and the floor which absorbs any incoming vibrations.  Those of you who still own a turntable will know exactly what I’m talking about.  Any turntable worth its salt will contain its own built-in isolation system, although all but the most extreme designs will still benefit from sitting on some sort of external isolation table.

The core of an isolation system is a damped spring.  If you sit something heavy on a theoretically perfect spring, and tap down on it to provoke a bounce, then it will continue to bounce away forever, at a frequency determined by the stiffness of the spring and the weight of the object.  If you introduce any damping into the system this will cause the bouncing to die down.  The greater the amount of damping the more rapidly it will die down.
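The relationship above can be made concrete with the standard mass-on-a-spring formula: the undamped natural frequency is f₀ = (1/2π)√(k/m), where k is the spring stiffness and m is the supported mass.  Here is a minimal sketch; the 1500 N/m stiffness figure is purely a hypothetical value chosen to illustrate the arithmetic, not a property of any real product:

```python
import math

def natural_frequency_hz(stiffness_n_per_m: float, mass_kg: float) -> float:
    """Undamped natural frequency of a mass on a spring:
    f0 = (1 / 2*pi) * sqrt(k / m)."""
    return math.sqrt(stiffness_n_per_m / mass_kg) / (2 * math.pi)

# An 83 lb (~37.6 kg) amplifier on a hypothetical 1500 N/m spring
# bounces at roughly 1 Hz:
print(round(natural_frequency_hz(1500.0, 37.6), 2))  # → 1.01
```

A stiffer spring or a lighter load raises the bounce frequency, which is exactly the "stiffness of the spring and the weight of the object" dependence described above.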

Consider a car driving along a rutted road.  The car is a heavy object sitting on a damped spring (i.e. its suspension).  A car driving along a rutted road is very similar to the same car standing still on a vibrating road.  The purpose of the car’s suspension can be thought of as trying to isolate the occupants of the vehicle from the vibrations of the road.  In truth a car’s suspension designer has a lot more on his mind than your comfort, but let’s ignore that (although if you imagine a 1970’s Cadillac you might not be too far off the mark).  The mass-on-a-spring will typically have a resonant frequency.  If you push down on the fender of your car and suddenly release it, that is the frequency at which it will bounce up and down.  The suspension’s damping determines how quickly the bouncing dies out.  A modern car is typically very well damped - your 1970’s Caddy less so.

An isolation system designed this way tends to pass frequencies lower than the resonance (natural bouncing) frequency, and absorb the higher frequencies.  That way, your old Caddy can drive comfortably along even a Montreal highway, smoothing over all but the biggest bumps, which are transmitted into the cabin.  Damping is necessary for two reasons.  First, because damping is what actually absorbs the vibrations fed into the system, turning them into heat which is conducted away.  Second, because the amount of damping determines how well the isolator attenuates the frequencies it is designed not to pass.
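This pass-low/absorb-high behaviour is captured by the textbook base-excitation transmissibility curve, T = √((1+(2ζr)²)/((1−r²)²+(2ζr)²)) with r = f/f₀ and ζ the damping ratio.  A small sketch; the ζ = 0.05 value is an illustrative assumption, not a measurement:

```python
import math

def transmissibility(f_hz: float, f0_hz: float, zeta: float) -> float:
    """Amplitude ratio of transmitted to incoming vibration for a
    damped mass-on-spring isolator driven through its base."""
    r = f_hz / f0_hz
    num = 1 + (2 * zeta * r) ** 2
    den = (1 - r ** 2) ** 2 + (2 * zeta * r) ** 2
    return math.sqrt(num / den)

# A 1 Hz isolator with light damping (zeta = 0.05):
for f in (0.5, 1.0, 5.0, 20.0):
    print(f"{f:5.1f} Hz -> T = {transmissibility(f, 1.0, 0.05):.3f}")
```

Below resonance T ≈ 1 (vibration passes straight through), at resonance it is actually amplified (which is one reason damping matters), and well above resonance it falls off rapidly.  Heavier damping tames the resonance peak but slightly worsens the attenuation at high frequencies - the trade-off described above.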

Leaving the old Caddy to one side, for my audio application I want to make sure that my BHK300s are isolated from all frequencies at 20Hz and above.  In fact, the lower the better.  In my mind, my ideal would be something like a 1Hz resonance, with a small amount of damping that would allow the 1Hz bounce to die out over something like 4-5 seconds.  Armed with these design objectives I can sharpen my pencil, sit down, and work out spring rates, damping factors, masses and so forth.  And if I were designing something in a professional context, that’s how I would approach it.  But that’s not what’s happening here.
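Those two targets - a 1Hz bounce dying out over 4-5 seconds - actually pin down the damping ratio, since the bounce envelope of a lightly damped oscillator decays as exp(−ζω₀t).  A sketch of that back-of-envelope step; treating "died out" as decayed to 5% of the starting amplitude is my own assumption:

```python
import math

def damping_ratio_for_decay(f0_hz: float, settle_time_s: float,
                            residual: float = 0.05) -> float:
    """Damping ratio zeta such that the bounce envelope exp(-zeta*w0*t)
    decays to `residual` of its starting amplitude in settle_time_s."""
    w0 = 2 * math.pi * f0_hz
    return -math.log(residual) / (w0 * settle_time_s)

# A 1 Hz bounce that dies out (to 5%) in about 4.5 seconds:
print(round(damping_ratio_for_decay(1.0, 4.5), 3))  # → 0.106
```

A ζ of roughly 0.1 is light damping, consistent with the "small amount of damping" called for above.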

All I wanted was a simple test bed to see whether any of this stuff actually had any audible effect in my system.  What I came up with was a Typhoon 17” x 13” Butcher Block, which would form the base of my isolation table, and which happens to have the exact same dimensions as the BHK300 chassis.  Plus it looks good, and has the practical convenience of four sturdy, built-in legs.  As a platform, Butcher Block is a mechanically well-damped material, which is a plus.  For my damped springs I decided to use a high-technology pneumatic approach.  I bought a set of 12.5” inner tubes for stroller/pushchair wheels.  The idea was to inflate the inner tube, lay it on its side on the Butcher Block, and sit the BHK300 directly on top of it.

My calculations suggested that those inner tubes could support the weight of the BHK300 without my needing to inflate them to anywhere near their maximum rated air pressure, so I felt confident that the weight of the monoblocks wouldn’t just burst them.  As it happens, without the tyre to constrain their expansion, you cannot pump these things up anywhere near their rated pressure!  All I could do was pump them up as much as I felt they would safely sustain, and see how I got on.  The pressure was too low to register on my automobile tyre pressure gauge, so I can’t tell you what the actual pressure was.  But that’s fine, because my target pressure was also too low to register!
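The "too low to register" observation squares with a simple estimate: an air cushion carries a load at a gauge pressure of roughly the load divided by the contact area.  A sketch of that estimate; the 30 square inch contact patch is a hypothetical figure of my own, not a measurement:

```python
def required_pressure_psi(weight_lb: float, contact_area_in2: float) -> float:
    """Approximate gauge pressure needed for an air cushion to carry
    a load: pressure = load / contact area."""
    return weight_lb / contact_area_in2

# An 83 lb monoblock on a flattened tube with ~30 sq in of contact:
print(round(required_pressure_psi(83.0, 30.0), 1))  # → 2.8
```

A figure of around 3 psi sits below the range where a typical automotive gauge reads reliably, so a gauge showing nothing is exactly what you would expect.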

Since the weight in the BHK300s is not conveniently centred, you have to position the inner tubes slightly to the right of centre on the Butcher Block.  To my surprise it proved easy to get it lined up so that the monoblocks sit nice and level.  See the photograph.  Also to my surprise - and great pleasure - this arrangement proved to have a resonant frequency of ~1Hz and a natural resonance which damps out in 4-5 seconds.  This is exactly what I thought in advance might be my ideal setup.  This is great news, because manhandling those 83lb monoliths every time you want to make a change is not my idea of fun.

Of course if the isolation method is right, the mechanical grounding method must be wrong, no?  I had one unused inner tube, so I pumped it up and put it under the P10 on the bottom shelf of the SolidSteel rack.  This inner tube turned out to be a bit narrower than the other two, and provides the P10 (a mere 73lb lightweight) with a resonance frequency more like ~5Hz, slightly more damped than the arrangement under the monoblocks.  If I decide that makes a difference I can always shell out another $10 for one of the wider tubes.

So, how does it all sound?  Let’s have a listen.

At this point I am somewhat concerned at the possibility of losing credibility.  The changes I am hearing are not subtle.  No, not subtle at all.  Have you ever changed a pair of interconnects or speaker cables?  How about a USB cable or a power cord?  There is no doubt in my mind that those components can make a real and valuable contribution to the performance of a high-end audio system, particularly in the area where I operate, where vastly diminishing returns are the order of the day.  But for sure the changes engendered by such tweaks are decidedly subtle.  I’m sure many people might listen along as I audition a pair of interconnects and shrug their shoulders, whereas I might conclude that one of the sets is worth an investment of a thousand dollars.  Still others, not content with merely shrugging their shoulders, will fire off a spate of spiteful invective on every audio forum that they can make the time to sign on to.  Subtle effects are what we have become used to dealing with when auditioning audio “tweaks”.

Well, that’s not what is happening here.  The changes wrought by those $10 inner tubes are more on a par with swapping out a pair of good 20 year-old loudspeakers for a pair of modern high-performance units.  The bass is deeper and fuller, with less overhang and more tuneful delineation of pitch.  The stereo image is tighter, much deeper, and more holographic.  A whole layer of grain that I didn’t even know was there has seemingly been stripped from the midrange.  Vocals in particular seem more natural and more three-dimensional.  I could go on, but I won’t because the specifics of what I am hearing may prove to be particular to my own system.  However, I don’t think that the general level of benefit is going to be all that system-specific.  We are talking about mechanical isolation, and I don’t think my PS Audio components depart in any radical way from industry norms with regard to their mechanical standard of construction.  I expect major noticeable improvements are going to be evident regardless of what system you are using.

You might well point out that the BHK300 monoblocks contain vacuum tubes, and that vacuum tubes are well known for being particularly microphonic, and you would be right.  But on the other hand I listened for a short while with inner tubes only underneath the BHK300 monoblocks, and when I placed the third inner tube under the P10 PowerPlant - which contains no vacuum tubes - the magnitude of the change was just as large, if not larger, and was probably more impressive in terms of the qualitative improvements it brought.  Whereas the inner tubes under the monoblocks brought immediate and indisputable benefits, it was not until the final tube was placed under the P10 that everything suddenly came together as a coherent whole.

This was originally conceived as a trial experiment.  The idea was to see how it went, and decide where to go next.  In truth I’m not sure where to go next.  All I know is I am going to focus on enjoying the music until the inner tubes eventually burst or deflate or whatever it is they are going to do.  I expect stability and longevity are going to limit their long-term practicality, but until then I’m going to be enjoying it for what it is doing right now.