Sunday, September 25, 2016

Has the Internet enabled lying, crooked Donald?

We are in the early days of the Internet as a political medium and hopefully it will co-evolve along with our society and education system.

Last June, Donald Trump began calling Hillary Clinton "lying, crooked Hillary" and established a Web site of the same name. Leaving Trump's coarseness aside, is the allegation fair? (His coarseness calls for a separate post).

Politifact is a fact-checking Web site run by a Florida newspaper. They rate political statements on a six-level scale ranging from True to Pants on fire:

True – The statement is accurate and there’s nothing significant missing.
Mostly true – The statement is accurate but needs clarification or additional information.
Half true – The statement is partially accurate but leaves out important details or takes things out of context.
Mostly false – The statement contains an element of truth but ignores critical facts that would give a different impression.
False – The statement is not accurate.
Pants on fire – The statement is not accurate and makes a ridiculous claim.
They justify their ratings with reasoned, sourced analysis and have been awarded a Pulitzer Prize. (You can read the details on the rating rubric here).

The following are summaries of the Politifact ratings of statements by President Obama, Hillary Clinton and Donald Trump. (Click the image to enlarge it).

As you see, Clinton is a bit more honest than President Obama and lies much less frequently than Donald Trump. The ratings of Obama and Clinton have changed little since January. Trump is telling the truth a little more frequently, but over half of his statements were found to be lies.
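Summaries like these can be reduced to a single number. The sketch below tallies the share of statements falling in the three "false" levels; the rating counts are made up for illustration, not actual Politifact data:

```python
# Hypothetical sketch: collapse a six-level Politifact-style tally into one
# "share rated false" figure. Counts are illustrative, not real data.

ratings = {
    "True": 15, "Mostly True": 37, "Half True": 51,
    "Mostly False": 63, "False": 95, "Pants on Fire": 48,
}

LIE_LEVELS = {"Mostly False", "False", "Pants on Fire"}

def lie_share(counts):
    """Fraction of rated statements falling in the three 'false' levels."""
    total = sum(counts.values())
    lies = sum(n for level, n in counts.items() if level in LIE_LEVELS)
    return lies / total

print(f"Share of statements rated false: {lie_share(ratings):.0%}")
```

With these made-up counts, a bit over half of the statements land in the false levels, which is the kind of figure quoted above for Trump.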

I retrieved the September ratings from Politifact this morning and retrieved the January ratings using the Internet Archive Wayback Machine.

I guess all politicians lie, but few, if any, lie as frequently as lying, crooked Donald. (The "crooked" part calls for yet another post on his business dealings).

The Internet has changed political campaigns just as newspapers, radio and television did. Candidates' statements are archived and Politifact and others can analyze them, but the Internet also enables the dissemination of lies like this faked image of Hillary Clinton and Osama bin Laden, which can be found on many Web sites:

(To be fair, a lot of Democrats shared a fake image showing President Bush holding a picture book upside down at the time he was informed of the 9/11 attacks).

The Internet also increases the odds that we will see lies we might "like." As Eli Pariser points out in his book The Filter Bubble, ad-driven sites like Facebook have an incentive to send us things we agree with to keep us on their sites longer.

The Internet enables us to easily create and disseminate lies and it also enables us to discover and expose them, but does that matter? Has the Internet brought us to what William Davies calls the age of post-truth politics? After all, Politifact shows that over half of Donald Trump's statements are lies, yet millions of Americans are willing to vote for him. While Hillary Clinton and President Obama lie less than Trump, they also have millions of supporters who are ignorant of or indifferent to their lies.

That is discouraging, but remember that we are in the early days of the Internet as a political medium and it may co-evolve along with our society and education system to bring us something better. For perspective, check out this early use of television in a political campaign:

Friday, September 23, 2016

Football streaming on Twitter -- too many commercials and a need to filter tweets -- but like all new media, it will improve.

I watched a bit of Thursday Night Football on Twitter last night. You could watch a small screen with tweets as shown above or go full screen and lose the tweets. I watched it on a laptop with a large, high resolution screen and on a Mac with a 21-inch display and the video was smooth and looked good on both. That was the good news.

The bad news was the commercials. I am not a football fan, so do not know how many commercials a typical broadcast game has, but it seemed like Twitter spent more time on commercials than the game. I would be curious to see statistics on the number of minutes spent on commercials, commentary and game action on Twitter versus broadcast television.

I “cut the cord” years ago, so am used to paying Netflix and others for streamed content without commercials. If I am typical, Twitter will fail with this commercial-based business model. (The Motley Fool Web site says the ads did not pay off in the first game, which was streamed last week).

For a while, I watched both the TV broadcast and the Twitter stream. The Twitter stream was relatively delayed, but the lag time varied and they did not have the same commercials. I wonder how the commercial sales and revenue are handled.

Turning to the user interface – I did not time it, but it seemed like there were about 20 tweets every thirty seconds. With that many tweets coming in, I think the best way to watch would be to go full screen during the live play and mute the audio and read tweets during the commercials -- not a good deal for advertisers.

I did not notice any obviously malicious tweets, so I assume there is some automatic or human filtering, but it would be better if they would let the user control the filtering. For example, to let one see only tweets from a selected group of friends or a selected group of experts like professional football players, sports analysts or professional gamblers.
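The user-controlled filtering suggested above could be as simple as a whitelist. The sketch below, with made-up account names and a made-up tweet structure, keeps only tweets from a chosen set of trusted accounts:

```python
# Hypothetical sketch of user-controlled tweet filtering: show only tweets
# from a whitelist of accounts (friends, analysts, players you trust).
# Account names and the tweet structure are invented for illustration.

tweets = [
    {"author": "analyst_anna", "text": "Great blitz call on 3rd and long."},
    {"author": "random_troll", "text": "This game is rigged!!!"},
    {"author": "ex_qb_joe", "text": "Watch the safety creep up pre-snap."},
]

def filter_by_whitelist(stream, allowed):
    """Keep only tweets whose author is in the user's chosen set."""
    return [t for t in stream if t["author"] in allowed]

experts = {"analyst_anna", "ex_qb_joe"}
for tweet in filter_by_whitelist(tweets, experts):
    print(f'{tweet["author"]}: {tweet["text"]}')
```

Letting the viewer swap in different whitelists -- friends for one game, professional analysts for another -- would address the volume problem without any automatic moderation.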

But, lest I seem too negative -- this is their first try at streaming sports. All new media stumble at first, often copying what came before. The first movies were made by filming stage plays and there are many other examples from radio, TV, textbooks, online learning, etc.

Twitter is streaming the US presidential debates next -- let's see how they do on that.

Thursday, September 22, 2016

Verizon wouldn't lie to sell phones -- would they?

In previous posts, I have been unkind to my local-monopoly Internet service provider Time Warner Cable and the US ISP industry in general. I've also been unkind to Verizon FIOS in their battles with Netflix and criticized their "gentleman's agreement" to abandon fiber to focus on wireless connectivity, leaving me at the mercy of Time Warner Cable.

Those stories all had to do with landlines -- what about Verizon mobile? My wife had an unlimited account with a reseller of Verizon mobile service (a "mobile virtual network operator" or MVNO). She no longer needed the unlimited account, so decided to switch to Verizon.

This was shortly before the new iPhones came out, so she wanted to keep using her old phone for a month or so and, since she had been using it on the Verizon network, she assumed it would work after shifting her account from the MVNO.

To be safe, I went online and had the following chat with a Verizon salesman named Brandon:

Chat transcript -- click image to enlarge

As you see, I gave him the phone's mobile equipment ID (MEID) number and he said it was incompatible with the Verizon network and offered to sell me a new phone. I pointed out that the old phone had worked on the Verizon network for years and he suggested that it may have been blacklisted or have the wrong antenna -- like AM versus FM radio. He elaborated, saying it might be compatible with some, but not all, of their network or perhaps my wife had been roaming for five years.

Maybe the phone would not work somewhere on Earth, but it has worked everywhere my wife has been in the United States and abroad for the past five years. (She is not an early adopter :-). Here are the specs (MEID 990001106522642):

I am not a mobile phone geek -- Is there something that would render the phone incompatible with Verizon mobile service?

We ignored Brandon's warning and opened a Verizon account -- the phone worked fine (in southern California) until it was replaced with a new iPhone.

My guess is that Brandon was just telling me what he saw when he queried Verizon's database, so he was not lying. But is the Verizon compatibility database accurate and, if not, is Verizon lying in order to sell new phones? (This reminds me of the Volkswagen smog check shenanigans).

Thursday, September 15, 2016

The Internet revolution in perspective

Google, Facebook, self-driving cars, iPhones, etc. are changing our lives, organizations and society, but this is not unprecedented. Similar scientific and technical disruption occurred about 100 years ago. Guglielmo Marconi invented and commercialized electronic wireless communication at that time and, in his review of a book on the life of Marconi, Paul Kennedy writes that in the first decade of the 20th century:

Breakthroughs in science and technology occurred so often that it would be brash to claim that any one of them “changed the world” (which doesn’t stop proponents from doing so). The Wright brothers’ success in aviation in 1903 led to national air forces being created only a few years later. The automobile was becoming reliable, standardized and produced in such numbers as to change urban landscapes. Giant trans-Atlantic liners altered oceanic travel. Electric power was coming to houses and oil-fueled propulsion replacing coal-fired engines. The Dreadnought battleship (1906) made all other warships obsolete.
In his biography of Albert Einstein, Walter Isaacson writes that:
In 1915 Einstein wrested from nature his crowning glory, one of the most beautiful theories in all of science, the general theory of relativity ... His fingerprints are all over today's technologies. Photoelectric cell and lasers, nuclear power, fiber optics, space travel and even semiconductors all trace back to his theories.
This is not intended to diminish the impact of the Internet on our lives, organizations and society, but to lend perspective. The Internet is disruptive, but so were the printing press, number systems, phonetic writing, agriculture, the recognition of natural cycles, spoken language, etc. What else?

Wednesday, August 10, 2016

NBC streaming the Olympics on the Internet -- quantity, not quality

NBC Sports is allowing many (not all) cable TV subscribers to stream Olympic Games events live and to watch archived copies after they finish. There are several ways to watch the streams -- in a Windows 10 app, on a TV set using the Roku app and on the Web. I checked them out and was generally disappointed.

With the Roku app, watching a live event is like watching a broadcast TV program -- you sit back and watch, but have no control. When you are watching the archived recording of a completed event, you can pause and fast forward/reverse, but that is all.

The event navigation interface is also lame. There are five linear menus: Features, Live and upcoming, Highlights, Full event replay and Sports. You scroll through a list of thumbnail images to find the one you want to watch. That is fine for the 13 features and 34 sports, but not so handy for the 300 Highlights, 771 Live and upcoming or 538 Full event replays. (These numbers will vary, of course). A search function is sorely missed.

There are also frequent delays for updating content and, if you try to watch a live event, you are often informed that it has concluded or will begin shortly -- they do not update menus at the time an event starts or ends.

The Windows 10 live-event user interface is a little better, but nothing to write home about. As you see below, there is a volume control and buttons to pause/resume, go full screen and to turn on/off closed captions.

NBC Sports app user interface

The navigation interface, shown below, is similar to that of the Roku. There are five linear menus: Features, Live and upcoming, Replay, Highlights and Sports. There are no scroll bars, so you use the cursor control keys to move through the selections.

Windows 10 app navigation interface

As with the Roku, the "live" events are frequently either completed or not yet begun.

The Web user interface is a bit better. As you see below, it includes the handy "15-second rewind" button and rewind and fast-forward buttons.

Web user interface

The Web navigation is also more flexible and complete than with the Roku or Windows 10 app. The bad news is that there are ads and you cannot go full screen. (The others also have interspersed ads).

There is also an Android app, but I am not going to bother installing it -- I prefer watching sports on a TV set and it will doubtless have some of the limitations discussed above.

As I said at the start, I am generally disappointed with NBC's streaming coverage. Part of that is because my expectations were raised by NBC's excellent coverage of the Tour de France. The mobile and desktop user interfaces were clean and powerful and they presented real-time data during the race and insightful analysis after each stage was completed. The business model was also different. You pay for Tour de France access, but do not have to watch ads (which are often followed by a screen saying the event you want to watch has already concluded).

I realize that the Olympics are a tougher event to cover -- many venues and many elimination levels -- but I hope that by 2020, the NBC Olympic team will sit down with the Tour de France team and re-design their coverage. I hope they also offer an ad-free option.

Thursday, July 28, 2016

The digital divide has persisted over the life of the Internet.

National Bandwidth Potential, a novel Internet diffusion metric indicating application feasibility, shows a persistent digital access divide.

People have been trying to measure the global diffusion of the Internet and the digital divide between rich and poor nations for twenty-five years. The first to do so was Larry Landweber, who noted whether or not a nation had an Internet (or other) connection. It was a binary metric -- yes or no -- and it was suited to its time because there were only a handful of users, who were restricted to teaching and research, using a few applications like email, file transfer, news groups and remote login.

1991 Internet diffusion (purple)

Five years later, the Internet had many more users and applications in commerce, government, entertainment, etc., so my colleagues and I developed a multidimensional Internet diffusion framework. One of our dimensions was pervasiveness, based on the number of users of the Internet per capita.

That made sense in 1995 since there were relatively few applications available for the slow dial-up connections of the time. A few people had faster ISDN or DSL connections and an organization might connect over a faster digital link, but most users were running the same few applications over analog phone lines.

Today, users per capita is pretty well meaningless. A Cuban who accesses email using a 2G cell phone and a Google Fiber user who has symmetric gigabit access to multiple computers and devices on a home LAN are clearly not equal.

To some degree, we anticipated this sort of thing via the connectivity infrastructure dimension in our framework. It considered international and intranational backbone bandwidth, Internet exchange points and last-mile access methods, but it was an imprecise measure -- mapping a nation into five levels -- and data was not readily available. (Our case studies typically required two weeks of in-country interviews).

Skipping ahead twenty years, a paper by Martin Hilbert uses an interesting diffusion metric -- nationally installed bandwidth potential (BP), which is a function of the number of telecommunication subscriptions (fixed and mobile), the kind of access technology per subscription (cable, DSL, GSM, etc.) and the corresponding bandwidth per access technology. The estimation of the latter is quite complex, taking factors like data type, upload/download speed, compression, etc. into consideration. The methodology is described in a ten-page supplement to the paper. (It is behind a paywall -- let me know if you would like a copy).
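At its core, the BP metric is a sum over access technologies of subscription counts times the bandwidth attributed to each technology. The sketch below shows that skeleton with invented subscription counts and speeds; Hilbert's actual estimation is far more elaborate, as noted above:

```python
# Hypothetical sketch of the skeleton of a bandwidth potential (BP) metric:
# sum over access technologies of (subscriptions x bandwidth per subscription).
# All numbers are invented for illustration, not Hilbert's data or method.

# (subscriptions, downstream speed in kbit/s) per access technology
subscriptions = {
    "dsl":   (2_000_000, 10_000),
    "cable": (1_500_000, 50_000),
    "gsm":   (8_000_000, 200),
    "lte":   (3_000_000, 20_000),
}

def bandwidth_potential(subs):
    """National BP in kbit/s: subscriptions times bandwidth, summed over technologies."""
    return sum(count * kbps for count, kbps in subs.values())

bp = bandwidth_potential(subscriptions)
print(f"Installed bandwidth potential: {bp / 1e9:.1f} Tbit/s")
```

Note how the metric captures what users-per-capita misses: the eight million slow GSM subscriptions above contribute almost nothing to the national total compared with the far fewer cable subscriptions.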

Hilbert computed the BP of 172 countries from 1986 to 2014 and observed that the digital access divide is persistent. It is true that wireless connectivity is relatively inexpensive and mobile Internet use is growing rapidly in developing nations, but it is just as clear that many applications are precluded by the speed and form factor of mobile devices. A WhatsApp chat with a friend is not the equivalent of watching a high-resolution movie on a large screen TV and I am confident that Hilbert did not conduct his research or write the paper I read on a mobile phone. Even reading this blog post, following its links to other documents and taking notes on it would be tedious on a phone.

I expect this imbalance to persist because improved technology is costly and it enables ever more complex, demanding applications. The only trend I see that may in part reduce this feasible-application gap is the move to server-side processing for big data and AI applications, but even then interaction and the display of results will require bandwidth.

Hilbert's data also shows global shifts in application feasibility. As shown below, BP dominance has shifted from the US in the early, NSFNET days to China today. Korea has joined the top ten and the shares of Japan and Western Europe have dropped. The share of the bottom 162 countries rose slightly in 2001, but had fallen below the 1986 level by 2014.

Ten countries with most installed bandwidth potential

Income differences explain much of the persistence of the digital divide, but policies regarding Internet infrastructure ownership and regulation are also important. For example, Estonia ranks 40th in the world in GDP per capita, but is ranked 20th on the International Telecommunication Union ICT Development Index.

Policy choices may play an even larger role among the top ten nations. The US ranks 9th in GDP per capita and Korea is 30th, but my son, who lives in Korea, pays $22 per month for symmetric, 100 mbps connectivity and has a choice of several competing Internet service providers. I live in the US and pay considerably more than he does for considerably slower service and have no ISP choice -- I am stuck with Time Warner Cable.

While we are waiting for enlightened policies, we can hope for technical change like the OneWeb and SpaceX satellite Internet projects.

Tuesday, July 12, 2016

Coverage of the 2016 Tour de France -- big data

Gathering real-time data on each rider enables a clean video user interface, real-time presentation of race status and post-race data analysis.

For several years, I wrote posts on streaming coverage of the Tour de France, Olympic Games and the Tour de California. Those posts focused on topics like user interface, ads, video quality and comparison of NBC's coverage with that of the BBC.

I missed last year due to travel, but am watching the current Tour de France, and there have been significant changes for the better.

For a start, NBC now bundles coverage of the Tour de France with several other races, so one purchases an annual subscription. That means cycling fans can see more races and, presumably, that the archive footage will remain accessible at least during the year.

(In the past, both NBC and the BBC have deleted their archives some time after the end of the Tour. I believe they have an information stewardship obligation and should maintain the archives of important events for analysis by journalists, scholars, fans, remixers, etc. The cost of doing so would be low and, if they were not behind a paywall, they could be found by search engines.)

The video quality is also better than I recall -- a consistent 2.2 mbps stream with none of the dropouts we saw during 2014.

The user interface has been simplified since 2014 when it had five modes -- live video, standings, stages, riders and more:

2014 Five viewer modes

and you spent most of your time in the four-frame Live Video mode:

2014 four-frame Live Video user interface

By contrast, the live video UI this year is simple: small race-status indicators, like the time gap between the race leader and the peloton in the screenshot below, pop up from time to time on a full video screen with customary controls at the bottom:

2016 live video user interface

At first, stripping out ancillary information might seem a step backward (or forward if you are an Apple minion), but it is not. Much more ancillary information is available this year and it is accessed through a "Tour Tracker" site. The Tour Tracker allows you to see in-depth information for each stage, with tabs for Teams, Stages, Standings, Results, Recaps, Replays and Photos and a link to the live video window shown above.

2016 Tour Tracker user interface

All of that data is available because the race is now very well instrumented. Each bike has a small GPS transponder affixed to the seat.

GPS transponder

The data from the transponders is uploaded to the mobile data center of Tour partner (and team sponsor) Dimension Data, enabling them to provide live data during the race -- check out the following video (2m 50s).

This data collection enables Dimension Data to provide real time status of the race, individual riders, teams, etc. In the example below, we see the speed of several riders, the time gaps between them and the distance from the leader to the finish superimposed on the live video window.

Status update on Live Video viewer
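The gap statistic is a simple function of the transponder data: how far behind the chasing group is and how fast it is moving. The sketch below shows that calculation with invented numbers; it is my guess at the arithmetic, not Dimension Data's actual method:

```python
# Hypothetical sketch of a live gap statistic derived from GPS transponder
# data: the time a chasing group needs to reach the leader's current position,
# given each group's distance along the course and the chaser's speed.
# The function and numbers are illustrative, not Dimension Data's method.

def time_gap_seconds(leader_km, chaser_km, chaser_speed_kmh):
    """Time gap: distance between the groups divided by the chaser's speed."""
    gap_km = leader_km - chaser_km
    return gap_km / chaser_speed_kmh * 3600  # hours -> seconds

gap = time_gap_seconds(leader_km=142.0, chaser_km=140.5, chaser_speed_kmh=45.0)
print(f"Leader's gap over the peloton: {gap:.0f} seconds")  # 1.5 km at 45 km/h -> 120 s
```

With a GPS fix on every bike, the same positions-along-course data yields rider speeds, group compositions and distance to the finish, which is presumably how the superimposed statistics above are produced.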

In addition to real-time status statistics, Dimension Data is able to analyze data after a stage is complete. For example, the following image shows that the stage 6 sprint winner, Mark Cavendish, accelerated a little bit later than the second and third place finishers. I would expect that this sort of data is helpful to the racers and their managers. (The teams receive some information that is not available to the general public).

Post stage analysis

GoPro cameras are another source of Tour data. Since 2015, GoPro has been a Tour partner, with cameras on official cars and motorbikes, team cars, mechanics and selected bikes. Fellow Tour enthusiast Jim Rea spotted some live GoPro footage during stage 1, but has not seen any subsequently. That being said, you can see archived video after the stages are complete by searching on Google for "GoPro: Tour de France 2016 - Stage n Highlights", where n is the stage number.

The following video is not from the Tour de France, but it shows what it is like to be in the sprint at the end of a race.

And the following video shows a crash from a mechanic's point of view:

(A 360-degree virtual reality version would be cool).

The NBC package also includes a free mobile app. The live video on the Android app has a simplified user interface with only a pause/play toggle. The data view offers the options shown below, but it is not as complete as that of the Tour Tracker Web site.

I watch The Tour on a wide-screen laptop with a 3200 by 1800 pixel display and toggle back and forth between the Tour Tracker and video windows. An alternative would be to run the mobile app on a smart phone, casting it to a TV set and using a laptop or tablet for the Tour Tracker.

The bottom line is that race coverage has improved significantly since I last watched The Tour. The video quality has improved noticeably and the addition of real-time GPS data has added to the experience. I didn't even bother trying a VPN tunnel to check the BBC coverage.

Tuesday, May 31, 2016

Brick and mortar stores -- Apple, Microsoft and Google?

Dell, HP and others now have relatively upscale Chromebooks that approach, and in some features surpass, the high end Google Pixel and Google just announced that Chromebooks will be running Android apps in the future. At first, those apps might not be optimized for the Chromebook form factor, but many will look good in phone or tablet-size windows and I bet we see Chromebook-friendly Android apps in the future.

Given all that, I thought I might like to get one, so I headed over to the closest thing I know of to a Google store -- the Google section of my local Best Buy.

It's a total Fail.

As shown here, all they had was half a dozen low-end machines. That might work for a Chromebook for a school child, but it is not sufficient for someone thinking of spending $700 or more.

But, it gets worse.

There were two sweet, young sales people wearing Google shirts next to the Chromebooks, so I asked if they had other machines -- perhaps a Pixel -- somewhere else in the store. It turned out they didn't know what I meant by "Google Pixel." I explained what a Google Pixel was and one of them went off to inquire. When she came back, she said they did not have them.

Since I was there, I asked about the six machines they had on display and discovered that they were confused about the difference between memory and storage. None of the machines on display had more than 2GB of memory, but they assured me that that was no problem because you could attach a large external hard drive.

(In the early days of personal computers, there was a joke that the difference between computer store sales people and car sales people was that the car sales folks knew they were lying).

I don't know if these kids were Google or Best Buy employees, but they were wearing Google shirts and that surely cheapens the top-notch "Googler" brand.

If Google hopes to sell and support high-end hardware, they will have to do much better than this, and that will be expensive.

A little while ago, I had been in a shopping mall near my home and dropped in on the Apple and Microsoft stores, which are just a few stores apart.

It was the middle of the week, but the Apple store was quite crowded. Customers were talking with sales people, playing around with machines, getting help from Apple "geniuses," etc. Apple runs classes in the stores, offers walk-in customer support and the employees are knowledgeable and helpful. I snapped this picture just before the man in the foreground told me to stop taking pictures:

I walked over to the Microsoft store and found it to be pretty well empty -- the store employees outnumbered the customers. They had a wide range of computers on display -- from both Microsoft and OEMs. They also offered service and classes and the workers were as knowledgeable and friendly as those in the Apple store. There was no pressure and no problem playing around for as long as I wanted to and they were happy to have me take pictures.

I had visited the same Microsoft and Apple stores two days after Christmas in 2014 and, while both were more crowded post Christmas, the Apple store was totally jam packed and the Microsoft store still fairly empty.

I personally don't see much difference between the Microsoft and Apple stores and can't figure out why one is so much more popular than the other, but I can tell you for sure that Google will have to be creative and spend a lot of money if they want to sell us high-end hardware. They will also have to step up customer support. You can sell a $35 Chromecast in a Best Buy store or online, but not a $1,300 Pixel Chromebook.

Monday, May 02, 2016

Two teaching experiments with Google Hangouts on Air

I teach a class on the applications, implications and technology of the Internet and a major theme running through the course is the use of the Internet as a tool for collaboration. As such, we tried two teaching experiments using Google Hangouts on Air (GHoA).

(GHoA is a free video conferencing application for up to ten people. It differs from other video conferencing services in two ways -- an audience of unlimited size can watch the video conference while it is live and it is automatically recorded and stored on YouTube when it ends).

Student "office hours"

Our first experiment was having students who had done well on the midterm hold "office hours" online using GHoA. I did not participate in any of the sessions, but reviewed the videos afterward.

Students holding "office hours" online

Since I was not "present," the students were generally unguarded and light hearted, talking more freely than in class. Their discussion revealed a couple of content misconceptions, which I corrected the following week.

They also discussed the class itself. One group agreed that it was harder than they had expected and one group felt free to criticize the class. That gave me the opportunity to bring their criticism to the entire class, discuss the point they made and to give the most critical student extra credit for speaking his mind.

They talked about their study habits and how to do well in the class. In doing so, one group came up with the idea of using our weekly quizzes as a “study guide,” answering and discussing the questions online. (I don't give them the answers).

They also got to see and hear themselves in an online conference and learned some practical things about camera location when using a phone or tablet, microphone positioning, speaker feedback, etc.

The sessions were not mandatory, but I gave those who participated extra credit for convening or attending a session. Many students chose not to participate and I polled them, asking why. Schedule conflicts at the time sessions were convened was the most frequently cited reason.

An online class meeting

The second experiment was to conduct a class session using GHoA instead of in the classroom. (We met at the usual class time, so schedule conflicts were not an issue).

I begin each week with a presentation of misconceptions I saw in their homework assignments and quiz answers from the previous week and current events relevant to our class. Since the goal of the class is to introduce the "skills and concepts needed for success as a student and after graduation as a professional and a citizen," that is followed by presentations focused on a couple of concepts and on a skill, for example, how to use GHoA, an image editor, etc.

I followed the usual in-class format during the GHoA session. The first nine students who "came" to the GHoA session joined the live video conference and those who logged in later joined the viewing "audience."

This was the first time I had run a GHoA class, and it was a learning experience for me. As shown below, I made a number of technical errors. It also felt strange to be presenting material without seeing the audience -- it made me appreciate radio announcers. I suspect one could get used to it.

Mistakes due to my inexperience

I also made the mistake of not preparing the students well enough. They only had one presentation and one assignment with GHoA before we ran our experiments.

After the session, I polled the students on their experience during the live hangout and their use of the recorded video. Here are the poll results for three of the questions on the live hangout:

Selected responses regarding the live class session

And two of the questions about their use of the recorded video after the session:

Selected responses regarding the session recording

As you see, they said they were more comfortable viewing the session at home than in class, their minds were less likely to wander and they generally thought it was as good or better than the classroom as a learning experience. The majority went back and watched at least a portion of the session recording, but there was an inconsistency in their reporting.

The last four questions asked about their overall preference and solicited comments. When asked whether they preferred meeting in class or meeting in a GHoA, 53% preferred the GHoA, 13% the classroom and 33% were indifferent. When I asked them in class what they thought was the best way to offer the course next semester, the consensus was that the first few meetings should be in the classroom and about half of the remaining meetings should be online.

(There were 19 questions in the entire questionnaire and you can see the full poll results (including their comments) here).

This was my first try at using GHoA and I made several mistakes which could be corrected. If others have used GHoA as a collaborative teaching tool, please share your experience.

Friday, April 01, 2016

The Tesla Model 3 reminds me of the original Macintosh, but Elon Musk does not remind me of Steve Jobs.

The Mac and the Tesla Model 3 have a lot in common. For one, the Model 3 was not the first electric car or Tesla's first electric car and the Macintosh was not the first computer with a graphical user interface (GUI) or Apple's first GUI computer, but both came out at just the right time.

Workstations, the Xerox Star and the Apple Lisa all had GUIs before the Mac, but they were too expensive and remained niche products. When the Mac came out, technology had just improved to the point where a consumer computer with a GUI could gain a foothold and catch on. The Mac, with its proprietary hardware and software, was the only GUI game in town for several years until technology improved to the point that commodity hardware could support a GUI and Microsoft brought out Windows 3 and 3.1.

Similarly, electric cars, including Tesla's, preceded the Model 3, but they were too expensive and inconvenient to grow beyond a niche market. Battery, material and other technologies have now improved to the point where the $35,000 Tesla will appeal to the mainstream. One does not have to care about the environment or global warming to like it. Its sensors, safety features, comfort, size, increased battery capacity (coupled with more charging stations and home chargers) and ability to be upgraded via software download will appeal to a wide market, including owners of gasoline-powered cars.

As it was for the Mac, the timing is right for the Model 3. It's not too soon and not too late, but just right, like Goldilocks.

As technology improved, personal computers with GUIs became ubiquitous. The transition away from gasoline will take longer than the transition from command lines to GUIs because of the longer replacement cycle for cars, but the tipping point has been reached.

There are also financial parallels between the two. Tesla bankrolled the Model 3 with sales of the Roadster and Models S and X, and Apple bankrolled the Mac with sales of the Apple II, which was running out of gas when the Mac was delivered. (Tesla's stock is now 60% above its February 12 price.)

Both companies developed comprehensive proprietary designs. Apple built the hardware and software for the Mac and Tesla is making the car and the batteries.

But that is where the similarities end. Apple holds on to its hardware and software innovations, protecting them with patents and lawsuits. Not Tesla. On June 12, 2014, Tesla released all 249 of its patents, saying it would not sue anyone who used its technology in "good faith." As shown below, they took the plaques down from their "wall of patents" after releasing them, replacing them with an image and the slogan "OEMS all our patent are belong to you." (I think Yoda wrote that for them).

Tesla's "wall of patents" before and after (image source)

It seems that Elon Musk sees other car and battery manufacturers as collaborators in the effort to replace gasoline-powered cars rather than competitors.

Disclaimer -- I am kind of an Elon Musk fanboy and make my students watch these videos of Musk being interviewed by Sal Khan, announcing the formation of Tesla Energy and recruiting engineers for the SpaceX satellite Internet project.

Update 4/7/2016

In a blog post entitled The Week that Electric Vehicles Went Mainstream, Tesla says they received 325,000 reservations for the Model 3 in the first week. They also say that translates into about $14.5 billion in sales if all the reserved cars are purchased.

The base price of the car is $35,000, but these figures average out to nearly $45,000 per car. Elon Musk indicated that the base price includes all of the sensors and software, but Tesla clearly expects to sell relatively expensive options like a second motor for all-wheel drive and additional batteries for extended range per charge.

Friday, March 11, 2016

The Khan Academy -- on the Internet or a LAN near you

The Khan Academy began when hedge fund analyst Sal Khan started posting short, conversational videos on YouTube to help his cousin with her math class. The videos went viral. Today there are 23 courses in math, 7 in science, 4 in economics and finance, 25 in the arts and humanities, 3 in computing and preparation for 8 tests like the SAT along with content from 25 high-profile partners.

The Khan Academy is a non-profit organization that promises to provide a world-class education that is "free for everyone forever," and their open source software is available on GitHub. Over 39 million "learners" have used the material and it is being translated into 40 languages.

As shown below, the courses are built from fine-grained modules, each focused on a single concept and including a test of mastery. The modules are arranged hierarchically, and a student has not completed the course until he or she has mastered every module -- they encourage experimentation and failure, but expect mastery. (Getting a C in a typical college course means the student understood only about half of the material and will do poorly in classes for which the course is a prerequisite -- an effect that compounds throughout college and into the workplace).

Portion of the beginning arithmetic course knowledge graph
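
The mastery-gating idea can be sketched in a few lines of code: a student may attempt a module only after mastering all of its prerequisites. This is purely illustrative -- the module names and prerequisite structure below are invented, not the Khan Academy's actual knowledge graph:

```python
# Invented miniature knowledge graph -- module names and prerequisites
# are illustrative, not the Khan Academy's actual data.
prereqs = {
    "counting": [],
    "addition": ["counting"],
    "subtraction": ["counting"],
    "multiplication": ["addition"],
    "division": ["multiplication", "subtraction"],
}

def unlocked(mastered):
    """Modules a student may attempt next: every prerequisite mastered."""
    return sorted(m for m, pre in prereqs.items()
                  if m not in mastered and all(p in mastered for p in pre))

print(unlocked({"counting"}))              # ['addition', 'subtraction']
print(unlocked({"counting", "addition"}))  # ['multiplication', 'subtraction']
```

A dashboard like the one below is then just a report over this graph: which modules each student has mastered, which are unlocked, and where he or she is stuck.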

In addition to the teaching content, the Khan Academy software presents a "dashboard" that enables a teacher, parent or other "coach" to monitor the progress of a student or class. The red bar shown in the dashboard view below indicates that a student is stuck on a given concept. The teacher can then help him or her or, better yet, have a student who has already mastered the concept tutor the one who is stuck. (Research shows that the tutor will benefit as well as the tutee -- "to teach is to learn twice").

Dashboard with fine-grained progress reports

The dashboard enables a coach to adapt to the strengths and weaknesses of each student and spot learning gaps. They understand that students may be blocked by one simple concept, and sprint ahead once it is mastered. (I recall sitting in freshman calculus class, and being totally lost for half the term, until I figured out what the teacher meant when he said "is a function of" and the class snapped into focus).

Confusion on a single concept "blocked" this student.

The third major component facilitates community discussion among the students taking a given class, allowing for questions, answers, comments and tips & thanks.

Tracking student participation in the course community

But, what if you don't have Internet access?

Learning Equality grew out of a project to port the Khan Academy software to a local area network at the University of California at San Diego. Their version, KA-Lite, can be customized for an individual learner, classroom or school running on a Linux, Mac or Windows PC as small as a $35 Raspberry Pi.

KA-Lite is three years old and has been used in 160 nations by over 2 million learners, from above the Arctic Circle to the tip of Chile, and translations are under way into 17 languages. The map below shows the organizations deploying it and their installations.

There is an interactive version of this map online.
To learn more, visit their Web site and contribute to their Indiegogo campaign.

See this companion post on MIT's Open Courseware, which is also available offline.

For the history, pedagogical philosophy, accomplishments and future of the Khan Academy along with a video collage showing examples of their content, see this 20-minute talk by Sal Khan:

Wednesday, March 09, 2016

MIT Open Courseware -- on the Internet or a mirror site near you

The grandaddy of online education is 15 years old.

MIT's Open Courseware project (OCW) has been offering free, open courseware under a Creative Commons license for 15 years. About 2/3 of tenure-track faculty at MIT have put material from over 2,300 courses online, and the courses are viewed by over 1.5 million unique visitors per month (monthly statistics here).

There are courses from 31 departments and it is not all engineering and science -- the schools of Management, Humanities, Arts, and Social Sciences and Architecture and Planning all have OCW courses.

The format varies from course to course, each offering at least one and perhaps all of the following: video/audio lectures, student work, lecture notes, assessments, online textbooks or interactive simulations.

OCW users are pleased -- 80% rate OCW's impact as extremely positive or positive, 96% of educators say the site has helped or will help them improve courses and 96% of visitors would recommend the site. (My guess is that these figures depend upon which course the person had taken, since the quality and quantity of material varies from course to course).

The most appealing facet of OCW for me is their Mirror Site Program, which provides copies of their Web site to non-profit educational organizations that have significant challenges to Internet accessibility, inadequate Internet infrastructure, or prohibitive Internet costs.

A mirror site requires a computer with a terabyte of storage that is accessible to students and faculty from a lab or over a local area network or intranet. The courseware is regularly updated, so someone has to be available for a download every week or so and to coordinate with OCW. They recommend an Internet connection of at least 1 Mbit/second for updates. The initial install (about 600 gigabytes) is typically made from a portable hard drive supplied by MIT.
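
A quick back-of-envelope calculation shows why a 1 Mbit/second link suffices for updates but not for the initial install. The 600 GB install size is from the paragraph above; the 2 GB weekly-update size is a hypothetical figure of my own for illustration:

```python
# Back-of-envelope timing for the mirror-site numbers above.  The
# 600 GB initial-install size is the quoted figure; the 2 GB weekly
# update is a hypothetical example.

def hours_to_download(gigabytes, mbits_per_sec=1.0):
    bits = gigabytes * 8 * 1_000_000_000   # decimal gigabytes to bits
    return bits / (mbits_per_sec * 1_000_000) / 3600

print(round(hours_to_download(2), 1))       # 4.4 hours -- an overnight job
print(round(hours_to_download(600) / 24))   # 56 days -- hence the hard drive
```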

They currently have 368 registered mirror sites around the world (about 80% in sub-Saharan Africa) and, while most of the material is English, selected courses have been translated into at least ten languages. For example, there are 94 in Spanish.

Most courses are in English, but some have been translated.

Translation is less important for university-level courses than for primary or secondary school since university students can often read and speak English; however, MIT would be happy for others to contribute translations.

Don't forget that the OCW system and course content are under a Creative Commons license, and they encourage people to replicate the material. For example, several copies could be made available in labs run by different departments within a university and at many universities within a nation.

If you are fortunate enough to have Internet connectivity, you can browse the site and course material online. If not, consider setting up a mirror site -- contact Yvonne Ng at MIT. If you do, keep me in the loop and let me know if I can help.

See this companion post on the Khan Academy educational site, which is also available offline.

Tuesday, February 23, 2016

Apple versus DOJ

I am far from an expert on this case or security in general, but this feels increasingly like a political battle that goes beyond this phone and this case. It is an issue between the FBI, which would like to see Congress pass a law ensuring "backdoor" access and decryption on all phones, and Apple and most other tech firms, which oppose such a law.

I believe both sides are sincere. The FBI believes they could better guard us against terrorists (and drug dealers and other criminals) if they could get a warrant to search any phone, as they can a car, home, etc. Apple believes that since the US is the current world leader in encryption technology, we are better off without such backdoors because the "keys" would be discovered by others and other governments, for example the Chinese, might press for backdoor access in selected cases. (Apple has also invested a lot in a pro-privacy marketing image).

There is no obvious correct answer and there will be unintended and unforeseen consequences regardless of the outcome. Only one thing is clear -- this is not a matter that should be decided by supporters of Donald Trump or Bernie Sanders or Hillary Clinton.

For more, see this article, which has links to statements by both sides and this Pew survey on public opinion:

Friday, February 12, 2016

Sci-Hub, a site with open and pirated scientific papers

Sci-Hub is a Russian site that seeks to remove barriers to science by providing access to pirated copies of scientific papers. It was established in 2011 by Russian neuroscientist Alexandra Elbakyan, who could not afford papers she needed for her research. She was sued by Elsevier, a science publisher, and enjoined to shut the site down, but she has refused to do so.

The site claims links to over 48 million journal articles, so I decided to try it out by searching for the title of an article I had just read: "A technological overview of the community network." The paper was published by Elsevier and costs $35.95 to download if you are not from an organization with an Elsevier account.

My search generated an indirect referral to Google Scholar, which returned the following error message:

The request had timed out, probably due to latency in the Google search plus transit time to Russia. It returned an error message with a link (arrow) and the suggestion that I try again, so I did. This time, it returned a captcha screen:

After the captcha, it retrieved a PDF file with the full article as it had been published.

The site is inconsistent. I tried it for a couple of the articles I have published in the Communications of the Association for Computing Machinery (ACM), which are online behind a paywall. It found the Google Scholar references, but was not able to retrieve the articles.

However, it was able to retrieve some of my ACM articles that other people had managed to liberate and post on their own Web sites and it found drafts that are on my Web site. It must do a Google Web search as well as call on Google Scholar.

The best way to use Sci-Hub is to find the Digital Object Identifier (DOI) of the publication you are looking for before you go to Sci-Hub. The DOI is a standard, persistent identifier of a scientific paper or other digital object and, if you have it for the paper you are seeking, you can simply enter it into the search box on the Sci-Hub home page.

Many publishers and organizations assign DOIs to their material and you can often find them in databases like PubMed or on the Web sites of the publisher, like the ACM Digital Library.
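
For example, Crossref runs a free public REST API (api.crossref.org) that can look a DOI up by title. The sketch below only parses a trimmed, hypothetical response of that API's documented JSON shape; the sample DOI is invented for illustration, not the real identifier of any paper:

```python
import json

# A live Crossref query would look like:
#   https://api.crossref.org/works?query.bibliographic=<title>&rows=1
# Here we parse a trimmed sample of the documented response shape;
# the DOI below is a made-up placeholder.

def first_doi(crossref_response):
    """Return the DOI of the top hit in a Crossref /works response."""
    items = crossref_response.get("message", {}).get("items", [])
    return items[0]["DOI"] if items else None

sample = json.loads("""
{"message": {"items": [
  {"DOI": "10.1016/j.example.2015.01.001",
   "title": ["A technological overview of the community network"]}
]}}
""")

print(first_doi(sample))   # the string you would paste into Sci-Hub's search box
```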

Sci-Hub openly violates copyright law, is slow and clumsy to use and the access is inconsistent, but what alternative does a researcher in a developing nation or at a relatively poor university or other organization in a developed nation have? There are a number of open scientific publication sites, and my guess is that they will prevail in the long run, leading publishers like ACM and Elsevier to change their business models. But that is just a guess.

One can also imagine a world in which copyright law makes fair use exceptions for scientific research, as opposed to entertainment. I don't feel guilty about pirating a scientific paper, but am happy to pay to see Star Wars.

Finally, after visiting this site, one cannot help thinking of the case of Aaron Swartz, who committed suicide as a result of prosecution for his attempt to free scientific literature.

Tuesday, January 19, 2016

Two cool podcast interviews on bit rot, an unsolved problem

We all need to be aware of the problem of bit rot in our work.

One of my favorite podcasts, OnTheMedia, produced a program called Digital Dark Age (52:14) last year. It consists of several segments on the problems of protecting and archiving the vast amounts of data we are generating. If you don't have time to listen to the entire podcast, at least check out two segments, interviews of Vint Cerf on information preservation problems (6:28) and Nick Goldman on using DNA as a storage medium (9:02).

Let's start with the Goldman interview. He notes that applications like video entertainment and scientific research are generating immense amounts of data, which is already overwhelming today's optical and magnetic storage media. Goldman is experimenting with DNA as a very high capacity, long lived data storage medium. How might that work?

A strand of DNA is made up of strings of four bases, abbreviated A, T, C and G, as shown here:

The DNA strand is like a twisted ladder where
the "rungs" are either A-T or C-G bonds. (For
more, see this animation).

Biologists have developed equipment for sequencing (reading) the list of bases making up a strand of DNA and for synthesizing arbitrary strands of DNA. That makes it possible to store a copy of a binary file in a strand of synthesized DNA, for example by synthesizing strands of DNA in which binary 0s are represented by an A or C base and 1s are represented by a T or G. A DNA sequencer could then convert those A, T, C and Gs back into 1s and 0s.
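
The mapping described above can be sketched in a few lines. This is an illustrative toy scheme, not the one Goldman's group actually uses; a real storage scheme would add error correction and avoid problematic base runs:

```python
# Toy illustration of the bit-to-base mapping described above:
# 0 -> A or C, 1 -> T or G.  Alternating between the two choices
# for each bit avoids long runs of a single base.

def bits_to_dna(bits):
    """Encode a string of '0'/'1' characters as a DNA base string."""
    out = []
    for i, b in enumerate(bits):
        choices = "AC" if b == "0" else "TG"
        out.append(choices[i % 2])
    return "".join(out)

def dna_to_bits(strand):
    """Decode a base string back into '0'/'1' characters."""
    return "".join("0" if base in "AC" else "1" for base in strand)

bits = "01101000"              # one example byte
strand = bits_to_dna(bits)     # a synthesizable base sequence
assert dna_to_bits(strand) == bits
print(strand)                  # AGTCTCAC
```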

Goldman thinks we will see expensive DNA storage devices in three or four years and that they will be cheap enough for consumer storage in 10-15 years. DNA stored in cool, dry places will last hundreds of thousands of years, and "all the digital information in the whole world -- everything that's connected to the Internet" will fit in "the back of a minivan." If Goldman falters, there are other DNA storage projects at Harvard (article, video) and Microsoft.

But even if DNA or some other storage technology gives us dense, cheap storage, there are other problems, as outlined by Vint Cerf.

Cerf talks about "bit rot." The simplest type of bit rot is media deterioration -- media becoming unreadable after 20 or 30 years. That can be overcome by creating a new copy periodically, but that is not enough. Let me give you a personal example.

In 2004 my wife had a small hole in the atrial wall of her heart repaired. In a 30-minute, outpatient procedure, a skilled surgeon installed a small device in which tiny umbrellas were clamped over the hole, held in place by a spring.

A balloon determines the hole diameter (left); the device in place (right)

That was pretty amazing, so I asked for a video of the procedure, which was provided on an optical CD. The CD also included a program for viewing the video, the Ecompass CD Viewer. The CD media might be somewhat deteriorated by now, but I have transferred the program and data to magnetic storage, so I can still view it on my laptop.

But, my laptop is running Windows 7. The version of Ecompass CD Viewer I have was written in the Windows XP days. It still works, but will it be compatible with Windows 10 or later? If not, I could upgrade to a current version, but, as far as Google Search knows, Ecompass CD Viewer is no longer with us -- perhaps the company went bankrupt or dropped the program.

Ecompass CD Viewer works by stringing together clips stored in a video file format called SSM. If I could find an SSM viewer, maybe I could cobble together the entire video, since I have the SSM clips. That might work for these static files, but for something a user can interact with, like a spreadsheet, we would need the functionality of the original program. Looking further into the future, Windows will disappear, along with Intel-processor machines.

Cerf does not see a complete solution to the bit rot problem, but he pointed to Project OLIVE at Carnegie Mellon University as a significant step in the right direction. OLIVE emulates old computers running old software on virtual machines. Below, you see an image from a demonstration of an emulated 1991 Macintosh running System 7 and HyperCard. The display, mouse and keyboard of the Dell PC are interacting with the emulated Macintosh, which is running on an Internet server.

Still image from a Project OLIVE demo video.

The OLIVE demonstration is impressive, but it is a research prototype that assumes standard input/output devices and capturing the vast number of programs and hardware configurations that exist today would require a massive effort. Similarly, DNA storage is at the early proof-of-concept stage. Both feel like longshots to me, so, for now, we all need to be aware of the problem of bit rot in our work.

Monday, December 21, 2015

Yahoo relies on annoying ads and specialized Web sites

Will Internet news go the way of newspapers?

December 17 was the anniversary of President Obama's call for change in our Cuba policy. That milestone sparked a lot of coverage in the mainstream press, which I discussed in a post on my blog on the Cuban Internet.

The most extensive Cuba coverage I saw was a week-long series of posts on Yahoo -- U. S. and Cuba, One Year Later. The series has many well written posts on various aspects of Cuban culture and the political situation. Most are human interest stories on tourism, fashion, baseball, etc., but several were Internet-related.

The posts are not detailed or technical, but they are well written for a general audience -- like newspaper readers. (Remember newspapers?)

That is the good news.

The bad news is that the posts are overrun by annoying ads and auto-play videos. This is illustrated by the series "home page," shown below.

The series table of contents -- can you spot the ads?

Most of the elements in this array of phone-sized "cards" consist of an image from and link to a story on Cuba, but, if you look carefully, you will see that several of them link to sneaky ads. It's like Where's Waldo -- can you spot the ads?

In an earlier post, I suggested that advertising-based, algorithm-driven Internet news might be increasingly redundant and concentrated in high-volume sites. The advertising revenue is used to pay talented writers, photographers and videographers who are capable of producing timely news coverage of a story like this one.

But, people are fed up with those ads and increasingly deploying ad blockers.

Reasons people turn to ad blockers

People are circulating manifestos and Google and Facebook are proposing standards to improve the advertising and speed of the Web, but will those moves cut Yahoo's revenue?

Furthermore, mainstream media like Yahoo rely to some extent on specialized, "long tail" sources, like my Cuban Internet blog. Yahoo interviewed me (and many others) and took material from some of my posts in preparing their coverage. But, will specialized blogs and sites continue to exist? Losing focused sources would increase the cost of mainstream media stories.

I really liked Yahoo's coverage of Cuba one year later, but I wonder if they will be around for Cuba five years later.

Update 12/27/2015

ASUS will include the Adblock Plus ad blocker in its proprietary browser. Apple is now allowing ad blockers on iOS devices. Will Firefox be next? Microsoft Edge?

Thursday, December 10, 2015

Google Fiber considering Los Angeles and Chicago

Will Google free me from the evil clutches of the dreaded Time Warner Cable?

Google's first foray into municipal networking was connecting 12 square miles of Mountain View, California in 2007. In 2010 they issued a call for proposals from cities wishing to participate in an "experiment" called Google Fiber, which would offer symmetric, 1 Gbps connectivity to customers. In 2012, Kansas City was selected as the first Google Fiber city.

But, was it an experiment? An attempt to goad ISPs to upgrade their networks? The start of a new Google business? In 2013, Milo Medin, who was heading the Google Fiber project, said that they intended to make money from Google Fiber and that it was a "great business to be in."

Today, Google Fiber is operating in three cities and they are committed to installing it in six others. Eleven cities, including Los Angeles and Chicago, have been invited to apply.

Google is considering big cities Los Angeles and Chicago.

Los Angeles and Chicago were just added to the list and it is significant that they are the first very large cities -- both in population and area -- on the list.

Since the initial installation in Kansas City, Google has codified the city-selection process in an informative checklist document. Google knows it is offering a service that will benefit the city in many ways, so the checklist is essentially an application guide in which the city must offer access to poles and tunnels, 2,000-square-foot parcels for equipment "huts," fast-track permitting, etc.

I expect that Google also has its eye on the Los Angeles tech-startup community and entertainment industries. While Google Fiber does not seem to be a mere "experiment," it will doubtless enable and uncover new applications that capitalize upon gigabit connectivity (and increase Google ad revenue).

Rollout order within a selected city is governed by the willingness of residents of a neighborhood to sign up for the service. High demand areas get high priority. But, this can exacerbate the digital divide within the city -- serving wealthy areas before poor areas. Google encountered this problem in Kansas City. As shown below, wealthy neighborhoods (green) committed before the poorer areas, so Google initiated programs to reach out to them.

Wealthy KC neighborhoods committed early.

Based on that experience, they now consider inclusion plans in the application process and hire city-impact managers for fiber cities. They also offer very low-cost copper connections for those who cannot afford fiber.

I am not familiar with the situation in Chicago, but Los Angeles has been pursuing fiber connectivity for some time. The city issued a request for proposals for city-wide fiber two years ago, and last year CityLinkLA was formed with the goal of providing "basic access to all for free or at a very low cost and gigabit (1 Gbps) or higher speed access at competitive rates." The effort has been led by Los Angeles Mayor Eric Garcetti and Councilman Bob Blumenfield and they are working with both Google and AT&T toward that goal.

I assume that AT&T will upgrade the copper running from fiber nodes to individual premises with technologies like VDSL2 or G.fast (DOCSIS 3.1 is the comparable upgrade on cable plant) to achieve faster speeds, but they only serve a portion of Los Angeles. Other areas may have to wait for Google. It seems that Verizon gave up on its fiber offering, FiOS, some time ago.

Now for the belated full-disclosure. I live in Los Angeles, and am hoping that competition between Google or AT&T or someone will one day free me from the evil clutches of my current monopoly broadband service provider, the dreaded Time Warner Cable.
