Permit-less Parking Technology

July 2, 2014

While my primary enthusiasm is how technology can be effectively applied to teaching and learning, occasionally I run across an administrative problem/solution that sparks my interest. What could be more seemingly mundane, yet of interest to almost everyone on campus, than … parking!

I recently saw on the EDUCAUSE CIO Listserv a request for information on “permit-less” parking solutions. Given that Microsoft adopted such a solution about a year ago, I thought I would find out what we use and how it works. After all, we do consider our corporate headquarters here in Redmond, Washington to be our “campus,” with over 100 separate buildings (over 700 globally) containing over 60,000 parking spaces for our Puget Sound area employees. In fact, one of our parking decks has over 8,000 parking spaces and, if lined up end-to-end, would stretch eight-tenths of a mile.

I spent some time with Don Rufo, who is in charge of our Campus Operations, to learn about the solution we use. It turns out we gave up hanging permits on our campus years ago for a variety of reasons. First, they cost Microsoft about $25K every other year to administer. Second, we believed that a Microsoft parking permit on an employee’s car made it a potential target for theft; thieves might assume such a car contained valuables like laptops or Surface computers. Third, hanging permits, or the lack thereof, did not give us an easy way to identify suspicious vehicles.

After about a year of research and a proof of concept, Microsoft purchased IRSA’s License Plate Recognition (LPR) program. Microsoft employees go to a simple website where we register all the cars we might bring to campus. Microsoft has outfitted a number of its security vehicles with IRSA cameras that scan the license plates of vehicles parked in our lots. The system analyzes the plates in real time and compares them to the database of registered employee license plates. We also keep a list of license plates in the database for persons of interest (POI). The results show up on a PC in the security vehicle and indicate whether each plate is registered. If a plate belongs to a POI, the system immediately notifies security personnel in the vehicle and back at our Global Security office.
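At its core, the matching step is a set lookup: normalize the scanned plate string, then check it against the registered-employee list and the POI watch list. The sketch below is my own illustration of that idea, not IRSA’s actual implementation; all names and plates are hypothetical.

```python
# Hypothetical sketch of the core license-plate lookup. Scanned plates are
# normalized (case, spaces, dashes) and checked against a registered set
# and a persons-of-interest (POI) watch list. Data is illustrative only.

def normalize(plate):
    """Strip spaces/dashes and uppercase, so 'abc-123' matches 'ABC 123'."""
    return "".join(ch for ch in plate.upper() if ch.isalnum())

def classify_plate(plate, registered, poi):
    """Return 'poi', 'registered', or 'unregistered' for a scanned plate."""
    p = normalize(plate)
    if p in poi:
        return "poi"           # alert security immediately
    if p in registered:
        return "registered"
    return "unregistered"      # flag for follow-up

registered = {normalize(p) for p in ["ABC 123", "XYZ-987"]}
poi = {normalize(p) for p in ["BAD 001"]}

print(classify_plate("abc-123", registered, poi))  # registered
print(classify_plate("bad001", registered, poi))   # poi
```

Using sets makes each lookup constant-time, which matters when a camera vehicle is streaming plates continuously.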

To date the new permit-less parking program appears to be a great success. Employees like the ease of registration and not having to worry about something hanging from their mirrors.


Educators: To boldly take us where we would not have gone before

May 1, 2014
Jim Ptaszynski, Ph.D., speaks with Sir Patrick Stewart

I had the distinct honor and pleasure tonight of having dinner with, and hearing speak, Sir Patrick Stewart. Most of us remember him from Star Trek, X-Men, or his recent performances on Broadway, but tonight he was speaking after dinner at the British Council’s meeting in Miami Beach. He is also the Chancellor of the University of Huddersfield in the UK.

He shared some very compelling personal stories of how educators immensely impacted his life: first, how at 15 a schoolteacher (Cecil Dormand) introduced him to Shakespeare, which ignited a love of the theater; and second, how the involvement of two college professors helped land him the role of Jean-Luc Picard on Star Trek. Most in the audience nodded in agreement that their lives, too, had been changed by educators (thanks, Drs. Passaro, O’Brien, Morrison, and Harpster). What educators changed your life forever?


Slides from presentation in Kiev, Ukraine

November 6, 2013

Here are the slides I presented today at the Second International Scientific Conference, “European Integration of Higher Education of Ukraine in Context of Bologna Process,” held in Kiev, Ukraine.


Duke Faculty Say No

April 30, 2013

I found this article in today’s Inside Higher Ed, about how Duke faculty “forced the institution to back out of a deal with nine other universities and 2U to create a pool of for-credit online classes for undergraduates,” particularly interesting. Not because it illustrates how some of the elites are starting to pull back from online courses or MOOCs, but rather because of the issues the faculty raise in doing so. The article points out the irony in some of the new models, and indeed it is ironic. I found the two comments I quote below especially entertaining.

I do not think this is a bellwether for the decline of online courses or MOOCs. Rather, I think they will accelerate. But there are two lessons here. First, in the rush to try new models of education, don’t try to outpace the faculty. While they may sometimes be a bottleneck for rapid change, they take their job of institutional governance very seriously, and in turn they should be taken seriously. Second, ask what the constraints and affordances of any new model are. Is there a good “fit” between the technology, pedagogy, and content knowledge?

From the article:

  • Thomas Pfau, a professor of English and German, who spoke publicly against the 2U effort during the faculty meeting, said there were many ironic elements of Duke’s online push. “There we are believing in a brick and mortar framework in our pedagogical mission 8,000 miles away,” he said referring to the new campus in China, “but here where the students are actually in place, we seem to want to encourage them to take classes online – the absurdity of that was noted by a number of faculty.”
  • Another irony opponents seized on: Duke would be granting credit to students who were not admitted to Duke and allowing Duke students to receive credit for online courses from institutions that Duke presumably markets itself as better than.

Read more:

Inside Higher Ed


Higher education faculty board members visit Microsoft TechFest

March 6, 2013

Pictured: Bill McDiarmid, Dean, School of Education, University of North Carolina; Geoffrey Zweig, MSR; David Slykhuis, Associate Professor, James Madison University; Robin Angotti, Associate Professor, University of Washington; Sumit Gulwani, MSR; Ben Zoran, MSR. Missing from picture: Michael Searson, Executive Director, School for Global Education & Innovation, Kean University, and President, Society for Information Technology & Teacher Education.


I attended Microsoft TechFest (it should be named Wow! Fest), where the Microsoft Research labs from around the world come together to share what they have been working on during the past year. This year I was able to invite four of our Higher Education Board faculty members to the special pre-day event. We heard an amazing set of lectures by computer science luminaries such as Rick Rashid, Microsoft Senior Vice President of Microsoft Research.

Below are some of the projects we were able to see.

Each year during TechFest, Microsoft Research displays a collection of cutting-edge research projects that offer new functionalities for Microsoft products and, often, for the greater research ecosystem. Many of those projects are discussed below.

Deep Zoom technology from Microsoft Silverlight enables you to interact with the TechFest project posters. You can zoom in or out, and smoothly load and pan the poster images. In this fashion, you can enjoy an immersive experience almost as satisfying as being on the show floor.

BodyAvatar: Creating 3-D Avatars with Your Body

Bored of having an ordinary-looking avatar? Want to create something unique? A dragon? A lobster? An alien? BodyAvatar is a natural interface that lets Kinect players create 3-D avatars of any shape they can imagine, using their bodies as the input. Based on a first-person, “you’re the avatar” metaphor, the player simply scans his or her body posture into an initial shape for the avatar and then performs various intuitive gestures that change the shape of the avatar on the screen. BodyAvatar unleashes the creativity of everybody, letting people turn their wildest imagination into reality without needing to learn complex 3-D modeling tools. Learn more »


Geo-Database Applications at the Speed of Thought

In 2012, Microsoft formed a unique partnership with the International Union for Conservation of Nature’s Red List of Threatened Species. Central to the partnership is creating the Red List Threat Mapping Tool — a spatial database application that enables experts and decision-makers around the world to find, map, explore, add, modify, and notate the various threats to any focal species. This SQL Server 2012 application enables visitors to query global biodiversity, protected area, and threat databases in real time. New software is being built to make it easy for anyone to construct these kinds of geo-data applications “at the speed of thought,” without having to write a line of code. The software natively understands spatial data and spatial search, introduces a new, iterative search method, and produces databases that remain flexible, so that all aspects of the database and the application can be modified at any time.


Facing Interaction

Have you ever encountered a situation for which it is hard to describe your sensation? Have you ever wanted to transfer your facial and other biometric physics to someone close via tactile, audio, and visual signals? This project reflects on the meaning of interaction and communication, from the perspective of our innate sensing and beyond verbal communication. Considering facial expressions and head poses as meaningful indicators, the project maps them onto a plethora of interrelated aural, tactile, and visual responses. The project also aims to be a platform for study of different sensing techniques for information retrieval and communication. For example, the projection of music beats to vibrations in a person’s joints would be a natural way to aid dancing by hearing-impaired people. Other usages could include mapping eye gaze, laughter, eye blinks, or voice pitch to audio, visual, and vibration to create intimacy with another person. Learn more »


3-D Reconstruction by Portable Devices

Augmented reality is an important technique for improving user experience in many applications, especially in the mobile Internet era, in which smart devices are cheap and popular. This project features augmented-reality scenarios for mobile phones or tablets based on 3-D reconstruction technologies. A typical scenario: assuming that sellers such as Amazon or IKEA make 3-D models of their products with a portable 3-D scanner app, if you want to buy a vase for your desk, you can find candidates by keyword or visual search. When you then photograph your desk with your phone camera, a photo of the vase on the desk is shown on the screen. With this true 3-D vase model, you can walk around to evaluate the effect and see which vase is most desirable. Other scenarios include 3-D facial modeling, social-network sharing, and 3-D printing. Learn more »


High-Quality, Robust Video Stabilization

Obtaining a steady video from hand-held video cameras, mobile phones, and Surface is becoming increasingly necessary for normal users. Achieving high-quality output from existing video editors remains challenging, though. For example, some results still have jitters and undesired, low-frequency motion, too much cropping, or annoying shearing and wobbling. This project demonstrates a new optimization technique, without hardware support, that effectively can suppress these artifacts altogether. Moreover, this technique also can be applied to different devices for different scenarios: a video post-processing editor on the desktop and on Surface, or real-time stabilization on mobile phones for better viewfinders or face-to-face communication.
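The project’s optimization technique is not detailed here, but the basic idea behind any stabilizer, replacing a jittery camera path with a smoother one and then compensating each frame by the difference, can be illustrated with a simple moving-average smoother. This is my own sketch, not the MSR method:

```python
# Illustrative only: smooth a jittery 1-D camera trajectory with a
# moving average. A real stabilizer would then warp each frame by the
# difference between the original and smoothed positions.

def smooth_path(path, radius=2):
    """Moving-average smoothing of per-frame camera positions."""
    out = []
    for i in range(len(path)):
        lo, hi = max(0, i - radius), min(len(path), i + radius + 1)
        window = path[lo:hi]            # clamp the window at the ends
        out.append(sum(window) / len(window))
    return out

jittery = [0.0, 1.0, 0.0, 1.0, 0.0, 1.0]
print(smooth_path(jittery))
```

The smoothed path oscillates far less than the input, which is exactly the jitter suppression the project paragraph describes (the real system additionally handles cropping, shearing, and wobble).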


Teaching Kinect to Read Your Hands

Kinect has brought full-body tracking to your living room, enabling you to control games and apps with your gestures. One promising direction in Kinect’s evolution is hand-gesture recognition. By capturing a large, varied set of images of people’s hands, the project uses machine learning to train Kinect to determine reliably whether your hand is open or closed. A handgrip detector, the gestural equivalent of the mouse click, then can be built. This detector will be included in a forthcoming release of the Kinect for Windows SDK and should open a new wave of natural-user-interaction applications. Learn more »


Adaptive Machine Learning for Real-Time Streaming

Big data usually refers to the volume of data to process, but in a real-time environment, velocity is equally important. Direct processing of real-time data enables quicker reaction to events, providing a competitive advantage over processing offline data. The software-and-services industry is embracing machine learning to make its offerings more intelligent. This project combines technology for efficient temporal stream processing with support for machine learning. The project shows how to compose temporal processing and Infer.NET machine learning into a reasoning flow running in StreamInsight and how to provide incremental online updates of the machine-learning model at runtime. Also featured is how to go between online stream processing and offline data analysis, as well as how to operationalize an offline, validated reasoning flow in a production system. This work adds value to concrete customer scenarios in the manufacturing and cloud/IT services domains.
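The Infer.NET/StreamInsight composition itself is not shown here; as a minimal stand-in for the idea of incremental online updates at runtime, the sketch below maintains a running-mean “model” that folds in one streamed event at a time without ever re-scanning history (my illustration, not the project’s code):

```python
# Minimal stand-in for an online-updatable model: a running mean that
# incorporates each streamed value in O(1), Welford-style, rather than
# recomputing over the full history.

class OnlineMean:
    def __init__(self):
        self.n = 0
        self.mean = 0.0

    def update(self, x):
        """Fold one streamed value into the model incrementally."""
        self.n += 1
        self.mean += (x - self.mean) / self.n
        return self.mean

model = OnlineMean()
for reading in [10.0, 12.0, 11.0, 13.0]:
    model.update(reading)
print(model.mean)  # 11.5
```

The same incremental-update pattern is what lets a streaming system keep its model current as events arrive, instead of alternating between serving and retraining.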


Augmenting Textbooks with Educational Videos

Textbooks are acknowledged as the educational input most consistently associated with gains in student learning. They are the primary conduits for delivering content knowledge to students, and teachers base lesson plans primarily on the material in textbooks. This project features a data-mining-based approach for enhancing the quality of textbooks. The approach includes a diagnostic tool for authors and educators to identify algorithmically any deficiencies in textbooks. Techniques are provided for algorithmically augmenting sections of a book with links to selective web content. The focus is on augmenting textbook sections with links to relevant videos, mined from an abundant collection of free, high-quality educational videos available on the web. These techniques have been validated over a corpus of high school textbooks spanning various subjects and grades. Learn more »


Automated Problem Generation for Education

Intelligent Tutoring Systems (ITS) can enhance significantly the educational experience, both in the classroom and online. Problem generation, an important component of ITS, can help avoid copyright or plagiarism issues and help generate personalized workflows. This capability, for a variety of subject domains, can be demonstrated with user-interaction models:

  • Algebraic-proof problems: Given an example problem, the tool generates similar problems.
  • SAT sentence-completion problems: Given a vocabulary word w, the tool generates a sentence completion whose correct answer is w, along with a few incorrect alternates.
  • Logic-proof problems: Given an input problem, the tool generates variants. Given parameters such as number or size of variables or clauses, the tool generates fresh problems.
  • Board-game problems: Given rules of a board game—such as 4×4 tic-tac-toe with only row/column sequences—and hardness level, the tool generates starting configurations that require few steps to win.

Learn more »
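As a toy illustration of the “fresh problems from parameters” idea in the bullets above (my own sketch, far simpler than the MSR tools), a template-based generator can pick the integer answer first and then emit a problem guaranteed to be consistent with it:

```python
# Toy template-based problem generation: emit fresh variants of a linear
# equation a*x + b = c whose solution x is always an integer. Names and
# ranges are invented for illustration.

import random

def generate_linear_problem(rng):
    """Produce (problem_text, answer) for a*x + b = c."""
    a = rng.randint(2, 9)
    x = rng.randint(1, 10)     # choose the answer first...
    b = rng.randint(1, 20)
    c = a * x + b              # ...so the problem is consistent by construction
    return f"Solve {a}x + {b} = {c}", x

rng = random.Random(0)
for _ in range(3):
    text, answer = generate_linear_problem(rng)
    print(text, "-> x =", answer)
```

Choosing the answer before the problem is a common trick in problem generation: it guarantees well-formed variants without needing a solver to check each one.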

VidWiki: Crowd-Enhanced, Online Educational Videos

Recent efforts by organizations such as Coursera, edX, Udacity, and Khan Academy have produced thousands of educational videos logging hundreds of millions of views in attempting to make learning freely available to the masses. While the presentation style of the videos varies by the author, they all share a common drawback: Videos are time-consuming to produce and are difficult to modify after release. VidWiki is an online platform to take advantage of the massive numbers of online students viewing videos to improve video-presentation quality and content iteratively, similar to other crowdsourced information projects such as Wikipedia. Through the platform, users annotate videos by overlaying content atop a video, lifting the burden on the instructor to update and refine content. Layering annotations also assists in video indexing, language translation, and the replacement of illegible handwriting or drawings with more readable, typed content. Learn more »


Predictive Decision-Making at the Speed of Thought

Since 2007, the Computational Ecology and Environmental Science (CEES) group at Microsoft Research Cambridge has been pursuing the fundamental research needed to build predictive models of critical global environmental systems. Such predictions are needed urgently at a variety of scales—and to support effective decision-making, they must include uncertainty. In recent years, the philosophy of how to make such predictions has become clear: A “defensible modeling pipeline” is needed in which data and models are integrated in a Bayesian context and which is transparent and repeatable enough to stand up in court. The technology, though, is lagging far behind, making this pipeline impossible to build for all but the most technically savvy. Enter CEES Distribution Modeler, a browser app that enables users to visualize data, define a complex model, parameterize it using Bayesian methods, make predictions with uncertainty, and then share all that in a fully transparent and repeatable form.


Productivity Tools to Discover and Analyze Data

Information workers (IWs) need to gather structured data from various sources, combine that with their own data, analyze the data, and make business decisions based on it. Discovering and importing the data into Excel is tedious and cumbersome, and data analysis is either time-consuming or requires programming skills. This project presents tools for non-expert Excel users to discover and analyze data quickly and easily. For data discovery, it offers technology that extracts structured data from the web, indexes it, and enables IWs to search over it. IWs can perform the searches directly from Excel, easily import the data into a spreadsheet, and combine it with their own data. For data analysis, this project presents a set of machine-learning tools seamlessly integrated into Excel. The technology can automatically infer the values of missing cells, detect outliers, and enable users to analyze data tables more productively.
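The actual Excel integration relies on machine learning; as a minimal, hypothetical stand-in for two of the advertised features, filling missing cells and flagging outliers, the sketch below uses column means and z-scores (thresholds invented for illustration):

```python
# Hypothetical stand-in for "infer missing cells" and "detect outliers":
# fill None cells with the column mean, then flag values whose z-score
# (relative to the known values) exceeds a threshold.

def impute_and_flag(column, z_thresh=1.5):
    """Return (filled_column, outlier_indices)."""
    known = [v for v in column if v is not None]
    mean = sum(known) / len(known)
    var = sum((v - mean) ** 2 for v in known) / len(known)
    std = var ** 0.5
    filled = [mean if v is None else v for v in column]
    outliers = [i for i, v in enumerate(filled)
                if std > 0 and abs(v - mean) / std > z_thresh]
    return filled, outliers

col = [10, 12, None, 11, 95, 10]
filled, outliers = impute_and_flag(col)
print(filled)
print(outliers)
```

Mean imputation and z-scores are the simplest possible versions of these features; the project’s models would condition on other columns rather than treating each column independently.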


Telling Stories with Data via Freeform Sketching

This project uses and extends the narrative storytelling attributes of whiteboard animation with interactive information-visualization techniques to create a new, engaging form of storytelling with data. SketchInsight is an interactive whiteboard system for storytelling with data through real-time sketching. It facilitates the creation of personalized, expressive data charts quickly and easily. The presenter sketches an example icon, and SketchInsight automatically completes the chart by synthesizing from example sketches. Furthermore, SketchInsight enables the presenter to interact with the data charts. Learn more »


Real-Time, 3-D Scene Capture and Reconstruction

This project is a novel method for real-time, 3-D scene capture and reconstruction. Using several live color and depth images, this technology builds a high-resolution voxelization of visible surfaces. Unlike previous methods, this effort captures dynamic scene geometry, such as people moving and talking. The key to the approach is an efficient, sparse voxel representation ideally suited to GPU acceleration. Rather than allocating voxel memory as a 3-D array corresponding to the entire volume in a space, the project stores only those voxels that contain the visible surfaces, leading to a much more compact representation for the same voxel resolution. As a result, the project captures and processes ultra-high-resolution voxelizations from fused image data, utilizing depth, silhouette, and color cues consistently.
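The hash-map flavor of that sparse representation can be sketched in a few lines (my illustration; the real system is GPU-accelerated and far more elaborate): store only occupied voxels, keyed by their integer coordinates.

```python
# Sketch of the sparse-voxel idea: instead of a dense X*Y*Z array, keep
# only occupied voxels in a dict keyed by integer coordinates. Surface
# captures touch a tiny fraction of the full volume.

class SparseVoxelGrid:
    def __init__(self, resolution):
        self.resolution = resolution
        self.voxels = {}                 # (x, y, z) -> payload (e.g. color)

    def set(self, x, y, z, value):
        self.voxels[(x, y, z)] = value

    def get(self, x, y, z, default=None):
        return self.voxels.get((x, y, z), default)

    def occupancy(self):
        """Fraction of the full dense volume actually stored."""
        return len(self.voxels) / self.resolution ** 3

grid = SparseVoxelGrid(resolution=512)
grid.set(10, 20, 30, (255, 0, 0))        # one red surface voxel
print(grid.get(10, 20, 30))
print(f"occupancy: {grid.occupancy():.2e}")
```

The memory savings are what make "ultra-high-resolution" feasible: a dense 512³ grid has ~134 million cells, while a surface typically occupies a small fraction of them.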


ViralSearch: Identifying & Visualizing Viral Content

Though the phrase “going viral” has permeated popular culture, the concept of virality itself is surprisingly elusive, with past work failing to define rigorously or even definitively show the existence of viral content. By examining nearly a billion information cascades on Twitter—involving the diffusion of news, videos, and photos—this project has developed a quantitative notion of virality for social media and, in turn, identified thousands of viral events. ViralSearch lets users interactively explore the diffusion structure of popular content. After selecting a story, users can view a time-lapse video of how the story spread from one user to the next, identify which users were particularly influential in the process, and examine the chain of tweets along any path in the diffusion cascade. The science and technology behind ViralSearch can help identify topical experts, detect trending topics, and provide virality metrics for a variety of content.
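One published way to make virality quantitative is “structural virality”: the mean shortest-path distance between all pairs of nodes in the diffusion tree, so that a one-to-many broadcast scores low while a long person-to-person chain scores high. The sketch below computes it by breadth-first search; it is my illustration of the concept, not necessarily the exact ViralSearch metric.

```python
# Structural virality of a diffusion tree: mean shortest-path distance
# over all ordered node pairs, computed by BFS from each node. A star
# (broadcast) scores low; a chain (person-to-person) scores high.

from collections import deque

def distances_from(adj, src):
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def structural_virality(edges):
    adj = {}
    for u, v in edges:
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)
    nodes = list(adj)
    total = pairs = 0
    for s in nodes:
        d = distances_from(adj, s)
        total += sum(d.values())
        pairs += len(nodes) - 1
    return total / pairs

star = [(0, i) for i in range(1, 6)]      # one user broadcasts to five
chain = [(i, i + 1) for i in range(5)]    # five person-to-person hops
print(structural_virality(star), structural_virality(chain))
```

Both cascades reach the same six users, yet the chain scores higher, which is exactly the distinction between broadcast reach and genuinely viral spread.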


Enabling Real-Time Business-Metadata Extraction

Today, mobile users decide which business to visit next based only on distance information, stale business reviews, and old ratings. But because users need to decide what to do next, real-time information about the business — such as the current occupancy level, the music level, and the type or exact music playing — can be invaluable. This project proposes to crowdsource real-time business metadata through real-user check-in events. Every time a user checks into a business, this project uses the phone’s microphone and advanced signal processing to infer the occupancy level, the exact song playing, and the music and noise levels in the business. The extracted metadata either can be shown in the search results as business info or can be indexed to enable a new generation of queries, such as “crowded bars playing hip-hop music.” Using real business audio traces recorded on multiple devices, the project achieves accuracy of better than 80 percent in inferring real-time business metadata.
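As a crude, hypothetical stand-in for one slice of that pipeline, a noise-level estimate can be derived from raw audio samples via root-mean-square amplitude and mapped to a coarse label (the thresholds here are invented for illustration; the project’s signal processing is far more advanced):

```python
# Illustrative noise-level estimate from raw audio samples: compute the
# root-mean-square (RMS) amplitude and bucket it into a coarse label.
# Thresholds are invented; real systems would calibrate per device.

import math

def rms(samples):
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def noise_label(samples, quiet=0.05, loud=0.3):
    level = rms(samples)
    if level < quiet:
        return "quiet"
    return "loud" if level > loud else "moderate"

quiet_clip = [0.01, -0.02, 0.015, -0.01]
loud_clip = [0.6, -0.5, 0.7, -0.65]
print(noise_label(quiet_clip), noise_label(loud_clip))
```

A label like this is the kind of metadata that could be indexed to answer queries such as “crowded bars playing hip-hop music,” though song identification and occupancy inference require much richer features than RMS.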


Making Smooth Topical Connections on Touch Devices

A strategy is proposed for mining, browsing, and searching through documents consisting of text, images, and other modalities: A collection of documents is represented as a grid of keywords with varying font sizes that indicate the words’ weights. The grid is based on the counting-grid model so that each document matches in its word usage the word-weight distribution in some window in the grid. This strategy leads to denser packing and higher relatedness of nearby documents—documents that map to overlapping windows literally share the words found in the overlap. Smooth thematic shifts become evident in the grid, providing connections among distant topics and guiding the user’s attention in search for the spot of interest. Images and other modalities are embedded into the grid, too, providing a multimodal surface for interactive, touch-based browsing and search for documents. An example can be found in browsers of four months of CNN news, cooking recipes, and scientific papers.



SandDance

The natural user interface meets big data meets visualization: SandDance is a web-based visualization system that exploits 3-D hardware acceleration to explore the relationships between hundreds of thousands of items. Arbitrary data tables can be loaded, and results can be filtered using facets and displayed using a variety of layouts. Natural-user-interaction techniques, including multitouch and gesture interactions, are supported. Learn more »


Kinect Fusion

Kinect Fusion enables high-quality scanning and reconstruction of 3-D models using just a handheld Kinect for Windows sensor. The implementation leverages C++ Accelerated Massive Parallelism, enabling support for a variety of graphics hardware. Simple samples are demonstrated to get developers up to speed with 3-D scanning.


Toward Large-Display Experiences

We are heading toward a “society of appliances,” in which every connected device can play to its strengths and complement the others. At the same time, large displays are becoming ubiquitous; soon everyone potentially could have a large office display. This project addresses two important scenarios in the context of an augmented office: 1) when the user is close to the large display, a new user experience designed for large displays, with commands appearing directly next to the finger in combination with a pen; and 2) when the user is far from the large display, a model in which the phone serves as a proxy for the large display, whether as a remote mouse or keyboard for digital inclusion, as an extension of the current experience (such as a palette for a painting application), or as a device to initiate document sharing on the large display.


Actuated 3-D Display with Haptic Feedback

This project features a device that enables the natural visual and haptic exploration of a 3-D data set. It is the start of an investigative research tool that will enable the exploration of various natural touch interactions in 3-D with both visual and haptic feedback. A table-top system enables the user to explore a 3-D data set in X, Y, and Z with natural touch interactions. The X and Y interactions come via X and Y touch interaction on the screen, visually scrolling in X and Y through the data set. As the user naturally explores in depth, a gentle push on the touch screen physically moves the screen in Z with appropriate video rendering at the appropriate XY cutting plane. At appropriate Z positions, haptic detents and other Z-axis force feedback will be rendered as the user explores along the Z axis.


Is Technology Leading the Change?

March 3, 2013

First survey:
Second survey:


Making Smart Decisions about Technology

March 3, 2013

A presentation at the American Council on Education Annual Meeting

I just finished a presentation at the American Council on Education’s New Presidents Institute at the ACE 95th Annual Meeting. The topic was “Making Smart Decisions about Technology.” Diana Oblinger, President and CEO of EDUCAUSE, was the moderator, and the panel included Craig Chanoff, Senior Vice President and General Manager, Blackboard Student Services, Blackboard, Inc.; David Clinefelter, Chief Academic Officer, The Learning House, Inc.; and myself.

Results from ACE’s 2012 American College Presidency Study show that “Technology Planning” is one of the top three areas that Presidents report being the least prepared to handle. Colleges and universities use IT to create strategic advantage in learning, student support, and research as well as to leverage technology to increase efficiency and effectiveness. Our session brought together experts with experience using technology to support a campus’ strategic directions.

Most of my remarks focused on using technology as a strategic asset. Too often this is equated simply with providing lots of technology on campus. In that sense, many campuses already have lots of technology: they have wired their classrooms, provided access in their residence halls, and even given many of their green spaces access. But few, outside of their online courses, are using it strategically. Compare this to the Massive Open Online Courses (MOOCs), which must use technology strategically to have an offering at all (see Kenneth Green’s article in AGE or my previous post on MOOCs).

The main thrust of my remarks was that, given the increasingly strategic importance of technology, and given that anything strategic on campus must pass through the president, presidents must become more involved in technology decisions on their campuses. Specifically, they need to use their bully pulpit to get their constituents, especially their faculty, to use technology beyond simple automation; the focus must be on strategic and holistic change. They also must support the organizational change and organizational development that will get people to use technology differently.

I also talked about Microsoft’s Teacher Education Initiative (TEI). The current scope of TEI is helping faculty in schools of education increase their appropriate use of technology in the instruction of pre-service teachers, but it will very soon become a resource across the entire higher education curriculum. It helps faculty understand how to deepen the integration of technology into their courses through the TPACK framework, 21st Century Learning Design (21CLD), and an inquiry-based, hands-on approach to learning in a day-long workshop format.

The presidents voiced the need for more faculty assistance in using technology beyond simple instruction in PowerPoint or the institution’s LMS. This was consistent with Dr. Kenneth Green’s 2012 Campus Computing Report, which indicated that this is a top priority among the CIOs surveyed.

Stay tuned for more presentations tomorrow.


(Used with permission)


MOOCs: The New Internet Bubble in Education?

November 13, 2012

I started this post before attending EDUCAUSE and put it aside, thinking that my perspective might change. It didn’t. It still appears to me that there are three major themes driving universities’ interest in Massive Open Online Courses, or MOOCs. First, for some universities it is a marketing or public relations activity. Huge numbers of students, most of whom have neither the means nor the traditional qualifications to attend these universities, now can be taught by some of the “greatest professors in the world” and, best of all, for “free” (free as in someone else is paying for them).

The second reason is the allure of incremental revenue.  With the continuing economic recession and state and federal cutbacks, institutions are looking for new ways to add revenue.   Physical plant (e.g., classroom space) or human resource limitations (e.g., faculty and staff) may prevent simply increasing enrollments.

Third, MOOCs may represent a new Internet gold rush where institutions want to carve out some space on the web before other universities do even though the financial model has yet to be worked out.

So a few institutions have the resources (i.e., extra cash for experimental projects) to test the concept themselves, while most seem to be using third-party services that do all the work of making the institution’s IP widely available. Of course, these providers also take the majority of the profits, which in many cases is north of 80%.

There seem to be two big questions on the horizon. First, if MOOCs do go mainstream and become a viable model for institutions, when do they figure out that they do not need the third-party middlemen? That is, why not take the work back inside the university and reverse the revenue allocation in their favor (so the institutions keep the 80%)?

The second question is: while the current technology infrastructure is open source, when do institutions simply utilize their existing campus LMS infrastructure? Granted, the LMS providers will need to add some features and capabilities to their systems, but surely this will be more efficient than developing what appears to be an alternative LMS platform. Case in point: after about a decade, Sakai, the open-source LMS, seems to be imploding, with most of the original university funding partners now walking away from the project.

While MOOCs show some promise for providing greater access to learning, they have not yet figured out how to provide access to “education” with all the associated credentials. While some might scoff at this and say that alternatives like badges are all that will be required in the future, that has yet to happen, and I would argue that most employers will be slow to jettison the traditional degree. But to provide certificates and eventually actual credits, someone, most likely the students themselves, will need to pay. At that point, it may be difficult to differentiate MOOCs from traditional university online programs.
