From Touch to Call: Tracing the Path of a Touch Gesture

What actually happens when I touch my touchscreen?

This article explores everything involved in tracking a touch, from the physics of capacitive sensing to the final action on the screen. We describe how a finger is detected and how its position is determined, then follow the touch through the phone's software stack to see how it reaches the proper application. Along the way, gestures such as pinch and zoom are demystified.

How is touch detected?

Almost all smartphone touchscreens react to the capacitance of your finger. The touchscreen contains an array of sensors that detect the change in capacitance your finger causes. When your finger touches the screen, it affects both the self-capacitance of each sensor and the mutual capacitance between sensors. Most smartphones use mutual-capacitance sensing rather than self-capacitance sensing. Because mutual capacitance is the interaction between a given pair of sensors, it can be used to collect information about every intersection on the screen (X * Y points). Self capacitance can only measure the response of each individual sensor line, not each intersection (X + Y samples).
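
For example, a hypothetical panel with 16 column sensors and 10 row sensors offers 16 * 10 = 160 distinct mutual-capacitance measurement nodes, but only 16 + 10 = 26 self-capacitance measurements. That extra spatial information is what lets mutual-capacitance screens track several fingers at once without ambiguity.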


Figure 1: Mutual capacitance fundamentals.

The capacitive sensor stack contains several layers: a top layer of glass or plastic, followed by an optically clear adhesive (OCA) layer, then the touch sensor, then the LCD. The touch sensor is a grid of sensor cells, each typically about 5 mm x 5 mm. These sensors are built from indium tin oxide (ITO), which has an unusual combination of properties that makes it a great material for touchscreen construction: it is over 90 percent transparent, yet it is also conductive. Some designs use a diamond pattern, which is visually pleasing because it doesn't align with the LCD's pixel pattern. Others use a simpler "bars and stripes" pattern. If you examine your device at the right angle under good lighting with the LCD turned off, you may be able to see the ITO sensor lines.

Sensing mutual capacitance is fundamentally different from sensing self capacitance. To sense self capacitance, we typically measure the time constant of an RC circuit that contains the sensor. Sensing mutual capacitance means measuring the interaction between an X and a Y sensor: a signal is driven on each X line in turn, and each Y line is sensed to detect the level of coupling between the two. Interestingly, a finger touch decreases the mutual-capacitance coupling, while the same touch increases the self-capacitance value.


Figure 2: Mutual capacitance sensing response.
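
To make the self-capacitance approach concrete, the sketch below shows one common measurement scheme: charge the sensor through a known resistor and count timer ticks until the line crosses a logic threshold, so a larger count means a larger capacitance (and therefore a finger). The helper functions (sensor_discharge, sensor_begin_charge, sensor_above_threshold, timer_now) are hypothetical placeholders for whatever the controller firmware provides, not a real vendor API; real firmware would also add a timeout.

#include <stdint.h>
#include <stdbool.h>

/* Hypothetical hardware-abstraction hooks. */
extern void     sensor_discharge(int sensor);       /* drive the sensor line low      */
extern void     sensor_begin_charge(int sensor);    /* charge through a known resistor */
extern bool     sensor_above_threshold(int sensor); /* comparator / GPIO read          */
extern uint32_t timer_now(void);                    /* free-running tick counter       */

/* Measure the RC charge time of one self-capacitance sensor.  The charge
 * time grows with C, so a finger on the sensor (extra capacitance) yields
 * a larger tick count than the untouched baseline. */
uint32_t self_cap_measure(int sensor)
{
    sensor_discharge(sensor);          /* start from a known, discharged state */
    uint32_t start = timer_now();
    sensor_begin_charge(sensor);       /* begin charging through the fixed R   */
    while (!sensor_above_threshold(sensor))
        ;                              /* wait for the threshold crossing      */
    return timer_now() - start;        /* ticks roughly proportional to R*C    */
}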

In either method, simply measuring the capacitance is not enough. The system must react to changes in capacitance, not to the raw capacitance itself. To do this, it maintains a baseline value for each sensor: a long-term average of the signal that compensates for slow variations caused by temperature changes and other factors. One of the challenges in building a touchscreen system is establishing the proper baseline. For example, the system must be able to start up correctly with a finger already on the screen, and it must also be able to start with water or a palm on the screen.
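
A minimal sketch of baseline tracking, assuming a simple slow-moving average (IIR filter); the grid size and threshold are illustrative numbers, and the start-up and re-baselining logic for the finger-at-power-up and water/palm cases described above is omitted here.

#include <stdint.h>

#define NUM_SENSORS     160   /* e.g., a 16 x 10 grid -- illustrative, not from the article */
#define TOUCH_THRESHOLD  30   /* counts of signal treated as a touch -- illustrative        */

static uint16_t baseline[NUM_SENSORS];   /* real firmware seeds this from the first scan */

/* Return the touch signal for sensor i, and slowly track drift (temperature,
 * humidity, etc.) into the baseline while no touch is present.  For mutual
 * capacitance a touch lowers the raw reading, so signal = baseline - raw. */
int32_t touch_signal(int i, uint16_t raw)
{
    int32_t diff = (int32_t)baseline[i] - (int32_t)raw;

    /* Only let the baseline follow the input when no finger is detected,
     * so a resting finger is not slowly absorbed into the baseline. */
    if (diff < TOUCH_THRESHOLD)
        baseline[i] += ((int32_t)raw - (int32_t)baseline[i]) / 64;  /* slow IIR average */

    return diff;
}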

Once the baseline value is subtracted from the sensed capacitance, we are left with an array of signal values representing the touch, like the one in the figure below:


Figure 3: Determining finger location based on raw capacitance data.

Various methods are used to determine the finger position from this information. One of the simplest is a centroid (center-of-mass) calculation, which is a weighted average of the sensor values in one or two dimensions. Using a 1-D centroid on the values above, the X coordinate is (5*1 + 15*2 + 25*3 + 10*4) / (5 + 15 + 25 + 10) = 150/55 = 2.73, in sensor-column units. We then scale this position to match the LCD resolution. If the ITO sensor pattern extends beyond the sides of the LCD, a translation is applied for that as well.
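
A small C sketch of the 1-D centroid and the scaling step, using the same numbers as the example above; the sensor-column count and LCD width used for scaling are assumptions for illustration, not values from the article.

#include <stdio.h>

/* 1-D centroid: weighted average of sensor signals, in sensor-column units. */
double centroid_1d(const int *signal, int n)
{
    double num = 0.0, den = 0.0;
    for (int i = 0; i < n; i++) {
        num += (double)signal[i] * (i + 1);   /* columns numbered 1..n as in the text */
        den += signal[i];
    }
    return den > 0.0 ? num / den : 0.0;
}

int main(void)
{
    int signal[] = { 5, 15, 25, 10 };          /* values from the article's example */
    double x_sensor = centroid_1d(signal, 4);  /* = 150 / 55 = 2.73 sensor units     */

    /* Assumed scaling numbers: 12 sensor columns spanning a 480-pixel-wide LCD. */
    double lcd_width_px = 480.0, sensor_columns = 12.0;
    double x_pixels = (x_sensor / sensor_columns) * lcd_width_px;

    printf("centroid = %.2f sensor units -> %.1f px\n", x_sensor, x_pixels);
    return 0;
}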

Edges complicate the finger-location problem. Consider the array shown above if the panel ended at one of the columns: as the terms on the left drop off, the simple centroid starts to "pull" to the right. To counter this, we must use special edge-processing techniques that examine the shape of the remaining signal and estimate the portion of the finger that is off the screen; one illustrative approach is sketched below.
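
The exact edge algorithms are vendor-specific. Purely as an illustration of the idea, the sketch below assumes the finger profile is roughly symmetric about its peak, synthesizes the missing off-screen column by mirroring the column on the other side of the peak, and then takes the centroid over the extended profile. This is an assumption about one possible approach, not a description of any particular controller.

/* Illustrative left-edge correction for the 1-D centroid. */
double centroid_left_edge(const int *signal, int n)
{
    double num = 0.0, den = 0.0;

    if (n >= 2 && signal[0] >= signal[1]) {
        /* The peak sits in the edge column, so part of the finger is off-screen.
         * Mirror the neighboring column to position 0: it adds weight to the
         * denominator but nothing to the numerator, pulling the centroid back
         * toward the edge. */
        den += signal[1];
    }
    for (int i = 0; i < n; i++) {
        num += (double)signal[i] * (i + 1);   /* real columns at positions 1..n */
        den += signal[i];
    }
    return den > 0.0 ? num / den : 0.0;
}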

Communication to the host processor

Once a valid touch signal is present and the X/Y coordinates of the touch are known, it's time to get the data to the host CPU for processing. Embedded touchscreen devices communicate over the venerable I2C interface or over SPI. Larger touchscreens typically use USB, since Windows, macOS and Linux all have built-in support for HID (Human Interface Device) class devices over USB.

Although several different interfaces are employed, the OS drivers end up doing similar work for each one. We'll use the Android driver in our example; since Android and MeeGo are both built on the Linux kernel, all three operating systems use similar drivers.

The touchscreen controller's interrupt triggers an interrupt service routine (ISR) in the driver, which schedules a worker thread. No real work is done in the ISR, which keeps interrupt latency low and prevents priority inversions. When the OS runs the worker thread, it starts a communication transaction to read the data from the device and goes to sleep. When the transaction completes, the host driver has the data it needs to proceed; a minimal sketch of this pattern follows.
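
One way to implement this ISR/worker split on Linux is the kernel's threaded-IRQ facility (request_threaded_irq). The sketch below is illustrative: the ts_data structure and the ts_report_touches parser are hypothetical placeholders, not code from any particular driver.

#include <linux/interrupt.h>
#include <linux/i2c.h>
#include <linux/input.h>

/* Hypothetical per-device context -- real drivers carry far more state. */
struct ts_data {
    struct i2c_client *client;
    struct input_dev  *input;
};

static void ts_report_touches(struct ts_data *ts, const char *buf); /* hypothetical parser */

/* Hard ISR: do nothing but hand off to the thread, keeping latency low. */
static irqreturn_t ts_hard_irq(int irq, void *dev_id)
{
    return IRQ_WAKE_THREAD;
}

/* Threaded handler: runs in process context, so it may sleep while the
 * I2C transaction is in flight. */
static irqreturn_t ts_thread_irq(int irq, void *dev_id)
{
    struct ts_data *ts = dev_id;
    char buf[16];

    /* Blocking I2C read of the touch report (sleeps until it completes). */
    if (i2c_master_recv(ts->client, buf, sizeof(buf)) == sizeof(buf))
        ts_report_touches(ts, buf);   /* then report via the input calls below */

    return IRQ_HANDLED;
}

/* Registered during probe(), e.g.:
 *   request_threaded_irq(client->irq, ts_hard_irq, ts_thread_irq,
 *                        IRQF_TRIGGER_FALLING | IRQF_ONESHOT, "ts", ts);
 */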

The host driver translates the proprietary data format used by the device manufacturer into a standard format. In Linux, the driver populates an event’s fields with a series of subroutine calls, then it sends the event with a final call. For example, creating a single-touch Linux input event looks like this:

input_report_abs(ts->input, ABS_X, t->st_x1); // Set X location
input_report_abs(ts->input, ABS_Y, t->st_y1); // Set Y location
input_report_abs(ts->input, ABS_PRESSURE, t->st_z1); // Set Pressure
input_report_key(ts->input, BTN_TOUCH, CY_TCH); // Finger is pressed
input_report_abs(ts->input, ABS_TOOL_WIDTH, t->tool_width); // Set width
input_sync(ts->input); // Send event
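
The calls above report a single touch. For multi-finger gestures such as pinch and zoom, Linux drivers use the kernel's multi-touch (MT) protocol instead. Assuming the driver has called input_mt_init_slots() at setup time, reporting one finger in one slot might look like the sketch below; slot, x, y and z are illustrative variables, not fields from the driver above.

input_mt_slot(ts->input, slot);                              // Select the slot tracking this finger
input_mt_report_slot_state(ts->input, MT_TOOL_FINGER, true); // Mark the slot as holding a finger
input_report_abs(ts->input, ABS_MT_POSITION_X, x);           // Set X location for this contact
input_report_abs(ts->input, ABS_MT_POSITION_Y, y);           // Set Y location for this contact
input_report_abs(ts->input, ABS_MT_PRESSURE, z);             // Set pressure for this contact
input_mt_sync_frame(ts->input);                              // Finish the frame once all slots are reported
input_sync(ts->input);                                       // Send event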

This touch event then goes into the OS. Android saves the event's history in the gesture-processing buffer and passes the event up to the View class. Several touchscreen devices (like the Cypress TrueTouch™ products) support hardware gesture processing, which relieves the host OS of the burden of gesture recognition and, in many cases, eliminates the processing of touch data altogether until a gesture is seen. For example, if you're in your photo viewer, the host doesn't have to process dozens or hundreds of touch packets to see that you want to flick to the next photo. No interrupts take place until you actually flick over to the next photo.


Figure 4: Example of simple gesture processing.

Android's View class determines which application is active where the touch occurs. Each application that appears on the screen has at least one View. The View class handles user input through interfaces such as OnTouchListener, which receives the information from the input driver, along with additional information, packaged in a MotionEvent. If you're used to writing Windows programs that accept mouse events, you may be surprised by the difference between the familiar mouse events and the touch interface. The MotionEvent provides the methods you would expect from something like WM_LBUTTONDOWN, such as getX() and getY(), but it also carries the preceding (historical) touch positions and the length of time the finger has been on the panel.

Once the event reaches the application, the application reacts to the touch. This is generally handled by widgets rather than by the application itself. Android's widgets range from simple items like buttons to complex interfaces like a date picker or a progress bar with a cancel button. Alternatively, the application can consume touches directly. A drawing application uses a mixture of both, taking direct touch input in the drawing area and using widgets for menus and buttons.

One difference between Windows touch processing and Android is gesture interpretation. Android provides a rich library of gesture-creation tools but doesn't provide any built-in gestures; each designer is free to create his or her own, including complex gestures like handwriting. This approach has enabled applications such as character-recognition contact search, but it means the same action may not do the same thing on two Android platforms. Windows instead provides a fixed set of well-understood gestures with OS-level support: GID_PAN, GID_ZOOM, GID_ROTATE, GID_PRESSANDTAP and GID_TWOFINGERTAP. These gestures always trigger the same behavior in every application, which lets users get up to speed quickly on new applications; a sketch of how an application receives them appears below. Each approach has its strengths.
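
For comparison, here is a minimal sketch of how a Windows application (Windows 7 or later) receives these built-in gestures through the WM_GESTURE message. The handler is illustrative and omits the per-gesture argument decoding a real application would do.

#include <windows.h>

/* Called from the application's window procedure when msg == WM_GESTURE. */
LRESULT handle_gesture(HWND hwnd, WPARAM wParam, LPARAM lParam)
{
    GESTUREINFO gi = { 0 };
    gi.cbSize = sizeof(gi);

    if (GetGestureInfo((HGESTUREINFO)lParam, &gi)) {
        switch (gi.dwID) {
        case GID_PAN:          /* gi.ptsLocation follows the pan           */ break;
        case GID_ZOOM:         /* gi.ullArguments carries the finger span  */ break;
        case GID_ROTATE:       /* gi.ullArguments encodes the rotation     */ break;
        case GID_PRESSANDTAP:  /* press and tap                            */ break;
        case GID_TWOFINGERTAP: /* two-finger tap                           */ break;
        }
        CloseGestureInfoHandle((HGESTUREINFO)lParam);
        return 0;
    }
    return DefWindowProc(hwnd, WM_GESTURE, wParam, lParam);
}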

The path from touch to gesture is technically challenging and involves the interaction of many pieces. Everything from material selection to manufacturing to electronics plays a role in touch sensing. Once the touch has been digitized, it still must be located, communicated to the host, and interpreted. Now that these challenges have been met, it’s up to software developers to build exciting applications on them. How will your next application use these new touch capabilities?
