Visual C++ Thread Synchronization Techniques

xiaoxiao · 2021-04-10

Abstract: Multithreaded synchronization is an important technique in software development. This paper gives a preliminary discussion of the principles and implementation of the main thread synchronization techniques. Keywords: VC 6.0; thread synchronization; critical section; event; mutex; semaphore

Contents: thread synchronization · critical sections · event kernel objects · semaphore kernel objects · mutex kernel objects · summary

When a program uses multiple threads, few of those threads can run fully independently for their whole lifetime. More often, some threads perform processing while other threads need to consume their results, and normally a result should only be read after the producing thread has finished computing it. If no measures are taken, other threads may access the result before the producing thread is done and read incorrect, half-written data. For example, several threads may access the same global variable: if all of them only read it, there is no problem; but if one thread writes the variable while others read it, the readers must be guaranteed to see the value only after the writer has finished modifying it. To ensure this, all other access to the variable must be blocked while the writer is updating it, and the restriction is lifted only after the assignment completes. Such protective measures, which let threads learn of each other's results only after the producing task has ended, are what thread synchronization is about.

Thread synchronization is a broad topic. Broadly, it falls into two categories: synchronization in user mode and synchronization through kernel objects. User-mode synchronization mainly means atomic (interlocked) access and critical sections; it is very fast and is suitable when threads contend only briefly. Kernel-object synchronization mainly uses events, waitable timers, semaphores, and mutexes. Because these mechanisms rely on kernel objects, a thread must switch from user mode to kernel mode, and this transition typically costs on the order of a thousand CPU cycles, so it is slower than user-mode synchronization, but it is far more widely applicable.

1

Critical sections

A critical section is a piece of code that accesses some shared resource exclusively: at any moment, only one thread is allowed inside it. If one thread has entered the critical section, every other thread that tries to enter it is suspended until the first thread leaves. Once the critical section is released, the waiting threads can again compete to enter it, which achieves atomic access to the shared resource.

A critical section protects a shared resource with a CRITICAL_SECTION structure, and the EnterCriticalSection() and LeaveCriticalSection() functions mark the entry to and exit from the protected region. The CRITICAL_SECTION object must be initialized with InitializeCriticalSection() before use, and every piece of code, in every thread, that accesses the shared resource must be placed under the protection of the same critical section. Otherwise the critical section cannot do its job and the shared resource can still be corrupted.

Figure 1 Maintaining thread synchronization with a critical section

The simple code below shows the role of a critical section in protecting a shared resource against multithreaded access. Two threads write to the global array g_cArray[10], and thread synchronization is maintained through the critical section object g_cs, which is initialized before the threads are started. To make the experiment more visible, each write to the shared resource g_cArray[10] is followed by a 1-millisecond Sleep(), giving other threads a chance to grab the CPU. Without the protection of the critical section, the shared data is corrupted (see the result shown in Figure 1(a)); with the critical section keeping the threads synchronized, the correct result is obtained (see the result shown in Figure 1(b)). The implementation code follows:

    // critical section object
    CRITICAL_SECTION g_cs;
    // shared resource
    char g_cArray[10];

    UINT ThreadProc10(LPVOID pParam)
    {
        // enter the critical section
        EnterCriticalSection(&g_cs);
        // write to the shared resource
        for (int i = 0; i < 10; i++)
        {
            g_cArray[i] = 'a';
            Sleep(1);
        }
        // leave the critical section
        LeaveCriticalSection(&g_cs);
        return 0;
    }

    UINT ThreadProc11(LPVOID pParam)
    {
        // enter the critical section
        EnterCriticalSection(&g_cs);
        // write to the shared resource
        for (int i = 0; i < 10; i++)
        {
            g_cArray[10 - i - 1] = 'b';
            Sleep(1);
        }
        // leave the critical section
        LeaveCriticalSection(&g_cs);
        return 0;
    }
    ......
    void CSample08View::OnCriticalSection()
    {
        // initialize the critical section
        InitializeCriticalSection(&g_cs);
        // start the threads
        AfxBeginThread(ThreadProc10, NULL);
        AfxBeginThread(ThreadProc11, NULL);
        // wait for the computation to finish
        Sleep(300);
        // report the result (the array is not null-terminated, so pass its length)
        CString sResult(g_cArray, 10);
        AfxMessageBox(sResult);
    }

A thread should generally not stay in a critical section for too long: as long as the thread inside has not left, every other thread trying to enter is suspended in a wait state, which hurts the program's performance. In particular, avoid putting operations that wait for user input or other external events inside a critical section; if a thread enters but cannot leave promptly, the other threads wait indefinitely. In other words, once EnterCriticalSection() has been executed to enter the critical section, the program must guarantee that the matching LeaveCriticalSection() is executed no matter what; structured exception handling can be added to ensure that the LeaveCriticalSection() statement always runs. Finally, although critical sections are very fast, they can only synchronize threads within one process, not threads in different processes.
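Since the Win32 listings above cannot run outside Windows, here is a minimal portable sketch of the same experiment in standard C++: std::mutex stands in for the CRITICAL_SECTION, and std::lock_guard guarantees the release even if an exception is thrown, the RAII analog of the structured-exception-handling advice above. All names here (g_cs, WriteForward, RunDemo) are hypothetical, not part of the original sample.

```cpp
#include <mutex>
#include <string>
#include <thread>

static std::mutex g_cs;          // analog of the CRITICAL_SECTION object
static char g_cArray[11] = {0};  // shared resource (+1 slot for the terminator)

void WriteForward()
{
    // the lock_guard enters the "critical section" and is guaranteed
    // to release it when the scope is left, even on an exception
    std::lock_guard<std::mutex> guard(g_cs);
    for (int i = 0; i < 10; i++)
        g_cArray[i] = 'a';
}

void WriteBackward()
{
    std::lock_guard<std::mutex> guard(g_cs);
    for (int i = 0; i < 10; i++)
        g_cArray[10 - i - 1] = 'b';
}

std::string RunDemo()
{
    std::thread t1(WriteForward);
    std::thread t2(WriteBackward);
    t1.join();
    t2.join();
    return std::string(g_cArray);
}
```

Because each writer holds the lock for its whole loop, the final array is all 'a' or all 'b' (whichever thread locked last wins), never an interleaving of the two.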

MFC wraps the critical section in the CCriticalSection class. Using it is very simple: the member functions Lock() and Unlock() mark the protected region. The code above can be rewritten with the CCriticalSection class as follows:

    // MFC critical section object
    CCriticalSection g_clsCriticalSection;
    // shared resource
    char g_cArray[10];

    UINT ThreadProc20(LPVOID pParam)
    {
        // enter the critical section
        g_clsCriticalSection.Lock();
        // write to the shared resource
        for (int i = 0; i < 10; i++)
        {
            g_cArray[i] = 'a';
            Sleep(1);
        }
        // leave the critical section
        g_clsCriticalSection.Unlock();
        return 0;
    }

    UINT ThreadProc21(LPVOID pParam)
    {
        // enter the critical section
        g_clsCriticalSection.Lock();
        // write to the shared resource
        for (int i = 0; i < 10; i++)
        {
            g_cArray[10 - i - 1] = 'b';
            Sleep(1);
        }
        // leave the critical section
        g_clsCriticalSection.Unlock();
        return 0;
    }
    ......
    void CSample08View::OnCriticalSectionMfc()
    {
        // start the threads
        AfxBeginThread(ThreadProc20, NULL);
        AfxBeginThread(ThreadProc21, NULL);
        // wait for the computation to finish
        Sleep(300);
        // report the result
        CString sResult(g_cArray, 10);
        AfxMessageBox(sResult);
    }

2

Event kernel objects

Earlier, event kernel objects were used for communication between threads; beyond that, events can also keep threads synchronized through their signaling mechanism. The critical-section code from the previous section can be rewritten with an event object as follows:

    // event handle
    HANDLE hEvent = NULL;
    // shared resource
    char g_cArray[10];
    ......
    UINT ThreadProc12(LPVOID pParam)
    {
        // wait for the event to be signaled
        WaitForSingleObject(hEvent, INFINITE);
        // write to the shared resource
        for (int i = 0; i < 10; i++)
        {
            g_cArray[i] = 'a';
            Sleep(1);
        }
        // signal the event when processing is done
        SetEvent(hEvent);
        return 0;
    }

    UINT ThreadProc13(LPVOID pParam)
    {
        // wait for the event to be signaled
        WaitForSingleObject(hEvent, INFINITE);
        // write to the shared resource
        for (int i = 0; i < 10; i++)
        {
            g_cArray[10 - i - 1] = 'b';
            Sleep(1);
        }
        // signal the event when processing is done
        SetEvent(hEvent);
        return 0;
    }
    ......
    void CSample08View::OnEvent()
    {
        // create an auto-reset event
        hEvent = CreateEvent(NULL, FALSE, FALSE, NULL);
        // signal the event
        SetEvent(hEvent);
        // start the threads
        AfxBeginThread(ThreadProc12, NULL);
        AfxBeginThread(ThreadProc13, NULL);
        // wait for the computation to finish
        Sleep(300);
        // report the result
        CString sResult(g_cArray, 10);
        AfxMessageBox(sResult);
    }

Before the threads are created, an auto-reset event kernel object hEvent is created, and each thread function begins by waiting on hEvent with WaitForSingleObject(); only when the event is signaled does WaitForSingleObject() return and the protected code execute. Because the event was created in auto-reset mode, it is reset immediately after releasing one waiting thread, which means that while the protected code in ThreadProc12() is executing, the event is already non-signaled. Even if ThreadProc13() preempts the CPU at that point, it cannot proceed, because its WaitForSingleObject() has no signaled hEvent to return on, so it cannot corrupt the protected shared resource. After ThreadProc12() finishes its processing, its SetEvent() call signals hEvent again, which allows ThreadProc13() to go on to process the shared resource g_cArray. The SetEvent() here can be viewed as a notification that a particular task has been completed.

Critical sections can only synchronize threads within the same process, whereas event kernel objects can also synchronize threads across processes, provided the other process can obtain access to the event object. Access can be obtained with the OpenEvent() function, whose prototype is:

    HANDLE OpenEvent(
        DWORD dwDesiredAccess, // access flag
        BOOL bInheritHandle,   // inheritance flag
        LPCTSTR lpName         // pointer to the event object's name
    );

If the event object was created with a name (a name must be specified at creation time for this to work), the function returns a handle to that event. For event objects created without a name, access can instead be obtained through kernel-object handle inheritance or by calling the DuplicateHandle() function. Once access has been obtained, synchronization proceeds the same way as between threads of a single process. If a thread needs to wait on several events, it uses WaitForMultipleObjects(). WaitForMultipleObjects() is similar to WaitForSingleObject(), but it monitors all the handles in a handle array; the monitored objects have equal priority, and no handle takes precedence over another. The prototype of WaitForMultipleObjects() is:

    DWORD WaitForMultipleObjects(
        DWORD nCount,            // number of handles to wait on
        CONST HANDLE* lpHandles, // address of the handle array
        BOOL fWaitAll,           // wait-for-all flag
        DWORD dwMilliseconds     // timeout interval
    );

The nCount parameter gives the number of kernel objects to wait on, and lpHandles points to the array holding them. fWaitAll selects one of two wait modes for those nCount kernel objects: if TRUE, the function returns only after all of the objects have been signaled; if FALSE, it returns as soon as any one of them is signaled. dwMilliseconds plays exactly the same role as in WaitForSingleObject(): if the timeout elapses, the function returns WAIT_TIMEOUT. If the return value lies between WAIT_OBJECT_0 and WAIT_OBJECT_0 + nCount - 1, then either all of the specified objects are signaled (when fWaitAll is TRUE), or subtracting WAIT_OBJECT_0 from the return value yields the index of the object that was signaled (when fWaitAll is FALSE). If the return value lies between WAIT_ABANDONED_0 and WAIT_ABANDONED_0 + nCount - 1, then either all of the objects are signaled but at least one of them is an abandoned mutex (when fWaitAll is TRUE), or subtracting WAIT_ABANDONED_0 yields the index of a mutex that was abandoned rather than released normally (when fWaitAll is FALSE). The code below mainly shows how WaitForMultipleObjects() is used: a worker thread's start and termination are controlled by waiting on two event kernel objects:
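The return-value arithmetic just described can be illustrated with a small portable sketch. The constants mirror the documented Win32 values (WAIT_OBJECT_0 = 0, WAIT_ABANDONED_0 = 0x80, WAIT_TIMEOUT = 0x102), while the function name and its out-parameter are hypothetical, for illustration only.

```cpp
#include <string>

const unsigned long kWaitObject0    = 0x00000000; // WAIT_OBJECT_0
const unsigned long kWaitAbandoned0 = 0x00000080; // WAIT_ABANDONED_0
const unsigned long kWaitTimeout    = 0x00000102; // WAIT_TIMEOUT

// decode the return value of a wait on nCount handles with fWaitAll == FALSE:
// subtracting the base constant yields the index of the handle involved
std::string DecodeWaitResult(unsigned long ret, unsigned long nCount,
                             unsigned long* index)
{
    if (ret == kWaitTimeout)
        return "timeout";
    if (ret < kWaitObject0 + nCount)
    {
        *index = ret - kWaitObject0;    // index of the signaled handle
        return "signaled";
    }
    if (ret >= kWaitAbandoned0 && ret < kWaitAbandoned0 + nCount)
    {
        *index = ret - kWaitAbandoned0; // index of the abandoned mutex
        return "abandoned";
    }
    return "failed";
}
```

For example, a return value of WAIT_OBJECT_0 + 1 from a wait on two handles decodes to "handle 1 was signaled".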

    // event handles
    HANDLE hEvents[2];

    UINT ThreadProc14(LPVOID pParam)
    {
        // wait for the start event
        DWORD dwRet1 = WaitForMultipleObjects(2, hEvents, FALSE, INFINITE);
        // if the start event arrives, the thread begins its task
        if (dwRet1 == WAIT_OBJECT_0)
        {
            AfxMessageBox("Thread starts working!");
            while (TRUE)
            {
                for (int i = 0; i < 10000; i++)
                    ;
                // poll the events during task processing
                DWORD dwRet2 = WaitForMultipleObjects(2, hEvents, FALSE, 0);
                // if the end event is signaled, terminate the task immediately
                if (dwRet2 == WAIT_OBJECT_0 + 1)
                    break;
            }
        }
        AfxMessageBox("Thread exits!");
        return 0;
    }
    ......
    void CSample08View::OnStartEvent()
    {
        // create the events
        for (int i = 0; i < 2; i++)
            hEvents[i] = CreateEvent(NULL, FALSE, FALSE, NULL);
        // start the thread
        AfxBeginThread(ThreadProc14, NULL);
        // signal event 0 (the start event)
        SetEvent(hEvents[0]);
    }

    void CSample08View::OnEndEvent()
    {
        // signal event 1 (the end event)
        SetEvent(hEvents[1]);
    }

For event handling, MFC also provides a CEvent class, which contains four member functions besides the constructor: PulseEvent(), ResetEvent(), SetEvent(), and Unlock(). They correspond in functionality to the API functions PulseEvent(), ResetEvent(), SetEvent(), and CloseHandle(), respectively. The constructor takes over the duty of the original CreateEvent() function, creating the event object; its prototype is:

    CEvent(
        BOOL bInitiallyOwn = FALSE,
        BOOL bManualReset = FALSE,
        LPCTSTR lpszName = NULL,
        LPSECURITY_ATTRIBUTES lpsaAttribute = NULL
    );

With the default arguments, the constructor creates an auto-reset, initially non-signaled event object with no name. The encapsulated CEvent class is more convenient to use; Figure 2 shows how the CEvent class synchronizes two threads A and B:

Figure 2 Synchronizing threads A and B with the CEvent class

Thread B blocks when it reaches the CEvent member function Lock(), while thread A can process the shared resource without interference from B. When A has finished, its member function SetEvent() signals the event to B, and B is released to operate on the shared resource that A has already prepared. As can be seen, synchronizing threads with the CEvent class is essentially the same as doing it through the API functions. The earlier API code can be rewritten with the CEvent class as:

    // MFC event object
    CEvent g_clsEvent;

    UINT ThreadProc22(LPVOID pParam)
    {
        // write to the shared resource
        for (int i = 0; i < 10; i++)
        {
            g_cArray[i] = 'a';
            Sleep(1);
        }
        // signal the event
        g_clsEvent.SetEvent();
        return 0;
    }

    UINT ThreadProc23(LPVOID pParam)
    {
        // wait for the event
        g_clsEvent.Lock();
        // write to the shared resource
        for (int i = 0; i < 10; i++)
        {
            g_cArray[10 - i - 1] = 'b';
            Sleep(1);
        }
        return 0;
    }
    ......
    void CSample08View::OnEventMfc()
    {
        // start the threads
        AfxBeginThread(ThreadProc22, NULL);
        AfxBeginThread(ThreadProc23, NULL);
        // wait for the computation to finish
        Sleep(300);
        // report the result
        CString sResult(g_cArray, 10);
        AfxMessageBox(sResult);
    }

3

Semaphore kernel objects

The way a semaphore kernel object synchronizes threads is different from the methods above: it allows multiple threads to access the same resource at the same time, but limits the maximum number of threads that may access the resource simultaneously. When creating a semaphore with CreateSemaphore(), both the allowed maximum resource count and the currently available resource count must be specified. In general, the currently available count is set equal to the maximum count; each time a thread gains access to the shared resource, the available count is decremented by 1, and as long as the available count is greater than 0, the semaphore can be signaled. When the available count drops to 0, the number of threads currently using the resource has reached the allowed maximum, no further threads are admitted, and the semaphore stays non-signaled. After finishing with the shared resource, a thread should increment the available count on its way out by calling the ReleaseSemaphore() function. At no time may the currently available count exceed the maximum count.

Figure 3 Controlling resource access with a semaphore object

Figure 3 illustrates how a semaphore object controls access to a resource. In the figure, the maximum resource count and the currently available resource count are indicated by the unfilled arrows, and each thread that enters the resource (shown as a black arrow) reduces the currently available count by 1. Initially, as in Figure 3(a), the maximum count and the available count are both 4; Figure 3(b) shows the state after three threads have entered the shared resource. When the number of entering threads reaches 4, the maximum is reached, the available count has dropped to 0, and no other thread can access the shared resource. As threads currently occupying the resource finish and exit, capacity is released: in Figure 3(d), two threads have left, the available count is back to 2, and 2 more threads can be admitted. As can be seen, the semaphore controls thread access to the resource by counting, and the semaphore was indeed historically referred to as a Dijkstra counter.
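The counting behavior of Figure 3 can be sketched in standard C++. The class name and members below are hypothetical (not a Win32 API): Acquire() corresponds to a successful wait, Release() to ReleaseSemaphore(), and the available count is never allowed to exceed the maximum.

```cpp
#include <condition_variable>
#include <mutex>

// portable counting-semaphore sketch mirroring the
// CreateSemaphore()/WaitForSingleObject()/ReleaseSemaphore() pattern
class CountingSemaphore
{
public:
    CountingSemaphore(long initial, long maximum)
        : m_count(initial), m_max(maximum) {}

    void Acquire()                 // analog of WaitForSingleObject()
    {
        std::unique_lock<std::mutex> lock(m_mutex);
        m_cv.wait(lock, [this] { return m_count > 0; });
        --m_count;                 // one less available slot
    }
    bool Release(long n = 1)       // analog of ReleaseSemaphore()
    {
        std::lock_guard<std::mutex> lock(m_mutex);
        if (m_count + n > m_max)
            return false;          // would exceed the maximum count
        m_count += n;
        m_cv.notify_all();
        return true;
    }
    long Count()
    {
        std::lock_guard<std::mutex> lock(m_mutex);
        return m_count;
    }
private:
    std::mutex m_mutex;
    std::condition_variable m_cv;
    long m_count;
    long m_max;
};
```

With initial and maximum counts of 2, two Acquire() calls drain the count to 0, and a Release() that would push the count above the maximum fails, matching the rule stated above.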

Thread synchronization with a semaphore kernel object mainly uses CreateSemaphore(), OpenSemaphore(), ReleaseSemaphore(), WaitForSingleObject(), and WaitForMultipleObjects(). CreateSemaphore() creates the semaphore kernel object; its prototype is:

    HANDLE CreateSemaphore(
        LPSECURITY_ATTRIBUTES lpSemaphoreAttributes, // pointer to security attributes
        LONG lInitialCount,  // initial count
        LONG lMaximumCount,  // maximum count
        LPCTSTR lpName       // pointer to the object name
    );

The lMaximumCount parameter is a signed 32-bit value defining the maximum resource count, so it must be greater than 0 and can be at most 2147483647. The lpName parameter can give the semaphore a name; because the semaphore is a kernel object, a named semaphore can then be obtained from other processes. The OpenSemaphore() function opens a semaphore created in another process by its name; its prototype is as follows:

    HANDLE OpenSemaphore(
        DWORD dwDesiredAccess, // access flag
        BOOL bInheritHandle,   // inheritance flag
        LPCTSTR lpName         // semaphore name
    );

When a thread finishes with the shared resource and leaves, it must increment the currently available resource count through ReleaseSemaphore(). Otherwise, the number of threads actually working on the shared resource will not have reached the limit, yet other threads will still be shut out because the available count is 0. The prototype of ReleaseSemaphore() is:

    BOOL ReleaseSemaphore(
        HANDLE hSemaphore,     // semaphore handle
        LONG lReleaseCount,    // amount to add to the count
        LPLONG lpPreviousCount // receives the previous count
    );

This function adds the value of lReleaseCount to the semaphore's current resource count. lReleaseCount is usually set to 1, but other values can be used if needed. WaitForSingleObject() and WaitForMultipleObjects() are mainly used at the entry of a thread function that attempts to enter the shared resource, to determine whether the semaphore's currently available count permits this thread to enter; only when the available count is greater than 0 is the monitored semaphore kernel object signaled.

These counting properties make semaphores well suited to synchronizing threads in socket programs. For example, suppose an HTTP server must limit how many users can access the same page at the same time. The server starts one thread per page request, and the page is the shared resource to be protected. Synchronizing the threads through a semaphore guarantees that no matter how many users request a page, only at most the permitted number of threads can access it concurrently, while further attempts are suspended and can proceed only after some user leaves the page. The sample code below shows a similar process:

    // semaphore object handle
    HANDLE hSemaphore;

    UINT ThreadProc15(LPVOID pParam)
    {
        // try to pass the semaphore gate
        WaitForSingleObject(hSemaphore, INFINITE);
        // thread task processing
        AfxMessageBox("Thread one is executing!");
        // release the semaphore count
        ReleaseSemaphore(hSemaphore, 1, NULL);
        return 0;
    }

    UINT ThreadProc16(LPVOID pParam)
    {
        // try to pass the semaphore gate
        WaitForSingleObject(hSemaphore, INFINITE);
        // thread task processing
        AfxMessageBox("Thread two is executing!");
        // release the semaphore count
        ReleaseSemaphore(hSemaphore, 1, NULL);
        return 0;
    }

    UINT ThreadProc17(LPVOID pParam)
    {
        // try to pass the semaphore gate
        WaitForSingleObject(hSemaphore, INFINITE);
        // thread task processing
        AfxMessageBox("Thread three is executing!");
        // release the semaphore count
        ReleaseSemaphore(hSemaphore, 1, NULL);
        return 0;
    }
    ......
    void CSample08View::OnSemaphore()
    {
        // create the semaphore object
        hSemaphore = CreateSemaphore(NULL, 2, 2, NULL);
        // start the threads
        AfxBeginThread(ThreadProc15, NULL);
        AfxBeginThread(ThreadProc16, NULL);
        AfxBeginThread(ThreadProc17, NULL);
    }

Figure 4 Two threads enter at first

Figure 5 Thread three enters only after thread two has exited

Before starting the threads, the code above creates a semaphore object hSemaphore whose initial count and maximum resource count are both 2; that is, at most 2 threads are allowed into the shared resource protected by hSemaphore. Three threads are then started, each trying to access this resource. When the first two threads try, hSemaphore's available count is 2 and then 1, so hSemaphore is signaled and the WaitForSingleObject() at the thread entry returns immediately. Once the first two threads are inside the protected area, hSemaphore's count has dropped to 0, hSemaphore is no longer signaled, and WaitForSingleObject() suspends the third thread; it can enter only after one of the threads already inside the protected area exits. Figures 4 and 5 show the run results, and as the experiment demonstrates, the semaphore consistently keeps the number of threads inside at no more than 2 at any one time.
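The same "at most two of three threads inside" experiment can be reproduced portably (all names below are hypothetical): a condition-variable-based count plays the role of hSemaphore, and an atomic counter records the peak number of threads simultaneously inside the protected region.

```cpp
#include <atomic>
#include <chrono>
#include <condition_variable>
#include <mutex>
#include <thread>
#include <vector>

namespace demo {
    std::mutex g_m;
    std::condition_variable g_cv;
    long g_count = 2;                        // currently available resource count
    std::atomic<int> g_inside{0}, g_peak{0};

    void Worker()
    {
        {   // acquire one unit of the semaphore
            std::unique_lock<std::mutex> lock(g_m);
            g_cv.wait(lock, [] { return g_count > 0; });
            --g_count;
        }
        int now = ++g_inside;                // entered the protected region
        int peak = g_peak.load();            // record the peak concurrency
        while (now > peak && !g_peak.compare_exchange_weak(peak, now)) {}
        std::this_thread::sleep_for(std::chrono::milliseconds(20));
        --g_inside;
        {   // release the unit
            std::lock_guard<std::mutex> lock(g_m);
            ++g_count;
        }
        g_cv.notify_one();
    }

    int RunExperiment()
    {
        std::vector<std::thread> threads;
        for (int i = 0; i < 3; i++)
            threads.emplace_back(Worker);
        for (auto& t : threads)
            t.join();
        return g_peak.load();                // peak concurrency observed
    }
}
```

With the count capped at 2, the observed peak can never exceed 2 even though three workers run, mirroring the result in Figures 4 and 5.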

In MFC, the semaphore is represented by the CSemaphore class. This class has only a constructor, which creates a semaphore object and initializes its initial resource count, maximum resource count, object name, and security attributes. Its prototype is as follows:

    CSemaphore(
        LONG lInitialCount = 1,
        LONG lMaxCount = 1,
        LPCTSTR pstrName = NULL,
        LPSECURITY_ATTRIBUTES lpsaAttributes = NULL
    );

After a CSemaphore object is constructed, any thread accessing the protected shared resource must use the Lock() and Unlock() member functions, which CSemaphore inherits from its parent class CSyncObject, to acquire and release the semaphore. As with the other MFC synchronization classes, the earlier code can be rewritten in terms of CSemaphore; the two semaphore approaches are identical both in principle and in result. The rewritten MFC code follows:

    // MFC semaphore object
    CSemaphore g_clsSemaphore(2, 2);

    UINT ThreadProc24(LPVOID pParam)
    {
        // try to pass the semaphore gate
        g_clsSemaphore.Lock();
        // thread task processing
        AfxMessageBox("Thread one is executing!");
        // release the semaphore count
        g_clsSemaphore.Unlock();
        return 0;
    }

    UINT ThreadProc25(LPVOID pParam)
    {
        // try to pass the semaphore gate
        g_clsSemaphore.Lock();
        // thread task processing
        AfxMessageBox("Thread two is executing!");
        // release the semaphore count
        g_clsSemaphore.Unlock();
        return 0;
    }

    UINT ThreadProc26(LPVOID pParam)
    {
        // try to pass the semaphore gate
        g_clsSemaphore.Lock();
        // thread task processing
        AfxMessageBox("Thread three is executing!");
        // release the semaphore count
        g_clsSemaphore.Unlock();
        return 0;
    }
    ......
    void CSample08View::OnSemaphoreMfc()
    {
        // start the threads
        AfxBeginThread(ThreadProc24, NULL);
        AfxBeginThread(ThreadProc25, NULL);
        AfxBeginThread(ThreadProc26, NULL);
    }

4

Mutex kernel objects

The mutex is a very widely used kernel object. It guarantees that multiple threads access the same shared resource mutually exclusively. Somewhat like a critical section, only the thread that currently owns the mutex has access to the resource; and because there is only one mutex object, the shared resource can never be accessed by several threads at once. The thread occupying the resource should hand the mutex over after its task is done, so that other threads can access the resource in turn. Unlike the other kernel objects, the mutex has special code in the operating system and is managed by it, which also allows the operating system to perform on it some unconventional operations that other kernel objects cannot. For ease of understanding, Figure 6 gives the working model of the mutex kernel object:

Figure 6 Protecting a shared resource with a mutex kernel object

The arrows in Figure 6(a) are threads that want to access the resource (the rectangular box), but only the second thread owns the mutex (the black dot) and may enter the shared resource; the other threads are turned away (as shown in Figure 6(b)). When that thread has finished with the shared resource, it releases the mutex it owns (in Figure 6(c) it is preparing to leave the region), and every other thread that tries to access the resource then gets a chance to obtain the mutex.
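The exclusivity shown in Figure 6 can be sketched with standard C++ (the function name is hypothetical): while one thread holds a std::mutex, another thread's non-blocking try_lock() fails, so a second thread can never be inside the protected resource at the same time.

```cpp
#include <mutex>
#include <thread>

// spawn a second thread and report whether it could enter the
// resource protected by m without blocking
bool OtherThreadCanEnter(std::mutex& m)
{
    bool entered = false;
    std::thread t([&] {
        if (m.try_lock())   // try to take ownership without waiting
        {
            entered = true;
            m.unlock();     // hand the mutex back, as Figure 6(c) shows
        }
    });
    t.join();
    return entered;
}
```

While the calling thread holds the mutex, the second thread is rejected, exactly the situation of Figure 6(b); once the owner releases it, entry succeeds.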

The functions used to keep threads synchronized through a mutex mainly include CreateMutex(), OpenMutex(), ReleaseMutex(), WaitForSingleObject(), and WaitForMultipleObjects(). Before a mutex can be used, it must be created or opened with CreateMutex() or OpenMutex(). The prototype of CreateMutex() is:

    HANDLE CreateMutex(
        LPSECURITY_ATTRIBUTES lpMutexAttributes, // pointer to security attributes
        BOOL bInitialOwner, // initial-owner flag
        LPCTSTR lpName      // mutex object name
    );

The bInitialOwner parameter controls the initial state of the mutex. It is usually set to FALSE, meaning the mutex is not owned by any thread when created. If an object name is specified at creation time, other threads, or other processes, can obtain a handle to this mutex through the OpenMutex() function. The prototype of OpenMutex() is:

    HANDLE OpenMutex(
        DWORD dwDesiredAccess, // access flag
        BOOL bInheritHandle,   // inheritance flag
        LPCTSTR lpName         // mutex object name
    );

When the thread that currently has access to the resource no longer needs it, it must release the mutex it owns through the ReleaseMutex() function, whose prototype is:

    BOOL ReleaseMutex(HANDLE hMutex);

Its only parameter, hMutex, is the handle of the mutex to release. The waiting functions WaitForSingleObject() and WaitForMultipleObjects() play essentially the same role with mutexes as with the other kernel objects: they wait for the mutex kernel object to be signaled. One point deserves particular attention, however: when a mutex signal causes the waiting function to return, the return value is not always the usual WAIT_OBJECT_0 (for WaitForSingleObject()) or a value between WAIT_OBJECT_0 and WAIT_OBJECT_0 + nCount - 1 (for WaitForMultipleObjects()); it may instead be WAIT_ABANDONED_0 (for WaitForSingleObject()) or a value between WAIT_ABANDONED_0 and WAIT_ABANDONED_0 + nCount - 1 (for WaitForMultipleObjects()). Such a return value indicates that the thread was waiting on a mutex owned by another thread, and that thread terminated before it finished with the shared resource, abandoning the mutex. In addition, scheduling differs from the other wait methods: ordinarily a thread waiting for a kernel object to be signaled is suspended and loses its schedulability, whereas the operating system can keep a thread that waits on a mutex schedulable; this is another of the unconventional operations that only the mutex supports.

When writing programs, the mutex is mostly used to protect memory blocks that multiple threads access, ensuring that any thread has reliable exclusive access to the block. The sample code below uses the mutex kernel object hMutex to give threads exclusive access to the shared array g_cArray[]. The implementation code follows:

    // mutex object handle
    HANDLE hMutex = NULL;
    char g_cArray[10];

    UINT ThreadProc18(LPVOID pParam)
    {
        // wait for the mutex to be signaled
        WaitForSingleObject(hMutex, INFINITE);
        // write to the shared resource
        for (int i = 0; i < 10; i++)
        {
            g_cArray[i] = 'a';
            Sleep(1);
        }
        // release the mutex
        ReleaseMutex(hMutex);
        return 0;
    }

    UINT ThreadProc19(LPVOID pParam)
    {
        // wait for the mutex to be signaled
        WaitForSingleObject(hMutex, INFINITE);
        // write to the shared resource
        for (int i = 0; i < 10; i++)
        {
            g_cArray[10 - i - 1] = 'b';
            Sleep(1);
        }
        // release the mutex
        ReleaseMutex(hMutex);
        return 0;
    }
    ......
    void CSample08View::OnMutex()
    {
        // create the mutex object
        hMutex = CreateMutex(NULL, FALSE, NULL);
        // start the threads
        AfxBeginThread(ThreadProc18, NULL);
        AfxBeginThread(ThreadProc19, NULL);
        // wait for the computation to finish
        Sleep(300);
        // report the result
        CString sResult(g_cArray, 10);
        AfxMessageBox(sResult);
    }

The mutex is represented in MFC by the CMutex class. Using CMutex is very simple: the name of the mutex object to open can be specified while constructing the CMutex object, after which the mutex can be accessed. The CMutex class likewise contains only a constructor as its own member function; acquiring and releasing the mutex-protected resource is done by calling the Lock() and Unlock() functions inherited from the parent class CSyncObject. The prototype of the CMutex constructor is:

    CMutex(
        BOOL bInitiallyOwn = FALSE,
        LPCTSTR lpszName = NULL,
        LPSECURITY_ATTRIBUTES lpsaAttribute = NULL
    );

The usage and underlying principle of this class are entirely similar to the API approach, only simpler. Below is the listing after rewriting with the CMutex class:

    // MFC mutex object
    CMutex g_clsMutex(FALSE, NULL);

    UINT ThreadProc27(LPVOID pParam)
    {
        // wait for the mutex
        g_clsMutex.Lock();
        // write to the shared resource
        for (int i = 0; i < 10; i++)
        {
            g_cArray[i] = 'a';
            Sleep(1);
        }
        // release the mutex
        g_clsMutex.Unlock();
        return 0;
    }

    UINT ThreadProc28(LPVOID pParam)
    {
        // wait for the mutex
        g_clsMutex.Lock();
        // write to the shared resource
        for (int i = 0; i < 10; i++)
        {
            g_cArray[10 - i - 1] = 'b';
            Sleep(1);
        }
        // release the mutex
        g_clsMutex.Unlock();
        return 0;
    }
    ......
    void CSample08View::OnMutexMfc()
    {
        // start the threads
        AfxBeginThread(ThreadProc27, NULL);
        AfxBeginThread(ThreadProc28, NULL);
        // wait for the computation to finish
        Sleep(300);
        // report the result
        CString sResult(g_cArray, 10);
        AfxMessageBox(sResult);
    }

Threads make program processing more flexible, but that flexibility also brings various uncertainties, especially when multiple threads access the same common variable. Although code that does without thread synchronization is logically simpler, synchronization measures must be taken in the appropriate places to guarantee that the program runs correctly and reliably.

Reprint notice: please cite the original source: https://www.9cbs.com/read-133497.html
