Deadlock Detection Views of Distributed Database

B.M. Monjurul Alom, Frans Henskens, Michael Hannaford
School of Electrical Engineering and Computer Science, University of Newcastle, Callaghan, NSW 2308, Australia

Abstract

Deadlock detection is very difficult in a distributed database system because no controller has complete and current information about the system and its data dependencies. The deadlock problem is intrinsic to a distributed database system which employs locking as its concurrency control algorithm. This paper attempts a comprehensive study of deadlock detection in distributed database systems. Afterwards, a deadlock detection algorithm is presented. The algorithm is based on creating a Linear Transaction Structure (LTS) and a Distributed Transaction Structure (DTS), finding local and global cycles, deciding the priority Id of each transaction, and local-global abortion. The proposed algorithm does not detect any false deadlock or exclude any really existing deadlock. In this technique, global deadlock is not dependent on local deadlock.

Key Words: Deadlock Cycle, Priority_Id, Transaction Queue (TQ), TWFG, Transaction Manager (TM)

1. Introduction

During the last decade computing systems have undergone substantial development, which has greatly impacted on distributed database systems. While commercial systems are gradually maturing, new challenges are imposed by the world-wide interconnection of computer systems [1]. This creates an ever growing need for large scale enterprise-wide distributed solutions. In the future, distributed database systems will have to support hundreds or even thousands of sites and millions of clients and, therefore, will face tremendous scalability challenges with regard to performance, availability and administration [1]. Deadlocks can arise in any database system that permits concurrent execution of transactions using locking protocols, which is the case in most of today's distributed database systems.

In modern computer systems, several transactions may compete for a finite number of resources [2]. Upon requesting a resource, a transaction enters a wait state if the request is not granted due to non-availability of the resource. A situation may arise wherein waiting transactions never get a chance to change their states, because the requested resources are held by other, similarly waiting transactions. This situation is called deadlock.

In a distributed database system, data access by concurrent transactions is synchronized in order to preserve database consistency [3]. This synchronization can be achieved using concurrency control algorithms such as two-phase locking (2PL), timestamp ordering [4], optimistic concurrency control [5], or a variation of these basic algorithms. In practice, the most commonly used concurrency control algorithm is 2PL. However, if locking is used, a group of transactions may become involved in a deadlock [3]. Consequently, some form of deadlock resolution must accompany 2PL.

Many algorithms for deadlock detection have been implemented in centralized database systems. All of them are based on finding cycles in a transaction-wait-for graph (TWFG), in which the nodes represent transactions and the directed edges indicate for which transactions a given transaction is waiting [6]. When cycles are detected in the TWFG, they are broken by choosing a transaction that is involved in the cycle and causing that transaction to fail (usually allowing it to restart with its original input). This operation becomes more complex when the TWFG is distributed among multiple sites of a distributed database.
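As an illustration of this classical, centralized style of detection, the sketch below runs a depth-first search for a cycle over a TWFG held as a dictionary that maps each transaction to the set of transactions it waits for. The representation and the transaction names are assumptions made for the example, not notation from the paper.

```python
# Sketch of centralized deadlock detection on a transaction-wait-for graph (TWFG).
# A cycle in the graph corresponds to a deadlock.

def find_cycle(wait_for):
    """Return one cycle (a list of transactions) if the TWFG contains one, else None."""
    WHITE, GREY, BLACK = 0, 1, 2      # unvisited / on the current DFS path / finished
    colour = {}
    stack = []

    def dfs(t):
        colour[t] = GREY
        stack.append(t)
        for u in wait_for.get(t, ()):
            state = colour.get(u, WHITE)
            if state == GREY:         # back edge: u is on the current path, cycle found
                return stack[stack.index(u):] + [u]
            if state == WHITE:
                cycle = dfs(u)
                if cycle:
                    return cycle
        stack.pop()
        colour[t] = BLACK
        return None

    for t in list(wait_for):
        if colour.get(t, WHITE) == WHITE:
            cycle = dfs(t)
            if cycle:
                return cycle
    return None


if __name__ == "__main__":
    # T1 waits for T2, T2 waits for T3, T3 waits for T1: a deadlock cycle.
    twfg = {"T1": {"T2"}, "T2": {"T3"}, "T3": {"T1"}}
    print(find_cycle(twfg))           # e.g. ['T1', 'T2', 'T3', 'T1']
```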

In a distributed database system, although a transaction may perform all of its actions at the site in which it originates, it may also perform actions (or actions may be performed on its behalf) at sites other than the original site. If this happens, an agent [7] is created at the remote site to represent the transaction at that site. This agent becomes part of the original transaction for concurrency control and recovery purposes.

Many algorithms have been proposed to detect deadlocks in distributed database systems [6], [8-11]. Some methods are based on transmitting probes between sites. Probes are special messages used to detect deadlocks; they follow the edges of the wait-for graph without constructing a separate representation of the graph [8], [9], [11]. The advantage of this approach is that probe algorithms are more efficient than wait-for-graph algorithms. The disadvantage of the probe approach is that after a deadlock is detected, the constituents of the cycle remain to be discovered.

In this paper we describe an algorithm that is based on creating a Linear Transaction Structure (LTS) to find local cycles for each site of a distributed database system (DDBS). To find global deadlock cycles, a Distributed Transaction Structure (DTS) is used. In each site, each transaction has a unique priority id assigned by the transaction manager (TM); the priority id is used to find the youngest transaction which caused a deadlock cycle. Transaction Queues (TQ) are used to store the priority ids of the transactions which form cycles in the LTS and DTS. The proposed technique is efficient as it does not detect any false deadlock.

The remainder of this paper is organized as follows: existing algorithms are described in section 2. The distributed database system model and a formal model of transaction processing are presented in section 3. The proposed algorithm is described in section 4. Section 5 presents an explanation of the algorithm with an example. The paper concludes with a discussion and final remarks in section 6.

2. Related Work

The distributed deadlock detection algorithms that have been proposed fall into two categories. Algorithms in the first category pass information about transaction requests to maintain a global wait-for-graph. In the algorithms in the second category, simpler messages are sent among transactions; no global wait-for-graph is explicitly constructed. However, a cycle in the graph will ultimately cause messages to return to the initiator of the deadlock detection message, signaling the existence of deadlock.

2.1. Chandy & Mishra Algorithm [9]

This algorithm uses transaction-wait-for graphs (TWFG) to represent the status of transactions at the local sites and uses probes to detect global deadlocks. The procedure by which a transaction Ti determines whether it is deadlocked is called a probe computation. A probe is issued when a transaction begins to wait on another transaction, and it is propagated from one site to another based on the status of the transaction that receives it. The probes are meant only for deadlock detection and are distinct from requests and replies. A transaction sends at most one probe in any probe computation. If the initiator of the probe computation gets back its probe, then it is involved in a deadlock. This scheme does not suffer from false deadlock detection even if the transactions do not obey the two-phase locking protocol.
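The sketch below condenses this edge-chasing idea into a single process for illustration: a probe (initiator, sender, receiver) is forwarded along wait-for edges, each transaction forwards at most one probe per computation, and deadlock is declared when the probe returns to its initiator. In the actual algorithm the probes travel as messages between sites rather than through an in-memory queue.

```python
# Illustrative edge-chasing probe computation: a probe (initiator, sender, receiver)
# is forwarded along wait-for edges; deadlock is declared when it returns to its initiator.
from collections import deque

def probe_computation(initiator, wait_for):
    """Simulate a probe computation started by `initiator` over the wait-for relation."""
    queue = deque((initiator, initiator, blocked)
                  for blocked in wait_for.get(initiator, ()))
    forwarded = set()                       # each transaction forwards at most one probe
    while queue:
        init, sender, receiver = queue.popleft()
        if receiver == init:
            return True                     # the probe came back: the initiator is deadlocked
        if receiver in forwarded:
            continue
        forwarded.add(receiver)
        for nxt in wait_for.get(receiver, ()):
            queue.append((init, receiver, nxt))
    return False


if __name__ == "__main__":
    wait_for = {"T1": {"T2"}, "T2": {"T3"}, "T3": {"T1"}}
    print(probe_computation("T1", wait_for))   # True: T1 is part of a cycle
```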

2.2. Sinha's Scheme [11]

This algorithm, an extension of Chandy's scheme [9], is based on priorities of transactions. Using priorities, the number of messages required for deadlock detection is reduced considerably. An advantage of the scheme is that the number of messages in the best and worst cases can be easily determined.

The authors' model consists of transactions and data managers that are responsible for granting and releasing locks. A transaction's request for a lock on a data item is sent to the data manager for that item. If the request cannot be granted, the data manager initiates a deadlock computation by sending a probe to the transaction that holds a lock on the data item, provided the priority of the holder is greater than that of the requestor. The transaction inserts this probe in a probe-q that it maintains. The probe is then sent to the data manager of the data item it is waiting for. At this stage of the deadlock computation, priorities of transactions are used to decide whether or not to propagate: the probe is propagated only if the priority of the holder of the data item the manager controls is greater than that of the initiator. When a transaction begins to wait for a lock, all the probes in its queue are propagated. When a data manager gets back the probe it initiated, deadlock is detected. Since the probe contains the priority of the youngest transaction in the cycle, the youngest transaction is aborted.
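The following is a deliberately condensed sketch of the two ideas that matter here, namely that probes carry priorities and that the youngest transaction on the cycle is chosen as the victim. It assumes each blocked transaction waits for exactly one other transaction and walks the wait-for relation locally, rather than modelling the data managers and probe queues of the actual scheme.

```python
# Condensed sketch of priority-carrying probes (assumed data layout):
# here a larger priority value is taken to mean a younger transaction, and the
# probe remembers the youngest transaction seen so it can be chosen as the victim.

def detect_with_priorities(waits_for, priority, start):
    """Follow wait-for edges from `start`; return the victim (youngest transaction)
    if the walk returns to `start`, i.e. `start` is on a deadlock cycle."""
    youngest = start
    current = waits_for.get(start)
    visited = {start}
    while current is not None:
        if priority[current] > priority[youngest]:   # larger value = younger
            youngest = current
        if current == start:
            return youngest                          # deadlock: abort the youngest
        if current in visited:
            return None                              # entered a cycle not involving `start`
        visited.add(current)
        current = waits_for.get(current)
    return None


if __name__ == "__main__":
    waits_for = {"T1": "T2", "T2": "T3", "T3": "T1"}
    priority = {"T1": 1, "T2": 2, "T3": 3}           # T3 is the youngest
    print(detect_with_priorities(waits_for, priority, "T1"))   # 'T3'
```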

2.3. Obermack's Algorithm [6]

This algorithm builds and analyzes a directed TWFG at each site and uses a distinguished node at each site. This node is called "external" and is used to represent the portion of the TWFG that is external (unknown) to the site. The algorithm does not work correctly; it detects false deadlocks, because the wait-for graphs constructed do not represent a snapshot of the global TWFG at any instant. The detection algorithm at each site performs the following steps (sketched in code after the list):

1. Build the TWFG.
2. Obtain the information received as strings from other sites and add it to the TWFG.
3. Create wait-for edges from "external" to each node representing an agent of a transaction that is expected to send on a communication link.
4. Create wait-for edges to "external" from each node representing an agent of a transaction that is waiting to receive from a communication link.
5. Analyze the TWFG, listing all elementary cycles. Select a victim to break each cycle that does not contain the node "external". As each victim is chosen, remove all cycles that include the victim.
6. For each cycle Ex -> T1 -> T2 -> ... -> TX -> Ex containing the node "external", send the string Ex, T1, T2, ..., TX to the site from which TX is waiting to receive, if the transaction id of T1 is greater than that of TX.
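The sketch below illustrates only the treatment of cycles through the "external" node, under assumed representations (a dictionary of wait-for edges and a table of transaction ids): it enumerates the cycles Ex -> T1 -> ... -> TX -> Ex and applies the forwarding test id(T1) > id(TX). Purely local cycles, which would be resolved on the spot by choosing a victim, are not shown.

```python
# Illustrative per-site analysis step: the local wait-for edges include a
# distinguished node "EX" standing for the unknown remote portion of the graph.
# Cycles through "EX" are only potential global deadlocks; the corresponding
# string is forwarded only when id(T1) > id(TX), which keeps two sites from
# both forwarding the same potential cycle.

def cycles_through_ex(edges):
    """Enumerate simple cycles EX -> T1 -> ... -> TX -> EX in the local TWFG."""
    found = []

    def dfs(node, path):
        for nxt in edges.get(node, ()):
            if nxt == "EX" and path:
                found.append(path[:])            # closed a cycle back to EX
            elif nxt != "EX" and nxt not in path:
                dfs(nxt, path + [nxt])

    dfs("EX", [])
    return found


def strings_to_forward(edges, txn_id):
    """Apply the forwarding test to each cycle through "EX"."""
    out = []
    for path in cycles_through_ex(edges):        # path = [T1, ..., TX]
        if txn_id[path[0]] > txn_id[path[-1]]:
            out.append(["EX"] + path)            # string to send to the site TX waits to receive from
    return out


if __name__ == "__main__":
    # T1 waits on T2 locally; T2 waits on a remote agent (edge to EX);
    # a remote transaction is expected to send to T1 (edge from EX).
    edges = {"EX": {"T1"}, "T1": {"T2"}, "T2": {"EX"}}
    ids = {"T1": 7, "T2": 3}
    print(strings_to_forward(edges, ids))        # [['EX', 'T1', 'T2']]
```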

2.4. Menasce's Scheme [10]

This algorithm was the first to use a condensed transaction-wait-for graph (TWFG) in which the vertices represent transactions and the edges indicate dependencies between transactions. The algorithm can fail to detect some deadlocks and may discover false deadlocks. It is described by the following two rules (illustrated in the sketch below):

Rule 1. Event: transaction T requests a resource rd at site Sk, and rd is currently held by transactions T1, T2, ..., Tn. Action: an edge is added from the node denoting T to each of the transactions T1, T2, ..., Tn. If this action causes a cycle in TWF(k), then a deadlock exists.

Rule 2. Event: a blocking pair (T, T') is received at site Sk. Action: an edge is added from T to T' in TWF(k). If a cycle results, then a deadlock is detected.
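A minimal sketch of the two rules, assuming TWF(k) is held as a dictionary of edge sets: each rule adds the prescribed edges and then tests whether the new edges close a cycle.

```python
# Rule 1 adds edges when a local request blocks; Rule 2 adds an edge when a
# blocking pair arrives from another site. Both then test for a cycle.

def has_cycle(twf, start):
    """Reachability test: can we get from `start` back to `start`?"""
    seen, stack = set(), list(twf.get(start, ()))
    while stack:
        node = stack.pop()
        if node == start:
            return True
        if node not in seen:
            seen.add(node)
            stack.extend(twf.get(node, ()))
    return False


def rule1(twf, requester, holders):
    """Rule 1: `requester` asked for a resource held by `holders` at this site."""
    twf.setdefault(requester, set()).update(holders)
    return has_cycle(twf, requester)             # True means deadlock


def rule2(twf, blocking_pair):
    """Rule 2: a blocking pair (T, T') received from another site."""
    t, t_prime = blocking_pair
    twf.setdefault(t, set()).add(t_prime)
    return has_cycle(twf, t)


if __name__ == "__main__":
    twf = {}
    print(rule1(twf, "T1", {"T2"}))              # False: no cycle yet
    print(rule2(twf, ("T2", "T1")))              # True: the edges close T1 -> T2 -> T1
```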

2.5. Ho's Algorithm [12]

In this approach [12], the transaction table at each site maintains information about the resources held and waited on by local transactions. The resource table at each site maintains information about the transactions holding and waiting for local resources. Periodically, a site is chosen as a central controller responsible for performing deadlock detection. The drawback of this scheme is that it requires 4n messages, where n is the number of sites in the system.

2.6. Kawazu's Algorithm [13]

The algorithm is based on two phases. In the first phase local deadlocks are detected, and in the second phase global deadlocks are detected in the absence of local deadlocks. This scheme suffers from phantom deadlocks, because the local wait-for graphs are not collected at the same time, due to communication delays. Also, when a transaction simultaneously waits for more than one resource, some global deadlocks may go undetected, since global deadlock detection is initiated only if no local deadlock is detected. Bracha's [14], Mitchell's [15] and Krivokapic's [1] algorithms have also been described for detecting deadlock in distributed databases.

3. Distributed Database System Model and Formal Model of Transaction Processing

A distributed database is a collection of data objects spread across a number of sites which communicate with each other via messages. Each site has data objects, controllers (schedulers), and data managers. A distributed database model [4] is shown in Figure 1. Each node has one or more of the following modules: a Transaction Manager (TM), a Data Manager (DM), a scheduler (S), and a Transaction Process (T). The scheduler at each site synchronizes the transaction requests and performs deadlock detection. A transaction may request multiple data objects simultaneously; in this case, all objects must be granted before the transaction can continue.

A transaction can be initiated at one site and may then initiate one or more remote transactions at other sites. In this case the original transaction is referred to as the master transaction and the remote transactions as slave transactions. A slave can become a master by creating its own slave transactions.

Figure 1. Distributed database system model.
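A toy sketch of the per-node modules and the master/slave transaction relationship just described is given below; the class and attribute names are illustrative, not the paper's notation, and the TM, DM and scheduler state is only hinted at.

```python
# Minimal sketch of sites and master/slave transactions (illustrative names).
from __future__ import annotations
from dataclasses import dataclass, field


@dataclass
class Transaction:
    tid: str
    site: str
    master: Transaction | None = None             # None for a master transaction
    slaves: list[Transaction] = field(default_factory=list)

    def spawn_slave(self, remote_site: str, tid: str) -> Transaction:
        """Create a remote (slave) transaction acting on this transaction's behalf."""
        slave = Transaction(tid, remote_site, master=self)
        self.slaves.append(slave)
        return slave


@dataclass
class Site:
    name: str
    data_objects: set[str] = field(default_factory=set)
    # A real site would also carry its Transaction Manager (TM), Data Manager (DM)
    # and scheduler (S) state; they are omitted here.


if __name__ == "__main__":
    s1, s2 = Site("S1", {"x"}), Site("S2", {"y"})
    t1 = Transaction("T1", s1.name)               # master transaction at S1
    t1_remote = t1.spawn_slave(s2.name, "T1@S2")  # slave acting for T1 at S2
    print(t1_remote.master.tid, [t.tid for t in t1.slaves])
```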

The database is considered to be distributed among n sites, S1, S2, ..., Sn, of a computer network. Users interact with the database via transactions [10]. A transaction is a sequence of actions, each of which can be a read, write, lock, or unlock operation. If the actions of a transaction involve data at a single site, the transaction is called local, as opposed to a distributed transaction, which involves resources at several sites. It is assumed that distributed transactions are implemented as a collection of processes which act on behalf of the transaction. These processes are called transaction incarnations. There may be one or more incarnations of the same transaction at each participating site. A transaction incarnation is responsible for such things as acquiring, using, and releasing resources at the site at which it is executing, as needed by the transaction.

A transaction can be in two different states, namely active and blocked. A transaction is blocked if its execution cannot proceed because a needed resource is being held by another transaction; otherwise the transaction is active. Figure 2 shows an example of a transaction-wait-for graph for a network with three sites S1, S2, S3. This graph shows five deadlock cycles. Three of them are local deadlocks at sites S2 and S3. The others are global deadlock cycles between S2, S3 and between S1, S2.

Figure 2. TWFG for a network with three sites S1, S2 and S3.

4. Proposed Distributed Deadlock Detection Technique

The proposed technique is based on calculating the following:
- a Linear Transaction Structure (LTS) for each local site;
- a Distributed Transaction Structure (DTS) for global resource transaction communication;
- a Priority Id for each transaction in each of the sites;
- the local and global cycles;
- the abortion of the victim transaction based on the cycles.

In this technique, a Linear Transaction Structure (LTS) is maintained for the transactions of each local site of the database system, and Distributed Transaction Structures (DTS) are used to handle global deadlock among the distributed sites. Each transaction is assigned a unique Priority Id by the local Transaction Manager (TM) of its site. All the local TMs are controlled by a Global TM (GTM); the GTM also manages the TQ for the DTS. The existence of a cycle in a local LTS represents a local deadlock, and the existence of a cycle in the DTS represents a global deadlock. The proposed technique uses a graph (TWFG) to represent the transactions' data requests; this graph is maintained by a Global Scheduler (GS). The GS also collects the request information of the transactions from each local scheduler (S).

This technique ensures that global deadlock is not dependent on local deadlock; in other words, a local deadlock does not cause a global deadlock. The victim transactions (the youngest transactions) are decided, based on the priority ids in the Transaction Queue (TQ), from the detected cycles. The technique stores the id (number) of each transaction in the corresponding LTS and DTS whilst finding the local and global deadlock cycles. With this approach, no false deadlock is detected and no really deadlocked cycle is excluded.

If a transaction Tp requests a data item that is held by another transaction Tq, the technique stores the values of p and q in the linear transaction structure (LTS), where p and q represent the corresponding transaction numbers. Each row of the LTS stores a pair of values (p, q). Each site must have its own LTS. It is assumed that the total number of distributed sites is n and the total number of transaction pairs in each LTS is N.
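The excerpt does not fix a concrete representation, so the sketch below simply assumes the LTS of one site is a list of (p, q) pairs meaning "Tp waits for Tq" and searches that relation for a cycle, which is what a local deadlock amounts to here.

```python
# A minimal sketch of a per-site LTS as (p, q) pairs and of local cycle detection.

def lts_cycle(lts_pairs):
    """Return the transaction numbers forming a cycle in this site's LTS, or None."""
    waits = {}
    for p, q in lts_pairs:
        waits.setdefault(p, set()).add(q)

    def dfs(node, path):
        for nxt in waits.get(node, ()):
            if nxt in path:
                return path[path.index(nxt):]        # cycle found
            cycle = dfs(nxt, path + [nxt])
            if cycle:
                return cycle
        return None

    for start in waits:
        cycle = dfs(start, [start])
        if cycle:
            return cycle
    return None


if __name__ == "__main__":
    # T1 waits for T2, T2 waits for T3, T3 waits for T1 at one site.
    lts = [(1, 2), (2, 3), (3, 1)]
    print(lts_cycle(lts))                            # [1, 2, 3]
```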

The Distributed Transaction Structure (DTS) generally stores all the transactions which are interconnected from one site to another, that is, those which request data items from other sites. The DTS also records the intra-site requests (connectivity) of the transactions that are connected to other sites. The DTS is managed by the Data Manager (DM). Each transaction is assigned a unique priority transaction id, named PTid, by the transaction manager (TM). The transaction queue (TQ) is used to store the PTid needed for the abortion of the youngest (victim) transaction of a deadlocked cycle.
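On the global side, the following sketch (again with assumed shapes, since the excerpt stops before the full procedure is given) records cross-site waits as DTS edges between (site, transaction) pairs and, once a cycle has been found with the same kind of search used for the LTS above, uses the PTids in the TQ to pick the youngest transaction on the cycle as the victim.

```python
# Illustrative DTS edges and victim selection from the TQ of PTids.

def record_dts_edge(dts, waiting, holding):
    """Record that `waiting` = (site, txn) waits for `holding` = (site, txn)."""
    dts.setdefault(waiting, set()).add(holding)


def choose_victim(cycle, tq):
    """TQ maps (site, txn) -> PTid; here a larger PTid is taken to mean younger."""
    return max(cycle, key=lambda node: tq[node])


if __name__ == "__main__":
    dts = {}
    record_dts_edge(dts, ("S1", "T1"), ("S2", "T2"))
    record_dts_edge(dts, ("S2", "T2"), ("S1", "T1"))
    tq = {("S1", "T1"): 4, ("S2", "T2"): 9}          # T2 at S2 is the youngest
    cycle = [("S1", "T1"), ("S2", "T2")]             # as found by the cycle search
    print("abort:", choose_victim(cycle, tq))        # abort: ('S2', 'T2')
```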

To find local deadlocks, the DM starts storing transactions' requests for data items in the LTS, beginning from any transaction at any site. The TM records the priority transaction ids in the TQ for those transactions which form cycles in the LTS; it is recommended that the starting transaction in the cycle have the highest priority Id (although the starting transaction in the LTS could be any transaction). Whilst detecting global deadlock, the Data Manager (DM) first starts storing in the DTS, from any site, the interconnected transactions (those that are connected to other sites' requests). The DM then records the transaction requests from site to site. This is to give a lower priority id to a transaction's data request from one site to another site, as, in general, global deadlock cycles
