# Graph Operators

Just as RDDs have basic operations such as map, filter, and reduceByKey, property graphs have a collection of basic operators that take user-defined functions and produce new graphs with transformed properties and structure. The core operators with optimized implementations are defined in [Graph](http://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.graphx.Graph), and convenient operators expressed as compositions of the core operators are defined in [GraphOps](http://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.graphx.GraphOps). However, thanks to Scala implicit conversions, the operators in `GraphOps` are automatically available as members of `Graph`. For example, we can compute the in-degree of each vertex (defined in GraphOps) as follows:

~~~
val graph: Graph[(String, String), String]
// Use the implicit GraphOps.inDegrees operator
val inDegrees: VertexRDD[Int] = graph.inDegrees
~~~

The reason for distinguishing between the core graph operations and `GraphOps` is to support different graph representations in the future. Each graph representation must provide implementations of the core operations and can reuse many of the useful operations defined in `GraphOps`.

### Summary List of Operators

The following is a quick overview of the functionality defined in both `Graph` and `GraphOps` (presented, for simplicity, as members of Graph). Note that some function signatures have been simplified (e.g., default arguments and type constraints have been removed) and some more advanced functionality has been omitted, so consult the API docs for the official list of operations.

~~~
/** Summary of the functionality in the property graph */
class Graph[VD, ED] {
  // Information about the Graph ===================================================================
  val numEdges: Long
  val numVertices: Long
  val inDegrees: VertexRDD[Int]
  val outDegrees: VertexRDD[Int]
  val degrees: VertexRDD[Int]
  // Views of the graph as collections =============================================================
  val vertices: VertexRDD[VD]
  val edges: EdgeRDD[ED]
  val triplets: RDD[EdgeTriplet[VD, ED]]
  // Functions for caching graphs ==================================================================
  def persist(newLevel: StorageLevel = StorageLevel.MEMORY_ONLY): Graph[VD, ED]
  def cache(): Graph[VD, ED]
  def unpersistVertices(blocking: Boolean = true): Graph[VD, ED]
  // Change the partitioning heuristic =============================================================
  def partitionBy(partitionStrategy: PartitionStrategy): Graph[VD, ED]
  // Transform vertex and edge attributes ==========================================================
  def mapVertices[VD2](map: (VertexID, VD) => VD2): Graph[VD2, ED]
  def mapEdges[ED2](map: Edge[ED] => ED2): Graph[VD, ED2]
  def mapEdges[ED2](map: (PartitionID, Iterator[Edge[ED]]) => Iterator[ED2]): Graph[VD, ED2]
  def mapTriplets[ED2](map: EdgeTriplet[VD, ED] => ED2): Graph[VD, ED2]
  def mapTriplets[ED2](map: (PartitionID, Iterator[EdgeTriplet[VD, ED]]) => Iterator[ED2])
    : Graph[VD, ED2]
  // Modify the graph structure ====================================================================
  def reverse: Graph[VD, ED]
  def subgraph(
      epred: EdgeTriplet[VD,ED] => Boolean = (x => true),
      vpred: (VertexID, VD) => Boolean = ((v, d) => true))
    : Graph[VD, ED]
  def mask[VD2, ED2](other: Graph[VD2, ED2]): Graph[VD, ED]
  def groupEdges(merge: (ED, ED) => ED): Graph[VD, ED]
  // Join RDDs with the graph ======================================================================
  def joinVertices[U](table: RDD[(VertexID, U)])(mapFunc: (VertexID, VD, U) => VD): Graph[VD, ED]
  def outerJoinVertices[U, VD2](other: RDD[(VertexID, U)])
      (mapFunc: (VertexID, VD, Option[U]) => VD2)
    : Graph[VD2, ED]
  // Aggregate information about adjacent triplets =================================================
  def collectNeighborIds(edgeDirection: EdgeDirection): VertexRDD[Array[VertexID]]
  def collectNeighbors(edgeDirection: EdgeDirection): VertexRDD[Array[(VertexID, VD)]]
  def aggregateMessages[Msg: ClassTag](
      sendMsg: EdgeContext[VD, ED, Msg] => Unit,
      mergeMsg: (Msg, Msg) => Msg,
      tripletFields: TripletFields = TripletFields.All)
    : VertexRDD[Msg]
  // Iterative graph-parallel computation ==========================================================
  def pregel[A](initialMsg: A, maxIterations: Int, activeDirection: EdgeDirection)(
      vprog: (VertexID, VD, A) => VD,
      sendMsg: EdgeTriplet[VD, ED] => Iterator[(VertexID, A)],
      mergeMsg: (A, A) => A)
    : Graph[VD, ED]
  // Basic graph algorithms ========================================================================
  def pageRank(tol: Double, resetProb: Double = 0.15): Graph[Double, Double]
  def connectedComponents(): Graph[VertexID, ED]
  def triangleCount(): Graph[Int, ED]
  def stronglyConnectedComponents(numIter: Int): Graph[VertexID, ED]
}
~~~

### Property Operators

Like the RDD `map` operator, the property graph contains the following:

~~~
class Graph[VD, ED] {
  def mapVertices[VD2](map: (VertexId, VD) => VD2): Graph[VD2, ED]
  def mapEdges[ED2](map: Edge[ED] => ED2): Graph[VD, ED2]
  def mapTriplets[ED2](map: EdgeTriplet[VD, ED] => ED2): Graph[VD, ED2]
}
~~~

Each of these operators yields a new graph with the vertex or edge properties modified by the user-defined map function. Note that in each case the graph structure is unaffected. This is a key feature of these operators: it allows the resulting graph to reuse the structural indices of the original graph. The following two lines of code are logically equivalent, but the first one does not preserve the structural indices and therefore does not benefit from the GraphX system optimizations:

~~~
val newVertices = graph.vertices.map { case (id, attr) => (id, mapUdf(id, attr)) }
val newGraph = Graph(newVertices, graph.edges)
~~~

Instead, use [mapVertices](http://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.graphx.Graph@mapVertices[VD2]((VertexId,VD)?VD2)(ClassTag[VD2]):Graph[VD2,ED]) to preserve the indices:

~~~
val newGraph = graph.mapVertices((id, attr) => mapUdf(id, attr))
~~~

These operators are often used to initialize a graph for a particular computation or to project away properties that are not needed. For example, given a graph whose vertex property is the out-degree, we can initialize it for PageRank:

~~~
// Given a graph where the vertex property is the out degree
val inputGraph: Graph[Int, String] =
  graph.outerJoinVertices(graph.outDegrees)((vid, _, degOpt) => degOpt.getOrElse(0))
// Construct a graph where each edge contains the weight
// and each vertex is the initial PageRank
val outputGraph: Graph[Double, Double] =
  inputGraph.mapTriplets(triplet => 1.0 / triplet.srcAttr).mapVertices((id, _) => 1.0)
~~~
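`mapEdges` follows the same pattern. As a minimal sketch (the `labeledGraph` value and its attribute types are hypothetical, not from the original text), it transforms only the edge attributes while leaving the structure, and therefore the structural indices, untouched:

~~~
// Hypothetical graph whose edge property is a String label
val labeledGraph: Graph[(String, String), String] = ...
// Replace each edge label with its length; vertices and structure are reused
val labelLengths: Graph[(String, String), Int] = labeledGraph.mapEdges(e => e.attr.length)
~~~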
### Structural Operators

Currently GraphX supports only a simple set of commonly used structural operators. The following is a list of the basic structural operators:

~~~
class Graph[VD, ED] {
  def reverse: Graph[VD, ED]
  def subgraph(epred: EdgeTriplet[VD,ED] => Boolean,
               vpred: (VertexId, VD) => Boolean): Graph[VD, ED]
  def mask[VD2, ED2](other: Graph[VD2, ED2]): Graph[VD, ED]
  def groupEdges(merge: (ED, ED) => ED): Graph[VD,ED]
}
~~~

The [reverse](http://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.graphx.Graph@reverse:Graph[VD,ED]) operator returns a new graph with all the edge directions reversed. This can be useful, for example, when trying to compute the inverse PageRank. Because the reverse operation does not modify vertex or edge properties or change the number of edges, it can be implemented efficiently without data movement or duplication.
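As a minimal sketch of the inverse-PageRank use case (the tolerance value here is an arbitrary choice, not from the original text), `reverse` composes directly with the algorithms in the summary list above:

~~~
// Rank vertices by the edges they point to rather than the edges pointing at them
val reversedRanks: Graph[Double, Double] = graph.reverse.pageRank(0.0001)
~~~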
The [subgraph](http://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.graphx.Graph@subgraph((EdgeTriplet[VD,ED])?Boolean,(VertexId,VD)?Boolean):Graph[VD,ED]) operator takes vertex and edge predicates and returns the graph containing only the vertices that satisfy the vertex predicate and the edges that satisfy the edge predicate and connect vertices that satisfy the vertex predicate. The `subgraph` operator can be used in a number of situations, such as restricting the graph to the vertices and edges of interest or eliminating broken links. The following example removes broken links:

~~~
// Create an RDD for the vertices
val users: RDD[(VertexId, (String, String))] =
  sc.parallelize(Array((3L, ("rxin", "student")), (7L, ("jgonzal", "postdoc")),
                       (5L, ("franklin", "prof")), (2L, ("istoica", "prof")),
                       (4L, ("peter", "student"))))
// Create an RDD for edges
val relationships: RDD[Edge[String]] =
  sc.parallelize(Array(Edge(3L, 7L, "collab"), Edge(5L, 3L, "advisor"),
                       Edge(2L, 5L, "colleague"), Edge(5L, 7L, "pi"),
                       Edge(4L, 0L, "student"), Edge(5L, 0L, "colleague")))
// Define a default user in case there are relationships with a missing user
val defaultUser = ("John Doe", "Missing")
// Build the initial Graph
val graph = Graph(users, relationships, defaultUser)
// Notice that there is a user 0 (for which we have no information) connected to users
// 4 (peter) and 5 (franklin).
graph.triplets.map(
  triplet => triplet.srcAttr._1 + " is the " + triplet.attr + " of " + triplet.dstAttr._1
).collect.foreach(println(_))
// Remove missing vertices as well as the edges connected to them
val validGraph = graph.subgraph(vpred = (id, attr) => attr._2 != "Missing")
// The valid subgraph will disconnect users 4 and 5 by removing user 0
validGraph.vertices.collect.foreach(println(_))
validGraph.triplets.map(
  triplet => triplet.srcAttr._1 + " is the " + triplet.attr + " of " + triplet.dstAttr._1
).collect.foreach(println(_))
~~~

Note that in the above example only the vertex predicate is provided. The `subgraph` operator defaults to true if a vertex or edge predicate is not supplied.

The [mask](http://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.graphx.Graph@mask[VD2,ED2](Graph[VD2,ED2])(ClassTag[VD2],ClassTag[ED2]):Graph[VD,ED]) operator constructs a subgraph containing the vertices and edges that are also found in the input graph. This can be used in conjunction with `subgraph` to restrict a graph based on the properties of another, related graph. For example, we might run connected components using the graph with the missing vertices and then restrict the answer to the valid subgraph:

~~~
// Run Connected Components
val ccGraph = graph.connectedComponents() // No longer contains missing field
// Remove missing vertices as well as the edges connected to them
val validGraph = graph.subgraph(vpred = (id, attr) => attr._2 != "Missing")
// Restrict the answer to the valid subgraph
val validCCGraph = ccGraph.mask(validGraph)
~~~

The [groupEdges](http://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.graphx.Graph@groupEdges((ED,ED)?ED):Graph[VD,ED]) operator merges parallel edges (i.e., duplicate edges between pairs of vertices) in the multigraph. In many applications, parallel edges can be merged (with their weights combined) into a single edge, thereby reducing the size of the graph.
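As a minimal sketch (the `multigraph` value is hypothetical), note that `groupEdges` only merges parallel edges that are co-located on the same partition, so the graph is typically repartitioned with `partitionBy` first:

~~~
// Hypothetical multigraph with Double edge weights
val multigraph: Graph[Int, Double] = ...
// Co-locate parallel edges, then merge them by summing their weights
val mergedGraph: Graph[Int, Double] = multigraph
  .partitionBy(PartitionStrategy.RandomVertexCut)
  .groupEdges((a, b) => a + b)
~~~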
### Join Operators

In many cases it is necessary to join data from external collections with the graph. For example, we might have extra user properties that we want to merge into an existing graph, or we might want to pull vertex properties from one graph into another. These tasks can be accomplished using the join operators. Below we list the key join operators:

~~~
class Graph[VD, ED] {
  def joinVertices[U](table: RDD[(VertexId, U)])(map: (VertexId, VD, U) => VD)
    : Graph[VD, ED]
  def outerJoinVertices[U, VD2](table: RDD[(VertexId, U)])(map: (VertexId, VD, Option[U]) => VD2)
    : Graph[VD2, ED]
}
~~~

The [joinVertices](http://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.graphx.GraphOps@joinVertices[U](RDD[(VertexId,U)])((VertexId,VD,U)?VD)(ClassTag[U]):Graph[VD,ED]) operator joins the input RDD with the vertices and returns a new graph whose vertex properties are obtained by applying the user-defined `map` function to the result of the joined vertices. Vertices without a matching value in the RDD retain their original value.

Note that if the RDD contains more than one value for a given vertex, only one of them will be used. It is therefore recommended that the input RDD be made unique using the following technique, which will also pre-index the resulting values to speed up the subsequent join:

~~~
val nonUniqueCosts: RDD[(VertexID, Double)]
val uniqueCosts: VertexRDD[Double] =
  graph.vertices.aggregateUsingIndex(nonUniqueCosts, (a, b) => a + b)
val joinedGraph = graph.joinVertices(uniqueCosts)(
  (id, oldCost, extraCost) => oldCost + extraCost)
~~~

The more general [outerJoinVertices](http://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.graphx.Graph@outerJoinVertices[U,VD2](RDD[(VertexId,U)])((VertexId,VD,Option[U])?VD2)(ClassTag[U],ClassTag[VD2]):Graph[VD2,ED]) behaves like `joinVertices` except that the user-defined map function is applied to all vertices and can change the vertex property type. Because not every vertex may have a matching value in the input RDD, the map function takes an Option type:

~~~
val outDegrees: VertexRDD[Int] = graph.outDegrees
val degreeGraph = graph.outerJoinVertices(outDegrees) { (id, oldAttr, outDegOpt) =>
  outDegOpt match {
    case Some(outDeg) => outDeg
    case None => 0 // No outDegree means zero outDegree
  }
}
~~~

You may have noticed the multiple parameter list (i.e., f(a)(b)) curried pattern used in the examples above. While we could have equally written f(a)(b) as f(a,b), that would mean that type inference on b would not depend on a. As a consequence, the user would need to provide type annotations for the user-defined function:

~~~
val joinedGraph = graph.joinVertices(uniqueCosts,
  (id: VertexID, oldCost: Double, extraCost: Double) => oldCost + extraCost)
~~~

### Neighborhood Aggregation

A key step in many graph analytics tasks is aggregating information about the neighborhood of each vertex. For example, we might want to know the number of followers each user has or the average age of the followers of each user. Many iterative graph algorithms (e.g., PageRank, shortest path, and connected components) repeatedly aggregate properties of neighboring vertices.

To improve performance, the primary aggregation operator changed from `graph.mapReduceTriplets` to the new `graph.aggregateMessages`. While the changes in the API are relatively small, we provide a transition guide below.

### Aggregate Messages (aggregateMessages)

The core aggregation operation in GraphX is [aggregateMessages](http://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.graphx.Graph@aggregateMessages[A]((EdgeContext[VD,ED,A])?Unit,(A,A)?A,TripletFields)(ClassTag[A]):VertexRDD[A]). This operator applies a user-defined `sendMsg` function to each edge triplet in the graph and then uses the `mergeMsg` function to aggregate those messages at their destination vertex.

~~~
class Graph[VD, ED] {
  def aggregateMessages[Msg: ClassTag](
      sendMsg: EdgeContext[VD, ED, Msg] => Unit,
      mergeMsg: (Msg, Msg) => Msg,
      tripletFields: TripletFields = TripletFields.All)
    : VertexRDD[Msg]
}
~~~

The user-defined `sendMsg` function takes an [EdgeContext](http://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.graphx.EdgeContext), which exposes the source and destination attributes along with the edge attribute, and functions ([sendToSrc](http://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.graphx.EdgeContext@sendToSrc(msg:A):Unit) and [sendToDst](http://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.graphx.EdgeContext@sendToDst(msg:A):Unit)) to send messages to the source and destination vertices. Think of `sendMsg` as the map function in map-reduce. The user-defined `mergeMsg` function takes two messages destined for the same vertex and combines them into a single message; think of `mergeMsg` as the reduce function in map-reduce. The [aggregateMessages](http://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.graphx.Graph@aggregateMessages[A]((EdgeContext[VD,ED,A])?Unit,(A,A)?A,TripletFields)(ClassTag[A]):VertexRDD[A]) operator returns a `VertexRDD[Msg]` containing the aggregate message destined for each vertex. Vertices that did not receive a message are not included in the returned `VertexRDD`.

In addition, [aggregateMessages](http://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.graphx.Graph@aggregateMessages[A]((EdgeContext[VD,ED,A])?Unit,(A,A)?A,TripletFields)(ClassTag[A]):VertexRDD[A]) takes an optional `tripletFields` parameter, which indicates what data is accessed in the `EdgeContext` (e.g., the source vertex attribute but not the destination vertex attribute). The possible options for `tripletFields` are defined in [TripletFields](http://spark.apache.org/docs/latest/api/java/org/apache/spark/graphx/TripletFields.html). The `tripletFields` argument tells GraphX that only part of the `EdgeContext` will be needed, allowing GraphX to select an optimized join strategy. For example, if we are computing the average age of the followers of each user, we only require the source field, so we would use `TripletFields.Src` to indicate that.
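As a small illustrative sketch (not from the original text), a `sendMsg` function that reads no attributes at all can declare `TripletFields.None`, so that no vertex data needs to be shipped to the edge partitions:

~~~
// Count the followers of each user; the message is a constant
val numFollowers: VertexRDD[Int] = graph.aggregateMessages[Int](
  ctx => ctx.sendToDst(1), // map: each in-edge contributes one follower
  _ + _,                   // reduce: sum the counts
  TripletFields.None       // no vertex or edge attributes are accessed
)
~~~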
In the following example we use the `aggregateMessages` operator to compute the average age of the more senior followers of each user:

~~~
// Import random graph generation library
import org.apache.spark.graphx.util.GraphGenerators
// Create a graph with "age" as the vertex property. Here we use a random graph for simplicity.
val graph: Graph[Double, Int] =
  GraphGenerators.logNormalGraph(sc, numVertices = 100).mapVertices( (id, _) => id.toDouble )
// Compute the number of older followers and their total age
val olderFollowers: VertexRDD[(Int, Double)] = graph.aggregateMessages[(Int, Double)](
  triplet => { // Map Function
    if (triplet.srcAttr > triplet.dstAttr) {
      // Send message to destination vertex containing counter and age
      triplet.sendToDst((1, triplet.srcAttr))
    }
  },
  // Add counter and age
  (a, b) => (a._1 + b._1, a._2 + b._2) // Reduce Function
)
// Divide total age by number of older followers to get average age of older followers
val avgAgeOfOlderFollowers: VertexRDD[Double] =
  olderFollowers.mapValues( (id, value) =>
    value match { case (count, totalAge) => totalAge / count } )
// Display the results
avgAgeOfOlderFollowers.collect.foreach(println(_))
~~~

The `aggregateMessages` operation performs optimally when the messages (and the sums of messages) are constant sized (e.g., floats and addition instead of lists and concatenation).

### Map Reduce Triplets Transition Guide

In earlier versions of GraphX, neighborhood aggregation was accomplished using the `mapReduceTriplets` operator:

~~~
class Graph[VD, ED] {
  def mapReduceTriplets[Msg](
      map: EdgeTriplet[VD, ED] => Iterator[(VertexId, Msg)],
      reduce: (Msg, Msg) => Msg)
    : VertexRDD[Msg]
}
~~~

The `mapReduceTriplets` operator applies the user-defined map function to each triplet and then aggregates the resulting messages using the user-defined reduce function. However, we found the user-returned iterator to be expensive, and it inhibited our ability to apply additional optimizations (e.g., local vertex renumbering). [aggregateMessages](http://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.graphx.Graph@aggregateMessages[A]((EdgeContext[VD,ED,A])?Unit,(A,A)?A,TripletFields)(ClassTag[A]):VertexRDD[A]) instead exposes the triplet fields through functions that explicitly send messages to the source and destination vertices. Furthermore, we removed bytecode inspection and instead require the user to indicate which fields of the triplet are actually needed.

The following code block uses `mapReduceTriplets`:

~~~
val graph: Graph[Int, Float] = ...
def msgFun(triplet: EdgeTriplet[Int, Float]): Iterator[(VertexId, String)] = {
  Iterator((triplet.dstId, "Hi"))
}
def reduceFun(a: String, b: String): String = a + b
val result = graph.mapReduceTriplets[String](msgFun, reduceFun)
~~~

and can be rewritten using `aggregateMessages`:

~~~
val graph: Graph[Int, Float] = ...
def msgFun(triplet: EdgeContext[Int, Float, String]): Unit = {
  triplet.sendToDst("Hi")
}
def reduceFun(a: String, b: String): String = a + b
val result = graph.aggregateMessages[String](msgFun, reduceFun)
~~~

### Computing Degree Information

One of the most common aggregation tasks is computing the degree of each vertex: the number of edges adjacent to each vertex. In the context of directed graphs it is often necessary to know the in-degree, out-degree, and total degree of each vertex. The [GraphOps](http://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.graphx.GraphOps) class contains a collection of operators to compute the degrees of each vertex. For example, the following computes the max in-degree, out-degree, and total degree:

~~~
// Define a reduce operation to compute the highest degree vertex
def max(a: (VertexId, Int), b: (VertexId, Int)): (VertexId, Int) = {
  if (a._2 > b._2) a else b
}
// Compute the max degrees
val maxInDegree: (VertexId, Int)  = graph.inDegrees.reduce(max)
val maxOutDegree: (VertexId, Int) = graph.outDegrees.reduce(max)
val maxDegrees: (VertexId, Int)   = graph.degrees.reduce(max)
~~~

### Collecting Neighbors

In some cases it may be easier to express a computation by collecting the neighboring vertices and their attributes at each vertex. This can be easily accomplished with the [collectNeighborIds](http://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.graphx.GraphOps@collectNeighborIds(EdgeDirection):VertexRDD[Array[VertexId]]) and [collectNeighbors](http://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.graphx.GraphOps@collectNeighbors(EdgeDirection):VertexRDD[Array[(VertexId,VD)]]) operators:

~~~
class GraphOps[VD, ED] {
  def collectNeighborIds(edgeDirection: EdgeDirection): VertexRDD[Array[VertexId]]
  def collectNeighbors(edgeDirection: EdgeDirection): VertexRDD[ Array[(VertexId, VD)] ]
}
~~~

These operators can be quite costly, as they duplicate information and require substantial communication. If possible, express the same computation using the `aggregateMessages` operator directly.
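As a minimal usage sketch (the variable name is our own), collecting the neighbor IDs along both edge directions looks like this:

~~~
// Collect the IDs of all neighbors along either edge direction
val neighborIds: VertexRDD[Array[VertexId]] = graph.collectNeighborIds(EdgeDirection.Either)
~~~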
### Caching and Uncaching

In Spark, RDDs are not cached by default. To avoid recomputation, they must be explicitly cached when they are used multiple times. Graphs in GraphX behave the same way. When using a graph multiple times, make sure to call `Graph.cache()` on it first.

In iterative computations, uncaching may also be necessary for the best performance. By default, cached RDDs and graphs remain in memory until memory pressure forces them to be evicted in LRU order. For iterative computations, intermediate results from previous iterations will fill up the cache. Though they will eventually be evicted, the unnecessary data stored in memory slows down garbage collection. It is more efficient to uncache intermediate results as soon as they are no longer needed. This involves materializing a graph or RDD in every iteration, uncaching all other datasets, and only using the materialized dataset in future iterations. However, because graphs are composed of multiple RDDs, it can be difficult to unpersist them correctly. For iterative computation we recommend using the Pregel API, which correctly unpersists intermediate results.
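A minimal sketch of the caching pattern (the downstream computations are just illustrations, not from the original text): cache the graph once, then reuse it for several operations:

~~~
// Cache the graph before reusing it in several computations
val cached = graph.cache()
val ranks = cached.pageRank(0.0001).vertices
val maxDegree = cached.degrees.reduce((a, b) => if (a._2 > b._2) a else b)
~~~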