# Parquet Files

Parquet is a columnar format supported by many other data processing systems. Spark SQL provides support for both reading and writing Parquet files, and these files automatically preserve the schema of the original data.

### Loading Data

~~~
// sqlContext from the previous example is used in this example.
// createSchemaRDD is used to implicitly convert an RDD to a SchemaRDD.
import sqlContext.createSchemaRDD

val people: RDD[Person] = ... // An RDD of case class objects, from the previous example.

// The RDD is implicitly converted to a SchemaRDD by createSchemaRDD, allowing it to be stored using Parquet.
people.saveAsParquetFile("people.parquet")

// Read in the parquet file created above. Parquet files are self-describing so the schema is preserved.
// The result of loading a Parquet file is also a SchemaRDD.
val parquetFile = sqlContext.parquetFile("people.parquet")

// Parquet files can also be registered as tables and then used in SQL statements.
parquetFile.registerTempTable("parquetFile")
val teenagers = sqlContext.sql("SELECT name FROM parquetFile WHERE age >= 13 AND age <= 19")
teenagers.map(t => "Name: " + t(0)).collect().foreach(println)
~~~

### Configuration

Parquet can be configured using the setConf method on SQLContext, or by running a `SET key=value` command in SQL.

| Property Name | Default | Meaning |
|-----|-----|-----|
| spark.sql.parquet.binaryAsString | false | Some other Parquet-producing systems, in particular Impala and other versions of Spark SQL, do not differentiate between binary data and strings when writing out the Parquet schema. This flag tells Spark SQL to interpret binary data as strings to provide compatibility with these systems. |
| spark.sql.parquet.cacheMetadata | true | Turns on caching of Parquet metadata, which can speed up queries on static data. |
| spark.sql.parquet.compression.codec | gzip | Sets the compression codec used when writing Parquet files. Acceptable values include: uncompressed, snappy, gzip, lzo. |
| spark.sql.parquet.filterPushdown | false | Turns on the Parquet filter pushdown optimization. This feature is off by default because of a known bug in Parquet. However, if your table contains no nullable string or binary columns, it is still safe to turn it on. |
| spark.sql.hive.convertMetastoreParquet | true | When set to false, Spark SQL will use the Hive SerDe instead of the built-in support. |
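As a minimal sketch of the two configuration styles, reusing the `sqlContext` from the examples above, either call below sets the same property (the choice of `snappy` here is just an illustration):

~~~
// Programmatic configuration via setConf on the SQLContext:
sqlContext.setConf("spark.sql.parquet.compression.codec", "snappy")

// The equivalent setting issued as a SQL SET command:
sqlContext.sql("SET spark.sql.parquet.compression.codec=snappy")
~~~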