<ruby id="bdb3f"></ruby>

    <p id="bdb3f"><cite id="bdb3f"></cite></p>

      <p id="bdb3f"><cite id="bdb3f"><th id="bdb3f"></th></cite></p><p id="bdb3f"></p>
        <p id="bdb3f"><cite id="bdb3f"></cite></p>

          <pre id="bdb3f"></pre>
          <pre id="bdb3f"><del id="bdb3f"><thead id="bdb3f"></thead></del></pre>

          <ruby id="bdb3f"><mark id="bdb3f"></mark></ruby><ruby id="bdb3f"></ruby>
          <pre id="bdb3f"><pre id="bdb3f"><mark id="bdb3f"></mark></pre></pre><output id="bdb3f"></output><p id="bdb3f"></p><p id="bdb3f"></p>

          <pre id="bdb3f"><del id="bdb3f"><progress id="bdb3f"></progress></del></pre>

                <ruby id="bdb3f"></ruby>

                ??一站式輕松地調用各大LLM模型接口,支持GPT4、智譜、豆包、星火、月之暗面及文生圖、文生視頻 廣告
# Accessing OpenStack Swift from Spark

Spark's support for Hadoop InputFormat allows it to process data in OpenStack Swift using the same URI formats as in Hadoop. You can specify a path in Swift as input through a URI of the form `swift://container.PROVIDER/path`. You will also need to set your Swift security credentials, through `core-site.xml` or via `SparkContext.hadoopConfiguration`. The current Swift driver requires Swift to use the Keystone authentication method.
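For illustration, here is a minimal sketch of reading Swift data from `spark-shell`, assuming the `SparkTest` provider configured below and a hypothetical container named `logs` containing a file `data.txt`:

```
// Read a file from Swift; "logs" and "data.txt" are placeholder names,
// and the "SparkTest" provider must be configured as described below.
val swiftData = sc.textFile("swift://logs.SparkTest/data.txt")
println(swiftData.count())
```

# Configuring Swift for Better Data Locality

Although not mandatory, it is recommended to configure the proxy server of Swift with `list_endpoints` to have better data locality. More information is [available here](https://github.com/openstack/swift/blob/master/swift/common/middleware/list_endpoints.py).

# Dependencies

The Spark application should include the `hadoop-openstack` dependency. For example, for Maven support, add the following to the `pom.xml` file:

```
<dependencyManagement>
  ...
  <dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-openstack</artifactId>
    <version>2.3.0</version>
  </dependency>
  ...
</dependencyManagement>
```

# Configuration Parameters

Create `core-site.xml` and place it inside Spark's `conf` directory. There are two main categories of parameters that should be configured: the declaration of the Swift driver and the parameters required by Keystone.

Configuration of Hadoop to use the Swift file system is achieved via:

| Property Name | Value |
| --- | --- |
| `fs.swift.impl` | `org.apache.hadoop.fs.swift.snative.SwiftNativeFileSystem` |

Additional parameters are required by Keystone (v2.0) and should be provided to the Swift driver. These parameters are used to authenticate with Keystone in order to access Swift. The following table lists the Keystone parameters; `PROVIDER` can be any name.

| Property Name | Meaning | Required |
| --- | --- | --- |
| `fs.swift.service.PROVIDER.auth.url` | Keystone Authentication URL | Mandatory |
| `fs.swift.service.PROVIDER.auth.endpoint.prefix` | Keystone endpoints prefix | Optional |
| `fs.swift.service.PROVIDER.tenant` | Tenant | Mandatory |
| `fs.swift.service.PROVIDER.username` | Username | Mandatory |
| `fs.swift.service.PROVIDER.password` | Password | Mandatory |
| `fs.swift.service.PROVIDER.http.port` | HTTP port | Mandatory |
| `fs.swift.service.PROVIDER.region` | Keystone region | Mandatory |
| `fs.swift.service.PROVIDER.public` | Indicates whether all URLs are public | Mandatory |

For example, assume `PROVIDER=SparkTest` and Keystone contains user `tester` with password `testing` defined for tenant `test`.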
Then `core-site.xml` should include:

```
<configuration>
  <property>
    <name>fs.swift.impl</name>
    <value>org.apache.hadoop.fs.swift.snative.SwiftNativeFileSystem</value>
  </property>
  <property>
    <name>fs.swift.service.SparkTest.auth.url</name>
    <value>http://127.0.0.1:5000/v2.0/tokens</value>
  </property>
  <property>
    <name>fs.swift.service.SparkTest.auth.endpoint.prefix</name>
    <value>endpoints</value>
  </property>
  <property>
    <name>fs.swift.service.SparkTest.http.port</name>
    <value>8080</value>
  </property>
  <property>
    <name>fs.swift.service.SparkTest.region</name>
    <value>RegionOne</value>
  </property>
  <property>
    <name>fs.swift.service.SparkTest.public</name>
    <value>true</value>
  </property>
  <property>
    <name>fs.swift.service.SparkTest.tenant</name>
    <value>test</value>
  </property>
  <property>
    <name>fs.swift.service.SparkTest.username</name>
    <value>tester</value>
  </property>
  <property>
    <name>fs.swift.service.SparkTest.password</name>
    <value>testing</value>
  </property>
</configuration>
```

Notice that `fs.swift.service.PROVIDER.tenant`, `fs.swift.service.PROVIDER.username`, and `fs.swift.service.PROVIDER.password` contain sensitive information, so keeping them in `core-site.xml` is not always a good approach. We suggest keeping those parameters in `core-site.xml` only for testing purposes when running Spark via `spark-shell`. For job submissions they should be provided via `sparkContext.hadoopConfiguration`.
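As a sketch of that approach, the same Keystone parameters can be set programmatically on the Hadoop configuration before any `swift://` path is referenced; the values shown are the placeholder credentials from the example above:

```
// Set Swift credentials on the running SparkContext instead of core-site.xml.
// All values are the illustrative SparkTest settings, not real credentials.
val conf = sc.hadoopConfiguration
conf.set("fs.swift.impl", "org.apache.hadoop.fs.swift.snative.SwiftNativeFileSystem")
conf.set("fs.swift.service.SparkTest.auth.url", "http://127.0.0.1:5000/v2.0/tokens")
conf.set("fs.swift.service.SparkTest.tenant", "test")
conf.set("fs.swift.service.SparkTest.username", "tester")
conf.set("fs.swift.service.SparkTest.password", "testing")
conf.set("fs.swift.service.SparkTest.http.port", "8080")
conf.set("fs.swift.service.SparkTest.region", "RegionOne")
conf.set("fs.swift.service.SparkTest.public", "true")
```

In a real deployment the credential values would come from a secure source (for example, environment variables or a secrets store) rather than being hard-coded.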
                  <ruby id="bdb3f"></ruby>

                  <p id="bdb3f"><cite id="bdb3f"></cite></p>

                    <p id="bdb3f"><cite id="bdb3f"><th id="bdb3f"></th></cite></p><p id="bdb3f"></p>
                      <p id="bdb3f"><cite id="bdb3f"></cite></p>

                        <pre id="bdb3f"></pre>
                        <pre id="bdb3f"><del id="bdb3f"><thead id="bdb3f"></thead></del></pre>

                        <ruby id="bdb3f"><mark id="bdb3f"></mark></ruby><ruby id="bdb3f"></ruby>
                        <pre id="bdb3f"><pre id="bdb3f"><mark id="bdb3f"></mark></pre></pre><output id="bdb3f"></output><p id="bdb3f"></p><p id="bdb3f"></p>

                        <pre id="bdb3f"><del id="bdb3f"><progress id="bdb3f"></progress></del></pre>

                              <ruby id="bdb3f"></ruby>

                              哎呀哎呀视频在线观看