# Top 50 Informatica Interview Questions & Answers

> Original: [https://www.guru99.com/informatica-interview-questions.html](https://www.guru99.com/informatica-interview-questions.html)
**1\. What do you mean by Enterprise Data Warehousing?**

When the organization data is created at a single point of access, it is called enterprise data warehousing. Data can be provided with a global view to the server via a single source store. One can do periodic analysis on that same source. It gives better results, but the time required is high.

**2\. What is the difference between a database, a data warehouse and a data mart?**

A database includes a set of sensibly affiliated data, which is normally small in size as compared to a data warehouse. In a data warehouse there are assortments of all sorts of data, and data is taken out only according to the customer's needs. A data mart, on the other hand, is also a set of data, designed to cater to the needs of different domains. For instance, an organization may keep a different chunk of data for each of its departments, i.e.
sales, finance, marketing, etc.

**3\. What is meant by a domain?**

When all related relationships and nodes are covered by a sole organizational point, it is called a domain. Through this, data management can be improved.

**4\. What is the difference between a repository server and a powerhouse?**

The repository server controls the complete repository, which includes tables, charts, various procedures, etc. Its main function is to assure the integrity and consistency of the repository. A powerhouse server, on the other hand, governs the implementation of various processes in the server's database repository.
**5\. How many repositories can be created in Informatica?**

There can be any number of repositories in Informatica, but eventually it depends on the number of ports.

**6\. What is the benefit of partitioning a session?**

Partitioning a session means solo implementation sequences within the session. Its main purpose is to improve the server's operation and efficiency. Other transformations, including extractions and other outputs of single partitions, are carried out in parallel.

**7\. How are indexes created after completing the load process?**

For the purpose of creating indexes after the load process, command tasks at the session level can be used. Index-creating scripts can be brought in line with the session's workflow or the post-session implementation sequence.
Moreover, this type of index creation cannot be controlled after the load process at the transformation level.

**8\. Explain sessions. Explain how batches are used to combine executions?**

A set of instructions that needs to be implemented to convert data from a source to a target is called a session. A session can be carried out using the session manager or the pmcmd command. Batch execution can be used to combine session executions, either in a serial or in a parallel manner. Batches can have different sessions carried forward in a parallel or serial manner.

**9\.
How many sessions can one group in batches?**

One can group any number of sessions, but it would be easier for migration if the number of sessions in a batch is smaller.

**10\. Explain the difference between mapping parameter and mapping variable?**

When values change during the session's execution, it is called a mapping variable. Upon completion, the Informatica server stores the end value of a variable, and it is reused when the session restarts. Values that do not change during the session's execution are called mapping parameters. The mapping procedure explains mapping parameters and their usage. Values are allocated to these parameters before starting the session.

**11\. What is complex mapping?**

Following are the features of complex mapping:

* Difficult requirements
* Numerous transformations
* Complex business logic
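The parameter/variable distinction above can be sketched in a few lines of Python. This is not Informatica code; the file name and function are invented for illustration, assuming a variable whose end value is persisted for the next run while a parameter stays fixed for the whole run.

```python
import json
import os

STATE_FILE = "mapping_state.json"  # hypothetical stand-in for the repository's saved variable value

def run_session(rows, batch_date):
    # Mapping parameter: assigned before the session starts, never changes during the run.
    param_batch_date = batch_date

    # Mapping variable: starts from the end value saved by the previous run.
    state = {"max_id": 0}
    if os.path.exists(STATE_FILE):
        with open(STATE_FILE) as f:
            state = json.load(f)

    for row in rows:
        # The variable changes during execution...
        state["max_id"] = max(state["max_id"], row["id"])
        # ...while the parameter is only read.
        row["load_date"] = param_batch_date

    # On completion the end value of the variable is stored for reuse on restart.
    with open(STATE_FILE, "w") as f:
        json.dump(state, f)
    return state["max_id"]
```

Running the function twice shows the variable carrying its end value across runs, while each run receives its parameter fresh.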
**12\. How can one identify whether a mapping is correct or not without connecting the session?**

One can find whether the mapping is correct or not without connecting the session with the help of the debugging option.

**13\. Can one use a mapping parameter or variable created in one mapping in any other reusable transformation?**

Yes, one can, because a reusable transformation does not contain any mapplet or mapping.

**14\. Explain the use of the aggregator cache file?**

Aggregator transformations are handled in chunks of instructions during each run. It stores transitional values, which are found in local buffer memory. Aggregators provide extra cache files for storing the transformation values if extra memory is required.

**15\. Briefly describe lookup transformation?**

Lookup transformations are those transformations which have admission rights to an RDBMS-based data set.
The server makes the access faster by using the lookup tables to look at explicit table data or the database. Concluding data is achieved by matching the lookup condition for all lookup ports delivered during transformations.

**16\. What does role playing dimension mean?**

The dimensions that are utilized for playing diversified roles while remaining in the same database domain are called role playing dimensions.

**17\. How can repository reports be accessed without SQL or other transformations?**

Repository reports are established by the metadata reporter. There is no need of [SQL](/sql.html) or other transformations since it is a web app.

**18\.
What are the types of metadata that are stored in the repository?**

The types of metadata include source definitions, target definitions, mappings, mapplets, and transformations.

**19\. Explain the code page compatibility?**

When data moves from one code page to another, provided that both code pages have the same character sets, no data loss can occur. All the characters of the source page must be available in the target page. If all the characters of the source page are not present in the target page, it would be a subset, and data loss will definitely occur during transformation due to the fact that the two code pages are not compatible.

**20\. How can you validate all mappings in the repository simultaneously?**

All the mappings cannot be validated simultaneously, because only one mapping can be validated at a time.
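The code-page compatibility rule can be illustrated with Python codecs as a stand-in for Informatica code pages: moving text into a character set that contains every source character is lossless, while moving it into one that lacks a character loses data.

```python
# Source "code page": UTF-8 text containing 'é'.
source_text = "café"

# Compatible target: every source character exists in the target set, so the
# round trip is lossless.
roundtrip = source_text.encode("utf-8").decode("utf-8")

# Incompatible target: ASCII lacks 'é', so the move is lossy — the unsupported
# character is replaced and the original data cannot be recovered.
lossy = source_text.encode("ascii", errors="replace").decode("ascii")
```

Here `roundtrip` is still `"café"`, while `lossy` becomes `"caf?"`, the kind of silent corruption the compatibility check is meant to prevent.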
**21\. Briefly explain the Aggregator transformation?**

It allows one to do aggregate calculations, such as sums and averages. Unlike the Expression transformation, the calculations are done over groups of rows.

**22\. Describe Expression transformation?**

Values can be calculated in a single row before writing to the target in this form of transformation. It can be used to perform non-aggregate calculations. Conditional statements can also be tested before the output results go to the target tables.

**23\. What do you mean by filter transformation?**

It is a medium of filtering rows in a mapping. Data needs to be transformed through the filter transformation, and then the filter condition is applied. A filter transformation contains all ports of input/output, and only the rows that meet the condition can pass through the filter.
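The Filter, Expression, and Aggregator roles described above can be contrasted in a small Python sketch (plain dictionaries stand in for rows; the column names are invented for illustration):

```python
from collections import defaultdict

rows = [
    {"dept": "sales", "amount": 120.0},
    {"dept": "sales", "amount": 80.0},
    {"dept": "finance", "amount": 200.0},
    {"dept": "finance", "amount": -5.0},  # fails the filter condition
]

# Filter transformation: only rows meeting the condition pass through.
passed = [r for r in rows if r["amount"] > 0]

# Expression transformation: a non-aggregate value computed on a single row.
for r in passed:
    r["with_tax"] = round(r["amount"] * 1.2, 2)

# Aggregator transformation: a calculation done over groups of rows.
totals = defaultdict(float)
for r in passed:
    totals[r["dept"]] += r["amount"]
```

The per-row `with_tax` column is what an Expression produces; the per-department `totals` is what an Aggregator produces.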
**24\. What is Joiner transformation?**

A Joiner transformation combines two affiliated heterogeneous sources living in different locations, while a Source Qualifier transformation can combine data emerging from a common source.

**25\. What is Lookup transformation?**

It is used for looking up data in a relational table through mapping. A lookup definition from any relational database is imported from a source which has a tendency of connecting the client and server. One can use multiple lookup transformations in a mapping.

**26\. How is the Union transformation used?**

It is a diverse input group transformation which can be used to combine data from different sources. It works like the UNION ALL statement in SQL, which is used to combine the result sets of two SELECT statements.
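The difference between a Joiner-style combine (heterogeneous sources merged on a join condition) and a Union-style combine (same-layout pipelines concatenated, duplicates kept, like UNION ALL) can be sketched in Python; the source names and columns here are invented for illustration:

```python
# Joiner: two heterogeneous sources "living in different locations",
# merged on a join condition (emp_id).
flat_file_rows = [{"emp_id": 1, "name": "Ana"}, {"emp_id": 2, "name": "Raj"}]
db_rows = [{"emp_id": 1, "salary": 900}, {"emp_id": 2, "salary": 750}]

salary_by_id = {r["emp_id"]: r["salary"] for r in db_rows}
joined = [
    {**r, "salary": salary_by_id[r["emp_id"]]}
    for r in flat_file_rows
    if r["emp_id"] in salary_by_id
]

# Union (like UNION ALL): same row layout, no join condition — the pipelines
# are simply concatenated and duplicate rows are preserved.
pipeline_a = [{"emp_id": 1}, {"emp_id": 2}]
pipeline_b = [{"emp_id": 2}, {"emp_id": 3}]
unioned = pipeline_a + pipeline_b
```

Note the joined output has one row per match, while the unioned output keeps all four rows, including the duplicate `emp_id` 2.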
**27\. What do you mean by Incremental Aggregation?**

The option for incremental aggregation is enabled whenever a session is created for a mapping aggregate. Power Center performs incremental aggregation through the mapping and historical cache data to perform new aggregation calculations incrementally.

**28\. What is the difference between a connected lookup and an unconnected lookup?**

When the inputs are taken directly from other transformations in the pipeline, it is called a connected lookup. An unconnected lookup doesn't take inputs directly from other transformations, but it can be used in any transformation and can be raised as a function using an LKP expression. So it can be said that an unconnected lookup can be called multiple times in a mapping.
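To make the connected/unconnected distinction concrete, here is a minimal Python analogy (not Informatica code; `price_list` and both function names are invented): a connected lookup behaves like a fixed stage every row flows through, while an unconnected lookup behaves like a function any transformation can call, as often as needed.

```python
price_list = {"A100": 9.99, "B200": 4.50}  # hypothetical lookup table

# Connected lookup: wired into the pipeline, every row passes through it.
def connected_lookup(rows):
    for row in rows:
        row["unit_price"] = price_list.get(row["item"], 0.0)
        yield row

# Unconnected lookup: not in the pipeline; invoked like a function
# (analogous to an :LKP expression) from wherever it is needed.
def lkp_price(item_code):
    return price_list.get(item_code, 0.0)
```

The connected stage runs exactly once per row, while `lkp_price` can be called several times in one mapping, which mirrors the point made in the answer above.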
**29\. What is a mapplet?**

A recyclable object created using the Mapplet Designer is called a mapplet. It permits one to reuse transformation logic in a multitude of mappings, and it also contains a set of transformations.

**30\. Briefly define reusable transformation?**

A reusable transformation is used numerous times in a mapping. It is different from other mappings which use the transformation, since it is stored as metadata. The transformations will be nullified in the mappings whenever any change to the reusable transformation is made.

**31\.
What does update strategy mean, and what are its different options?**

Row-by-row processing is done by Informatica. Every row is inserted in the target table because it is marked as default. Update strategy is used whenever the row has to be updated or inserted based on some sequence. Moreover, the condition must be specified in the update strategy for the processed row to be marked as updated or inserted.

**32\. What is the scenario which compels the Informatica server to reject files?**

This happens when it faces DD_Reject in the update strategy transformation. It also happens when the row violates a database constraint.

**33\. What is a surrogate key?**

A surrogate key is a replacement for the natural primary key. It is a unique identification for each row in the table. It is very beneficial because the natural primary key can change, which eventually makes updates more difficult. They are always used in the form of a digit or integer.
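A surrogate key generator of the kind described above can be sketched in Python. This is a toy illustration (the class and its naming are invented), assuming the natural key is a customer email that may change while the surrogate stays stable:

```python
from itertools import count

class SurrogateKeys:
    """Hands out a stable integer key for each natural key it sees."""

    def __init__(self):
        self._counter = count(1)   # surrogate keys are plain integers
        self._assigned = {}        # natural key -> surrogate key

    def key_for(self, natural_key):
        # The surrogate never changes once assigned, even if the row's
        # other attributes (or the natural key's format) later do,
        # which is what keeps updates simple.
        if natural_key not in self._assigned:
            self._assigned[natural_key] = next(self._counter)
        return self._assigned[natural_key]
```

Asking for the same natural key twice returns the same surrogate, while each new natural key receives the next integer.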
**34\. What are the prerequisite tasks to achieve the session partition?**

In order to perform session partition, one needs to configure the session to partition source data and then install the Informatica server on a machine with multiple CPUs.

**35\. Which files are created during session runs by the Informatica server?**

During session runs, the files created are namely the errors log, bad file, workflow log, and session log.

**36\. Briefly define a session task?**

It is a chunk of instructions that guides the Power Center server about how and when to transfer data from sources to targets.

**37\.
What does a command task mean?** This task permits one or more shell commands in [Unix](/unix-linux-tutorial.html), or DOS commands in Windows, to run during the workflow.

**38\. What is a standalone command task?** This task can be used anywhere in the workflow to run shell commands.

**39\. What is meant by pre- and post-session shell commands?** A command task can be called as the pre- or post-session shell command for a session task. One can run it as a pre-session command, a post-session success command or a post-session failure command.

**40\. What is a predefined event?** It is a file-watch event. It waits for a specific file to arrive at a specific location.

**41\. How can you define a user-defined event?** A user-defined event can be described as a flow of tasks in the workflow. Events can be created and then raised as the need arises.
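The file-watch (predefined) event above is configured in the Workflow Manager rather than coded, but its behaviour can be sketched as a simple polling loop in Python:

```python
import os
import time

def wait_for_file(path, timeout=30.0, poll_interval=0.5):
    """Poll until `path` exists, mimicking a file-watch (predefined) event.

    Returns True if the file arrived within `timeout` seconds, else False.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if os.path.exists(path):
            return True
        time.sleep(poll_interval)
    return False
```

A real Event-Wait task would go on to trigger the downstream tasks once the file arrives; this sketch only models the waiting part.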
**42\. What is a workflow?** A workflow is a set of instructions that tells the server how to implement tasks.

**43\. What are the different tools in the Workflow Manager?** The different tools in the Workflow Manager are:

* Task Designer
* Worklet Designer
* Workflow Designer

**44\. Tell me any other tools for scheduling purposes other than the Workflow Manager and pmcmd?** A third-party scheduling tool such as 'CONTROL-M' can be used instead of the Workflow Manager.

**45\. What is OLAP (On-Line Analytical Processing)?** It is a method by which multi-dimensional analysis occurs.

**46\. What are the different types of OLAP? Give an example?** ROLAP e.g. BO, MOLAP e.g. Cognos, HOLAP, DOLAP.

**47\. What do you mean by worklet?** When workflow tasks are grouped in a set, it is called a worklet. Workflow tasks include timer, decision, command, event wait, mail, session, link, assignment, control etc.
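Worklets are built in the Worklet Designer, not in code, but the grouping idea (define a set of tasks once, reuse it in several workflows) can be sketched in Python; the task names here are hypothetical:

```python
def make_worklet(name, tasks):
    """Group an ordered list of task callables into one reusable unit,
    loosely analogous to a PowerCenter worklet."""
    def run():
        # Execute the grouped tasks in order, collecting their results.
        return [task() for task in tasks]
    run.__name__ = name
    return run

# Hypothetical tasks standing in for command / session / mail tasks.
nightly_load = make_worklet(
    "nightly_load",
    [lambda: "extract", lambda: "transform", lambda: "load"],
)
# nightly_load() == ["extract", "transform", "load"]
```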
**48\. What is the use of the Target Designer?** Target definitions are created with the help of the Target Designer.

**49\. Where can we find the throughput option in Informatica?** The throughput option can be found in the Workflow Monitor: right-click on the session, click on "Get Run Properties", and look under Source/Target Statistics.

**50\. What is target load order?** Target load order is specified on the basis of the source qualifiers in a mapping. If multiple source qualifiers are linked to different targets, one can specify the order in which the Informatica server loads data into the targets.
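Target load order is set in the Designer's mapping properties, but the effect (each target loaded strictly in the configured sequence, e.g. parent tables before children) can be sketched in Python with hypothetical target names:

```python
def load_targets(load_order, loaders):
    """Run target loads in the order specified, mimicking PowerCenter's
    target load order for mappings with multiple source qualifiers.

    `load_order` is a list of target names; `loaders` maps each target
    name to a zero-argument callable that performs that load.
    """
    completed = []
    for target in load_order:
        loaders[target]()          # e.g. load parent table before child
        completed.append(target)
    return completed

log = []
loaders = {
    "customers": lambda: log.append("customers"),  # parent first
    "orders":    lambda: log.append("orders"),     # child second
}
result = load_targets(["customers", "orders"], loaders)
# result == ["customers", "orders"]
```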