Installing and Configuring Concierge (v7.1.0+)

The aim of the Concierge feature is to help users save time locating content, either in the application or on our help site. The Concierge Portal Page is included in all releases but is disabled by default.

This article describes the steps required to enable the Concierge Portal Page on your instance.

  1. Enable Concierge Feature
  2. Configure Concierge Settings
  3. Enable Concierge App
  4. Configure Feedback Collection
  5. Use Concierge in Messengers

PREREQUISITES:

Supported models:

  • All models of the GPT-4 and GPT-4 Turbo families
  • Mistral-Large
  • Claude 3 Opus

1. Enable Concierge Feature

Access Admin > System > System Variables

  1. Find the ENABLE_CONCIERGE Variable and click the gear icon to open the Edit Variable window;
  2. Assigned value: Set to "Y" to enable the Concierge tab under Search Setup;
  3. [Save].

For more information on how to work with System Variables, refer to Setting System Variables.

2. Configure Concierge Settings

2.1. General Tab

Access Admin > System > Concierge Setup > General Tab

NOTE: Required fields can differ according to the model used.

  1. Main LLM: (required) Specify the model.
  2. Display Main LLM in Concierge: Select this checkbox to show the icon that displays the LLM model in use on hover.
  3. Large Context LLM: If necessary for processing larger content, specify the large model.
  4. LLM Deployment Name: (required for Azure) Provide the name of the Deployment.
  5. LLM API Key: (required) Enter the key obtained from your LLM service provider account.
  6. LLM API Type: (required) Define the LLM API type: either "openai" or "azure".
  7. LLM API Base URL: (required for Azure) Specify the API endpoint URL.
  8. LLM API Version: (required for Azure) Provide the LLM API version.
  9. Display "Show Search Request" icon: Select this checkbox to show the icon that displays the search request.
  10. Response Streaming Support: Select this checkbox if you do not want the process of writing the answer to be exposed to the User.
  11. Custom Instructions: Add a Dataset with custom instructions for Concierge.
  12. Mask Personal Data Sent to LLM: Enable this checkbox to tokenize personal user data before sending it to the LLM.
  13. Internationalize Concierge: Enable this checkbox to allow Users to make queries in different languages.
  14. [Update All Search Indexes].
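For Azure deployments, the LLM Deployment Name, LLM API Base URL, and LLM API Version fields combine into a single request endpoint. The sketch below shows how these values typically fit together for an Azure OpenAI chat-completions call; all field values are hypothetical placeholders, not product defaults:

```python
# Sketch: how the Azure-specific Concierge fields typically combine into a
# request URL. All values below are hypothetical placeholders.
base_url = "https://my-resource.openai.azure.com"  # LLM API Base URL
deployment = "gpt-4-turbo-prod"                    # LLM Deployment Name
api_version = "2024-02-01"                         # LLM API Version

endpoint = (
    f"{base_url}/openai/deployments/{deployment}"
    f"/chat/completions?api-version={api_version}"
)
print(endpoint)
```

If a request to the assembled endpoint fails with an authentication or 404 error, re-check the three Azure fields before anything else.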

NOTE: The search index update is a required step and must complete before Concierge will work correctly. Update the index every time you change any Concierge settings.

This step may take some time to complete, depending on how many objects you have in your Metric Insights instance. Indexing progress is shown in the Indexing in progress bar.
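The "Mask Personal Data Sent to LLM" option tokenizes personal data before it leaves your instance. Concierge's actual masking is internal to the product; the following is only a conceptual sketch of the tokenize-then-restore idea, with all names and patterns invented for illustration:

```python
import re

# Conceptual sketch of tokenizing personal data before an LLM call.
# This is NOT Metric Insights code; names and patterns are illustrative.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def mask(text: str):
    """Replace each e-mail address with a token; return masked text and a lookup."""
    lookup = {}
    def repl(match):
        token = f"<PII_{len(lookup)}>"
        lookup[token] = match.group(0)
        return token
    return EMAIL_RE.sub(repl, text), lookup

def unmask(text: str, lookup: dict) -> str:
    """Restore the original values in the LLM's answer."""
    for token, value in lookup.items():
        text = text.replace(token, value)
    return text

masked, lookup = mask("Contact jane.doe@example.com about the report.")
print(masked)  # the e-mail address is replaced by a token
print(unmask(masked, lookup))
```

The key point is that the LLM only ever sees the tokens; the mapping back to real values never leaves the instance.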

2.2. Content Sources Tab

  1. Exclude MI Help Docs: Enable this checkbox to prevent Concierge from searching the Metric Insights documentation.
  2. Include Glossary Terms: Select this option to include Glossary Terms in the search.
  3. Custom FAQ Dataset: Specify a Dataset with custom FAQs for Concierge to use.
  4. Default Out of Scope Message: When Users query something outside Concierge's scope, they receive the message provided here.
  5. External Resource Configuration: Concierge supports the use of external resources; configure them here.
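The Custom FAQ Dataset and the Default Out of Scope Message work together: a matching FAQ entry answers the query, and anything unmatched falls back to the out-of-scope message. The product's actual retrieval is LLM-based; this is only a conceptual sketch of that fallback logic, with the entries and matching invented for illustration:

```python
# Conceptual sketch of FAQ lookup with an out-of-scope fallback.
# Entries and matching logic are invented for illustration only.
faq = {
    "how do i reset my password": "Use the Forgot Password link on the login page.",
}
OUT_OF_SCOPE = "Sorry, that question is outside what Concierge can answer."

def answer(query: str) -> str:
    # Normalize the query, then fall back to the out-of-scope message.
    return faq.get(query.strip().lower().rstrip("?"), OUT_OF_SCOPE)

print(answer("How do I reset my password?"))
print(answer("What is the weather today?"))
```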

2.3. Customize Tab

In this tab, the Administrator can define the appearance of Concierge. See Concierge Sidebar Customization for details.

3. Enable Concierge App

Concierge can still be used as an App, if needed. However, it is strongly recommended not to use the Sidebar and the App simultaneously.

Detailed instructions on activating the Concierge App are provided in the Install Concierge article.

4. Configure Feedback Collection

Concierge can be configured to collect feedback from users who interact with it.

Find instructions on the feedback collection configuration in Configure Feedback Collection for Concierge.

5. Use Concierge in Messengers

Concierge can be connected to and used in the Slack and Microsoft Teams messengers. For more details, see these articles: