Research Mode: Search Filtering API
Overview
The GPT-Enhanced Search Filtering feature automatically improves search relevance by using AI to understand the intent and context behind your questions. When enabled, the system extracts key entities (locations, brands, organizations, dates, etc.) and keywords from your query, then builds a more precise internal search query to return the most relevant answers.
This feature is especially beneficial for research workflows requiring contextual accuracy.
Base URL
All requests must be sent to https://ask.lucy.ai or to your customer-specific answer engine base URL.
Feature Benefits
- Improved Relevance: Results automatically prioritize the most relevant content
- Context Awareness: AI understands intent and extracts important entities
- Automatic Query Refinement: No manual tuning required
- Entity Recognition: Identifies and boosts entities such as locations, brands, organizations
Enabling researchTypeMode Search Filtering
Prerequisites
- Valid Lucy user credentials with an active X-Auth-Token
- researchTypeMode enabled for your company/instance
- AI model configuration completed by Answer Engine team
Activation
Enable researchTypeMode filtering by adding the parameter: researchTypeMode=2 or researchTypeMode=3
Any other value or omission results in standard (non-GPT) search.
API Endpoint Details
- Endpoint: https://ask.lucy.ai/api/qna/answers
- Method: GET
Request Parameters
Required Parameters
| Parameter | Type | Description | Example |
|---|---|---|---|
| q | string | The search question | What are the sales figures for New York office? |
| source | string | Source identifier | lucy |
GPT Parameter
| Parameter | Type | Description | Values |
|---|---|---|---|
| researchTypeMode | string | Enables GPT filtering | 2 or 3 = enabled; any other value or omission = disabled |
Additional Parameters
| Parameter | Type | Description |
|---|---|---|
| selected_answers_limit | integer | Maximum number of results |
| selectedLanguage | string | Language code |
| view | string | answer / other views |
| saveHistory | boolean | Save to user history |
| tempQuestionId | string | Temporary question ID |
| selected_solr_companies | string | Source ID filters |
| selected_file_types | string | File type filters (pdf, docx, etc.) |
| time_from / time_to | string | Date range |
| selected_brands | string | Brand filters |
| selected_locations | string | Location filters |
Request Examples
Example 1: Basic Search with GPT Filtering
GET https://ask.lucy.ai/api/qna/answers?q=What+are+the+sales+figures+for+New+York+office+in+Q4&source=lucy&researchTypeMode=2&selected_answers_limit=10&view=answer&saveHistory=true

or, equivalently, with researchTypeMode=3:

GET https://ask.lucy.ai/api/qna/answers?q=What+are+the+sales+figures+for+New+York+office+in+Q4&source=lucy&researchTypeMode=3&selected_answers_limit=10&view=answer&saveHistory=true
Example 2: With Additional Filters
GET https://ask.lucy.ai/api/qna/answers?q=What+are+the+marketing+strategies+for+PepsiCo+products&source=lucy&researchTypeMode=3&selected_answers_limit=20&selected_file_types=pdf,docx&time_from=2024-01-01&time_to=2024-12-31
Example 3: Standard Search (No GPT)
GET https://ask.lucy.ai/api/qna/answers?q=What+are+the+sales+figures+for+New+York+office&source=lucy
Note: Omitting researchTypeMode uses standard search.
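The same calls can be made from code. The following is a minimal Python sketch using the requests library; the base URL, the X-Auth-Token placeholder, and the question text are illustrative values you would replace with your own instance details.

import requests

BASE_URL = "https://ask.lucy.ai"            # or your customer-specific base URL
AUTH_TOKEN = "<your-active-X-Auth-Token>"   # placeholder credential

params = {
    "q": "What are the sales figures for New York office in Q4",
    "source": "lucy",
    "researchTypeMode": "3",        # 2 or 3 enables GPT filtering; omit for standard search
    "selected_answers_limit": 10,
    "view": "answer",
    "saveHistory": "true",
}

response = requests.get(
    f"{BASE_URL}/api/qna/answers",
    params=params,
    headers={"X-Auth-Token": AUTH_TOKEN},
    timeout=30,
)
response.raise_for_status()
data = response.json()
print(f"Returned {len(data.get('answers', []))} answers")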
Response Format
The API response format remains the same whether GPT filtering is enabled or not. The difference is in the relevance and quality of the results returned.
Example Response
{
"answers": [
{
"AnswerID": "118034592_18",
"MD5": "",
"Title": "Lucys an answer engine, and so you can ask her a natural language question and she brings back,... <keyword> Lucys an answer engine, and so you can ask her a natural language question and she brings back, you know, kind of that, that answer. </keyword> ",
"Text": "<lucy-player video=\"1ba0609387\" time=\"135\" auGeneratedFrom=\"transcript\"></lucy-player>",
"Confidence": 0.0,
"ExpertRating": 0.0,
"TrainingCount": 2,
"Company": "003ux",
"Source": "[ObjectStoreURL]/l2-003ux/PepsiCoHRWebinar.mp4?bsaccount=1f441773-53bb-478e-9969-f71d3eaf6791&bsid=eyJzaXRlIjoiZXF1YWxzM2FpLnNoYXJlcG9pbnQuY29tLDliZTQzNzcyLTU3MmItNDU0OC1hOGJkLWY2MGEyYTZmMzkxMyxkMTYyZWFkYi1jNjg0LTQ1NzQtYmU2Mi1hNWFlMzExNmJkZjEiLCJkcml2ZSI6ImIhY2pma215dFhTRVdvdmZZS0ttODVFOXZxWXRHRXhuUkZ2bUtscmpFV3ZmR1JlNkxBM1l5aFJwNndaaUNaZFhhcSIsImZpbGUiOiIwMTY1Q1oyQzc0NERBUldLTzNYWkJaTVBGVldVVVg2QzZPIn0=",
"Cite": "HR Docs and Policy",
"Answer_Concepts": "none",
"Answer_Taxonomy": "none",
"Answer_Keywords": "none",
"Filter3": "18of523",
"author_name": "",
"FileName": "PepsiCoHRWebinar.mp4",
"currentPageNumber": 18,
"totalpageCount": 523,
"Description": "",
"Language": "",
"Topic": "",
"assetDetailsUrl": "",
"section": "",
"V2Passage": "",
"isGPS": false,
"isThirdPartySource": false,
"shouldShowSourceNameInChat": false,
"answer_locations": "",
"answer_brands": "",
"answer_persons": "Lucy",
"combinedData": "So we we dont use the word search engine.. \nLucys an answer engine, and so you can ask her a natural language question and she brings back, you know, kind of that, that answer.. \nHow do where And again with opportunity of those kinds of things.. \nWhats also really cool as we step into this is were enabling that inside of chat or other places.. \nSo if youre using Teams or Slack where you can actually ask those same questions and teams or Slack and get an answer, so.. \n",
"userSelectedDate": null,
"TagData": null,
"Taxonomies": [],
"CustomTaxonomies": [],
"Concepts": [],
"Entities": [],
"DiscoveryConcepts": [],
"DiscoveryTaxonomies": [],
"DiscoveryKeywords": [],
"DiscoveryEntities": [],
"CompanyandSource": [],
"documentDate": "2022-07-21T18:45:35Z",
"createdDate": "2023-10-24T05:49:20Z",
"updatedDate": "2022-08-04T20:17:54Z",
"categories": "",
"Passage": "Lucys an answer engine, and so you can ask her a natural language question and she brings back,... <keyword> Lucys an answer engine, and so you can ask her a natural language question and she brings back, you know, kind of that, that answer. </keyword> ",
"meta": {
"site_id": "eyJzaXRlIjoiZXF1YWxzM2FpLnNoYXJlcG9pbnQuY29tLDliZTQzNzcyLTU3MmItNDU0OC1hOGJkLWY2MGEyYTZmMzkxMyxkMTYyZWFkYi1jNjg0LTQ1NzQtYmU2Mi1hNWFlMzExNmJkZjEifQ==",
"site_name": "Lucy Demo content",
"site_url": "https://equals3ai.sharepoint.com/sites/LucyDemo",
"parent_id": "eyJzaXRlIjoiZXF1YWxzM2FpLnNoYXJlcG9pbnQuY29tLDliZTQzNzcyLTU3MmItNDU0OC1hOGJkLWY2MGEyYTZmMzkxMyxkMTYyZWFkYi1jNjg0LTQ1NzQtYmU2Mi1hNWFlMzExNmJkZjEiLCJkcml2ZSI6ImIhY2pma215dFhTRVdvdmZZS0ttODVFOXZxWXRHRXhuUkZ2bUtscmpFV3ZmR1JlNkxBM1l5aFJwNndaaUNaZFhhcSIsImZpbGUiOiIwMTY1Q1oyQzdIWTJCVklHQ0dGUkNZWURMR1RMWU9LRFlZIn0=",
"parent_name": "HR Docs",
"parent_url": "https://equals3ai.sharepoint.com/sites/LucyDemo/Shared%20Documents/Demo%20Files/HR%20Docs",
"SourceFileName": "PepsiCoHRWebinar.mp4",
"modifier": "SharePoint App",
"Language": "en",
"project_id": 118034285,
"file_id": 118034592,
"enrich": "categories_unavailable",
"video_date_update": "success",
"verified": true,
"upVoteCount": 2,
"downVoteCount": 0,
"remainingVoteCount": 2
},
"relevancyScore": null,
"answerDate": "Created Date",
"upVote": 2,
"downVote": 0,
"embeddings": null,
"weightageByDate": 0.0,
"collectionDetails": null,
"verified": false,
"sourceMeta": "{}",
"isVerified": true
}
],
"Concepts": [],
"Taxonomies": [
{
"MetaData": "science->social science->history",
"total_count": 1
},
{
"MetaData": "society->work->unemployment",
"total_count": 1
},
{
"MetaData": "technology and computing",
"total_count": 1
},
{
"MetaData": "none",
"total_count": 1
}
],
"CustomTaxonomies": [],
"organizations": [
{
"text": "Equals 3, LLC",
"count": 9
},
{
"text": "Equals 3",
"count": 4
},
{
"text": "IBM",
"count": 4
},
{
"text": "IDC",
"count": 4
},
{
"text": "Lucy",
"count": 4
}
],
"locations": [
{
"text": "U.S.",
"count": 8
},
{
"text": "Minneapolis",
"count": 4
},
{
"text": "US",
"count": 4
},
{
"text": "KM",
"count": 3
},
{
"text": "New York",
"count": 3
}
],
"persons": [],
"tags": [],
"agencies": [],
"proximoBrands": [],
"docViewData": [
{
"answer_id": "117193296_7",
"FileName": "Lucyisananswerengine.mov",
"InternalFileName": "Lucyisananswerengine.mov",
"author_name": "",
"modifier_name": "SharePoint App",
"Cite": "SharePoint - Misc",
"Source": "[ObjectStoreURL]/l2-003tu/Lucyisananswerengine.mov?bsaccount=1f441773-53bb-478e-9969-f71d3eaf6791&bsid=eyJzaXRlIjoiZXF1YWxzM2FpLnNoYXJlcG9pbnQuY29tLDliZTQzNzcyLTU3MmItNDU0OC1hOGJkLWY2MGEyYTZmMzkxMyxkMTYyZWFkYi1jNjg0LTQ1NzQtYmU2Mi1hNWFlMzExNmJkZjEiLCJkcml2ZSI6ImIhY2pma215dFhTRVdvdmZZS0ttODVFOXZxWXRHRXhuUkZ2bUtscmpFV3ZmR1JlNkxBM1l5aFJwNndaaUNaZFhhcSIsImZpbGUiOiIwMTY1Q1oyQzZBQUNBM1lHN1pBQkJZWkozRVdFWUdWS1hKIn0=",
"Passage": "",
"Company": "003tu",
"Filter3": "7of32",
"Pages": 0,
"documentDate": "2019-02-28T20:21:25Z",
"updatedDate": "2020-02-02T01:47:35Z",
"Confidence": 0.06529566586017609,
"Entities": []
}
],
"autoSearchFiles": [],
"docViewCount": 1,
"questionId": 1265830,
"qnaToken": "bb16b0b8-dae2-4ed3-8a89-6f9af2314602"
}

Response Differences with researchTypeMode
When GPT filtering is enabled, you may notice:
- Higher Relevance Scores: Results matching extracted entities typically have higher relevance scores
- Better Ranking: Results are ranked more intelligently based on entity matches
- More Focused Results: Results are more aligned with the specific context of your question
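As a sketch of how a client might consume the response shown above, the Python snippet below prints a one-line summary per answer followed by the extracted entities. The field names (answers, Title, Confidence, FileName, organizations, locations) are taken from the example response; data is assumed to be the parsed JSON body from the earlier request sketch.

def summarize(data):
    # One line per answer: confidence, source file, truncated title
    for answer in data.get("answers", []):
        confidence = answer.get("Confidence", 0.0)
        file_name = answer.get("FileName", "")
        title = answer.get("Title", "")
        print(f"{confidence:.2f}  {file_name}  {title[:80]}")

    # Entities extracted across the whole result set
    for org in data.get("organizations", []):
        print(f"organization: {org['text']} (count={org['count']})")
    for loc in data.get("locations", []):
        print(f"location: {loc['text']} (count={loc['count']})")

summarize(data)  # 'data' is the parsed response from the earlier example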
Best Practices
Use researchTypeMode When:
- Queries contain entities
- You need more context accuracy
- Research questions are complex
Use Standard Search When:
- Queries are very simple
- Very broad results are desired
- Performance is top priority
Query Optimization Tips
- Be Specific: Include specific entities in your questions for best results
  - Good: "What are the sales figures for New York office in Q4 2023?"
  - Less effective: "What are sales figures?"
- Include Context: Provide context about what you're looking for
  - Good: "PepsiCo marketing strategies for carbonated beverages"
  - Less effective: "marketing strategies"
- Use Natural Language: Write questions as you would naturally ask them
  - Good: "What are the quarterly results for the European division?"
  - Less effective: "quarterly results European"
- Combine with Filters: Use researchTypeMode together with other filters for best results (see the sketch after this list)
  - Combine with selected_file_types to search specific document types
  - Combine with selected_solr_companies to search across specific sources
  - Combine with time_from and time_to for time-based filtering
  - Combine with selected_locations or selected_brands for additional filtering
Fallback Behavior
If GPT filtering encounters any issues:
- The system automatically falls back to standard search
- Your request will still succeed and return results
- No error is returned - the system gracefully degrades
Testing and Validation
Testing Checklist
- Verify Feature Access: Confirm GPT filtering specific feature is enabled for your company/instance
- Test Basic Query: Test a simple query with researchTypeMode=2 or researchTypeMode=3
- Test Entity Extraction: Test queries with locations, brands, organizations
- Compare Results: Compare results with and without GPT filtering (see the comparison sketch after this checklist)
- Test Error Handling: Verify fallback behavior if GPT processing fails
- Performance Testing: Measure response times in your environment
- Integration Testing: Test with your application's specific use cases
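A simple way to run the "Compare Results" and "Performance Testing" steps is to issue the same question with and without researchTypeMode, then compare answer counts and response times. The sketch below reuses the placeholder base URL and token from the earlier example.

import time
import requests

BASE_URL = "https://ask.lucy.ai"
HEADERS = {"X-Auth-Token": "<your-active-X-Auth-Token>"}
QUESTION = "What are the sales figures for New York office in Q4?"

def run_query(research_type_mode=None):
    # Issue one request, optionally with GPT filtering, and time it
    params = {"q": QUESTION, "source": "lucy", "selected_answers_limit": 10}
    if research_type_mode is not None:
        params["researchTypeMode"] = research_type_mode
    start = time.monotonic()
    resp = requests.get(f"{BASE_URL}/api/qna/answers", params=params,
                        headers=HEADERS, timeout=60)
    resp.raise_for_status()
    return resp.json(), time.monotonic() - start

standard, t_standard = run_query()        # standard search
gpt_mode, t_gpt = run_query("3")          # GPT-enhanced search

print(f"standard: {len(standard.get('answers', []))} answers in {t_standard:.2f}s")
print(f"GPT mode: {len(gpt_mode.get('answers', []))} answers in {t_gpt:.2f}s")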
Summary
GPT-Enhanced Search Filtering improves search relevance by:
- Enabling with researchTypeMode=2 or researchTypeMode=3
- Understanding your question using AI
- Extracting key entities and context
- Refining queries internally
- Delivering more relevant, high-quality results
You can begin using GPT-powered filtering immediately by adding the parameter researchTypeMode=2 or researchTypeMode=3 to your API requests.
Note
- This feature is currently a prototype and is not fully developed
- This search filtering is not yet production ready
- No UI has been designed or developed to activate researchTypeMode for an instance/company
- It is recommended not to enable researchTypeMode for any company until it is fully developed, verified, and approved
