Large Language Models (LLMs), acting as powerful reasoners and generators, exhibit extraordinary performance across various natural language tasks, such as question answering (QA). Among these tasks, Multi-Hop Question Answering (MHQA) is a widely discussed category that requires seamless integration between LLMs and the retrieval of external knowledge. Existing methods employ the LLM to generate reasoning paths and plans, and use an Information Retriever (IR) to iteratively retrieve related knowledge, but these approaches have inherent flaws. On one hand, the IR is hindered by the low quality of the queries generated by the LLM; on the other hand, the LLM is easily misled by irrelevant knowledge returned by the IR. These inaccuracies accumulate over the iterative interaction between the IR and the LLM and severely degrade final effectiveness. To overcome these barriers, this paper proposes a novel pipeline for MHQA called Furthest-Reasoning-with-Plan-Assessment (FuRePA), consisting of an improved framework (Furthest Reasoning) and an attached module (Plan Assessor). 1) Furthest Reasoning masks the previous reasoning paths and generated queries from the LLM, encouraging it to generate its chain of thought from scratch in each iteration. This lets the LLM break free of any misleading thoughts and queries produced in earlier iterations. 2) The Plan Assessor is a trained evaluator that selects an appropriate plan from a group of candidate plans proposed by the LLM. The methods are evaluated on three highly recognized public multi-hop question answering datasets and outperform the state of the art on most metrics, achieving a 10%-12% improvement in answer accuracy.
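
The abstract describes an iterative retrieve-and-reason loop with two components: Furthest Reasoning (regenerate the chain of thought from scratch each hop, hiding earlier reasoning and queries) and a Plan Assessor (a trained scorer that picks among candidate plans). The following is a minimal sketch of how such a loop might be organized, inferred only from the abstract; every callable name here (generate_candidate_plans, score_plan, retrieve, extract_answer) is a hypothetical placeholder, not the paper's actual interface.

```python
# Illustrative FuRePA-style loop, reconstructed from the abstract.
# All injected callables are hypothetical stand-ins for the paper's components.
from typing import Callable, List, Optional, Tuple


def furepa_answer(
    question: str,
    generate_candidate_plans: Callable[[str, List[str]], List[Tuple[str, str]]],
    score_plan: Callable[[str, str], float],
    retrieve: Callable[[str], List[str]],
    extract_answer: Callable[[str], Optional[str]],
    max_hops: int = 4,
) -> Optional[str]:
    """Iteratively reason and retrieve until an answer is found.

    generate_candidate_plans(question, knowledge) -> list of (chain_of_thought, query)
    score_plan(question, chain_of_thought)        -> Plan Assessor quality score
    retrieve(query)                               -> passages returned by the IR
    extract_answer(chain_of_thought)              -> final answer if complete, else None
    """
    knowledge: List[str] = []  # retrieved passages accumulate across hops

    for _ in range(max_hops):
        # Furthest Reasoning: the LLM is shown only the question and the
        # retrieved knowledge; earlier chains of thought and queries are
        # masked, so the reasoning path is rebuilt from scratch each hop.
        candidates = generate_candidate_plans(question, knowledge)

        # Plan Assessor: a trained evaluator selects the most promising plan
        # among the candidates proposed by the LLM.
        best_cot, best_query = max(candidates, key=lambda c: score_plan(question, c[0]))

        answer = extract_answer(best_cot)
        if answer is not None:
            return answer

        # Otherwise, issue the selected query to the retriever and add the
        # returned passages to the knowledge pool for the next hop.
        knowledge.extend(retrieve(best_query))

    return None
```

The key design point suggested by the abstract is that only the retrieved knowledge persists across iterations, while the reasoning itself does not, which is what prevents a bad early query or chain of thought from steering all later hops.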
