NLPIR/ICTCLAS2018 ACADEMIC SEMINAR 5th ISSUE


INTRO


        In the new semester, our lab, the Web Search Mining and Security Lab, plans to hold an academic seminar every Wednesday. Each time, a keynote speaker will share his or her understanding of papers published in recent years with you.

 


Arrangement


        This week’s seminar is organized as follows:


        1. The seminar takes place at 1 p.m. on Wednesday at Zhongguancun Technology Park, Building 5, Room 1306.

        2. The lecturer is Qinghong Jiang; the paper's title is Towards Designing Better Session Search Evaluation Metrics.


        3. The seminar will be hosted by Shen Li.

        4. The paper for this seminar is attached; please download it in advance.



        Everyone interested in this topic is welcome to join us. The following is the abstract of this week's paper.


Towards Designing Better Session Search Evaluation Metrics

Mengyang Liu, Yiqun Liu, Jiaxin Mao, Cheng Luo, Shaoping Ma

Abstract


        User satisfaction has been paid much attention to in recent Web search evaluation studies and is regarded as the ground truth for designing better evaluation metrics. However, most existing studies focus on the relationship between satisfaction and evaluation metrics at the query level. As search requests become more and more complex, there are many scenarios in which multiple queries and multi-round search interactions are needed (e.g. exploratory search). In those cases, the relationship between session-level search satisfaction and session search evaluation metrics remains uninvestigated. In this paper, we analyze how users' perceptions of satisfaction accord with a series of session-level evaluation metrics. We conduct a laboratory study in which users are required to finish some complex search tasks and provide usefulness judgments of documents as well as session-level and query-level satisfaction feedback. We test a number of popular session search evaluation metrics as well as different weighting functions. Experiment results show that query-level satisfaction is mainly decided by the clicked document that users consider the most useful (maximum effect), while session-level satisfaction is highly correlated with the most recently issued queries (recency effect). We further propose a number of criteria for designing better session search evaluation metrics.
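
        To make the notions of "weighting functions", the maximum effect, and the recency effect more concrete, here is a minimal illustrative sketch in Python. It is not the paper's actual metric or code: the aggregation scheme, the decay value, and all numbers below are hypothetical choices made up for illustration.

# Illustrative sketch (not from the paper): aggregate hypothetical query-level
# scores into a session-level score with different weighting functions.
from typing import Callable, List


def query_score_max(clicked_usefulness: List[float]) -> float:
    """Query-level score driven by the most useful clicked document
    (a simple stand-in for the 'maximum effect')."""
    return max(clicked_usefulness, default=0.0)


def uniform_weights(n_queries: int) -> List[float]:
    """Every query in the session contributes equally."""
    return [1.0 / n_queries] * n_queries


def recency_weights(n_queries: int, decay: float = 0.7) -> List[float]:
    """Later queries weigh more (a stand-in for the 'recency effect');
    the decay value is an arbitrary illustration, not a tuned parameter."""
    raw = [decay ** (n_queries - 1 - i) for i in range(n_queries)]
    total = sum(raw)
    return [w / total for w in raw]


def session_score(per_query_scores: List[float],
                  weight_fn: Callable[[int], List[float]]) -> float:
    """Session-level score as a weighted sum of query-level scores."""
    weights = weight_fn(len(per_query_scores))
    return sum(w * s for w, s in zip(weights, per_query_scores))


if __name__ == "__main__":
    # Made-up usefulness judgments of clicked documents, one list per query.
    session_clicks = [[0.2, 0.8], [0.5], [0.9, 0.4, 0.6]]
    q_scores = [query_score_max(clicks) for clicks in session_clicks]

    print("query-level scores:", q_scores)
    print("uniform session score:", session_score(q_scores, uniform_weights))
    print("recency session score:", session_score(q_scores, recency_weights))

        With these made-up numbers, the recency-weighted session score leans toward the score of the last query, which is the kind of behavior the recency effect describes.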

 


 


Towards Designing Better Session Search Evaluation Metrics.pdf (1.8 MB)


