
Natural language generation (NLG) is a software process that produces natural language output. A widely cited survey of NLG methods describes NLG as "the subfield of artificial intelligence and computational linguistics that is concerned with the construction of computer systems that can produce understandable texts in English or other human languages from some underlying non-linguistic representation of information".[1]

While it is widely agreed that the output of any NLG process is text, there is some disagreement about whether the inputs of an NLG system need to be non-linguistic.[2] Common applications of NLG methods include the production of various reports, for example weather[3] and patient reports;[4] image captions;[5] and chatbots like ChatGPT.

Automated NLG can be compared to the process humans use when they turn ideas into writing or speech. Psycholinguists prefer the term language production for this process, which can also be described in mathematical terms, or modeled in a computer for psychological research. NLG systems can also be compared to translators of artificial computer languages, such as decompilers or transpilers, which also produce human-readable code generated from an intermediate representation. Human languages tend to be considerably more complex and allow for much more ambiguity and variety of expression than programming languages, which makes NLG more challenging.

NLG may be viewed as complementary to natural-language understanding (NLU): whereas in natural-language understanding, the system needs to disambiguate the input sentence to produce the machine representation language, in NLG the system needs to make decisions about how to put a representation into words. The practical considerations in building NLU vs. NLG systems are not symmetrical. NLU needs to deal with ambiguous or erroneous user input, whereas the ideas the system wants to express through NLG are generally known precisely. NLG needs to choose a specific, self-consistent textual representation from many potential representations, whereas NLU generally tries to produce a single, normalized representation of the idea expressed.[6]

NLG has existed since ELIZA was developed in the mid 1960s, but the methods were first used commercially in the 1990s.[7] NLG techniques range from simple template-based systems like a mail merge that generates form letters, to systems that have a complex understanding of human grammar. NLG can also be accomplished by training a statistical model using machine learning, typically on a large corpus of human-written texts.[8]

Example

The Pollen Forecast for Scotland system[9] is a simple example of an NLG system that is essentially template-based. This system takes as input six numbers, which give predicted pollen levels in different parts of Scotland. From these numbers, the system generates a short textual summary of pollen levels as its output.

For example, using the historical data for July 1, 2005, the software produces:

Grass pollen levels for Friday have increased from the moderate to high levels of yesterday with values of around 6 to 7 across most parts of the country. However, in Northern areas, pollen levels will be moderate with values of 4.

In contrast, the actual forecast (written by a human meteorologist) from this data was:

Pollen counts are expected to remain high at level 6 over most of Scotland, and even level 7 in the south east. The only relief is in the Northern Isles and far northeast of mainland Scotland with medium levels of pollen count.

Comparing these two illustrates some of the choices that NLG systems must make; these are further discussed below.
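
A generator of this kind can be approximated with little more than numeric thresholds and fill-in templates. The following Python sketch illustrates the idea; the region names, thresholds, and phrasing are illustrative assumptions, not the actual system's rules:

```python
# A toy template-based generator in the spirit of the pollen system:
# six regional pollen scores in, one short textual summary out.
# All thresholds and phrasings below are illustrative assumptions.

def describe_level(score: int) -> str:
    """Map a numeric pollen score (1-10) to a verbal category."""
    if score <= 3:
        return "low"
    if score <= 5:
        return "moderate"
    return "high"

def pollen_summary(scores: dict[str, int]) -> str:
    """Generate a short summary from per-region scores."""
    values = list(scores.values())
    typical = max(set(values), key=values.count)   # most common score
    outliers = {r: v for r, v in scores.items() if v != typical}
    text = (f"Grass pollen levels will be {describe_level(typical)} "
            f"with values of around {typical} across most parts of the country.")
    for region, value in outliers.items():
        text += (f" However, in {region} pollen levels will be "
                 f"{describe_level(value)} with values of {value}.")
    return text

print(pollen_summary({
    "the south east": 6, "the south west": 6, "the central belt": 6,
    "the east": 6, "the west": 6, "Northern areas": 4,
}))
```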

Stages

The process to generate text can be as simple as keeping a list of canned text that is copied and pasted, possibly linked with some glue text. The results may be satisfactory in simple domains such as horoscope machines or generators of personalized business letters. However, a sophisticated NLG system needs to include stages of planning and merging of information to enable the generation of text that looks natural and does not become repetitive. The typical stages of natural-language generation, as proposed by Dale and Reiter,[6] are:

Content determination: Deciding what information to mention in the text. For instance, in the pollen example above, deciding whether to explicitly mention that the pollen level is 7 in the southeast.

Document structuring: Overall organisation of the information to convey. For example, deciding to describe the areas with high pollen levels first, instead of the areas with low pollen levels.

Aggregation: Merging of similar sentences to improve readability and naturalness. For instance, merging the two following sentences:

  • Grass pollen levels for Friday have increased from the moderate to high levels of yesterday and
  • Grass pollen levels will be around 6 to 7 across most parts of the country

into the following single sentence:

  • Grass pollen levels for Friday have increased from the moderate to high levels of yesterday with values of around 6 to 7 across most parts of the country.

Lexical choice: Putting words to the concepts. For example, deciding whether medium or moderate should be used when describing a pollen level of 4.

Referring expression generation: Creating referring expressions that identify objects and regions. For example, deciding to use in the Northern Isles and far northeast of mainland Scotland to refer to a certain region in Scotland. This task also includes making decisions about pronouns and other types of anaphora.

Realization: Creating the actual text, which should be correct according to the rules of syntax, morphology, and orthography. For example, using will be for the future tense of to be.
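
These stages can be made concrete with a deliberately simplified pipeline. The Python sketch below assigns one toy function to each stage for the pollen domain; real systems implement each stage with far richer linguistic knowledge:

```python
# A toy run through the classic NLG stages for one pollen message.
# Each function is a stand-in for a much richer component.

def content_determination(data):
    # Decide what to mention: keep only regions whose level is newsworthy.
    return [(region, level) for region, level in data.items() if level >= 4]

def document_structuring(facts):
    # Order the information: high-pollen regions first.
    return sorted(facts, key=lambda f: f[1], reverse=True)

def aggregation(facts):
    # Merge facts that share the same level into one grouped fact.
    grouped = {}
    for region, level in facts:
        grouped.setdefault(level, []).append(region)
    return [(regions, level) for level, regions in grouped.items()]

def lexical_choice(level):
    # Choose "moderate" vs "high" for a numeric level.
    return "high" if level >= 6 else "moderate"

def referring_expression(regions):
    # Build one expression naming the whole set of regions.
    return " and ".join(regions)

def realization(message):
    # Produce a grammatical sentence (here, via a fixed frame).
    regions, level = message
    return (f"Pollen levels will be {lexical_choice(level)} "
            f"(around {level}) in {referring_expression(regions)}.")

data = {"the south east": 7, "the central belt": 7,
        "Northern areas": 4, "the far north": 2}
for message in aggregation(document_structuring(content_determination(data))):
    print(realization(message))
```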

An alternative approach to NLG is to use "end-to-end" machine learning to build a system, without having separate stages as above.[10] In other words, an NLG system is built by training a machine learning algorithm (often an LSTM) on a large data set of input data and corresponding (human-written) output texts. The end-to-end approach has perhaps been most successful in image captioning,[11] that is, automatically generating a textual caption for an image.
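
A heavily condensed sketch of the end-to-end formulation, assuming PyTorch: the input data and the output text are both represented as token sequences, an encoder LSTM summarizes the input, and a decoder LSTM is trained to produce the text. The dimensions and the random "corpus" below are placeholders:

```python
# End-to-end NLG as sequence-to-sequence learning: encode the input
# data as a token sequence, decode an output text, train on pairs.
# Toy dimensions and random "data"; a sketch of the idea only.
import torch
import torch.nn as nn

VOCAB, DIM = 100, 32

class Seq2Seq(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, DIM)
        self.encoder = nn.LSTM(DIM, DIM, batch_first=True)
        self.decoder = nn.LSTM(DIM, DIM, batch_first=True)
        self.out = nn.Linear(DIM, VOCAB)

    def forward(self, src, tgt):
        _, state = self.encoder(self.embed(src))          # summarize the input
        hidden, _ = self.decoder(self.embed(tgt), state)  # condition output on it
        return self.out(hidden)                           # per-step vocabulary logits

model = Seq2Seq()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One training step on a random (input record, output text) batch.
src = torch.randint(0, VOCAB, (8, 10))   # batch of encoded input data
tgt = torch.randint(0, VOCAB, (8, 12))   # corresponding human-written text
logits = model(src, tgt[:, :-1])         # predict each next token
loss = loss_fn(logits.reshape(-1, VOCAB), tgt[:, 1:].reshape(-1))
loss.backward()
optimizer.step()
```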

Applications

Automatic report generation

From a commercial perspective, the most successful NLG applications have been data-to-text systems which generate textual summaries of databases and data sets; these systems usually perform data analysis as well as text generation. Research has shown that textual summaries can be more effective than graphs and other visuals for decision support,[12][13][14] and that computer-generated texts can be superior (from the reader's perspective) to human-written texts.[15]

The first commercial data-to-text systems produced weather forecasts from weather data. The earliest such system to be deployed was FoG,[3] which was used by Environment Canada to generate weather forecasts in French and English in the early 1990s. The success of FoG triggered other work, both research and commercial. Recent applications include the UK Met Office's text-enhanced forecast.[16]

Data-to-text systems have since been applied in a range of settings. Following the minor earthquake near Beverly Hills, California on March 17, 2014, The Los Angeles Times reported details about the time, location and strength of the quake within 3 minutes of the event. This report was automatically generated by a 'robo-journalist', which converted the incoming data into text via a preset template.[17][18] Currently there is considerable commercial interest in using NLG to summarise financial and business data. Indeed, Gartner has said that NLG will become a standard feature of 90% of modern BI and analytics platforms.[19] NLG is also being used commercially in automated journalism, chatbots, generating product descriptions for e-commerce sites, summarising medical records,[20][4] and enhancing accessibility (for example by describing graphs and data sets to blind people[21]).
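
The "robo-journalist" pattern amounts to dropping structured feed values into a preset template. A minimal sketch follows; the field names, values, and wording here are illustrative assumptions, not the Times's actual template:

```python
# A toy quake-report template in the style of automated journalism:
# structured feed fields in, publishable sentence out. The field
# names, values, and phrasing are illustrative, not the real system's.
quake = {"magnitude": 4.4, "place": "Westwood, California",
         "time": "6:25 a.m. Monday", "depth_km": 5.0}

report = (f"A magnitude {quake['magnitude']} earthquake struck near "
          f"{quake['place']} at {quake['time']}, at a depth of "
          f"{quake['depth_km']:.1f} kilometres, according to the "
          f"U.S. Geological Survey.")
print(report)
```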

An example of an interactive use of NLG is the WYSIWYM framework. It stands for What you see is what you meant and allows users to see and manipulate the continuously rendered view (NLG output) of an underlying formal language document (NLG input), thereby editing the formal language without learning it.

Looking ahead, the current progress in data-to-text generation paves the way for tailoring texts to specific audiences. For example, data from babies in neonatal care can be converted into text differently in a clinical setting, with different levels of technical detail and explanatory language, depending on the intended recipient of the text (doctor, nurse, or patient). The same idea can be applied in a sports setting, with different reports generated for fans of specific teams.[22]

Image captioning

Over the past few years, there has been an increased interest in automatically generating captions for images, as part of a broader endeavor to investigate the interface between vision and language. As a case of data-to-text generation, the task of image captioning (or automatic image description) involves taking an image, analyzing its visual content, and generating a textual description (typically a sentence) that verbalizes the most prominent aspects of the image.

An image captioning system involves two sub-tasks. In Image Analysis, features and attributes of an image are detected and labelled, before these outputs are mapped to linguistic structures. Recent research utilizes deep learning approaches, in which caption generators use an activation layer from a pre-trained convolutional neural network such as AlexNet, VGG or Caffe as their input features. Text Generation, the second task, is performed using a wide range of techniques. For example, in the Midge system, input images are represented as triples consisting of object/stuff detections, action/pose detections and spatial relations. These are subsequently mapped to <noun, verb, preposition> triples and realized using a tree substitution grammar.[22]

A common method in image captioning is to use a vision model (such as a ResNet) to encode an image into a vector, then use a language model (such as an RNN) to decode the vector into a caption.[23][24]
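
A condensed sketch of this encode-then-decode structure, assuming PyTorch and torchvision: a ResNet serves as the vision encoder and an LSTM as the language decoder. The vocabulary size, start token, and greedy decoding loop are illustrative placeholders:

```python
# Encode an image to a vector with a CNN, then decode a caption with
# an LSTM. A structural sketch: toy vocabulary, greedy decoding.
import torch
import torch.nn as nn
import torchvision

VOCAB, DIM = 100, 512

encoder = torchvision.models.resnet18(weights=None)  # real use: pretrained weights
encoder.fc = nn.Identity()                  # expose the 512-d feature vector

embed = nn.Embedding(VOCAB, DIM)
decoder = nn.LSTM(DIM, DIM, batch_first=True)
to_vocab = nn.Linear(DIM, VOCAB)

image = torch.randn(1, 3, 224, 224)         # stand-in for a real image
feature = encoder(image)                    # (1, 512) image encoding

# Greedy decoding: feed the image encoding as the initial hidden state,
# then repeatedly emit the most likely next word.
state = (feature.unsqueeze(0), torch.zeros(1, 1, DIM))
token = torch.tensor([[1]])                 # assumed start-of-caption id
caption = []
for _ in range(10):
    output, state = decoder(embed(token), state)
    token = to_vocab(output[:, -1]).argmax(dim=-1, keepdim=True)
    caption.append(token.item())
print(caption)                              # word ids; a real system maps them to words
```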

Despite advances, challenges and opportunities remain in image captioning research. Although the recent introduction of Flickr30K, MS COCO and other large datasets has enabled the training of more complex models such as neural networks, it has been argued that research in image captioning could benefit from larger and more diversified datasets. Designing automatic measures that can mimic human judgments in evaluating the suitability of image descriptions is another need in the area. Other open challenges include visual question-answering (VQA),[25] as well as the construction and evaluation of multilingual repositories for image description.[22]

Chatbots

Another area where NLG has been widely applied is automated dialogue systems, frequently in the form of chatbots. A chatbot or chatterbot is a software application used to conduct an on-line chat conversation via text or text-to-speech, in lieu of providing direct contact with a live human agent. While natural language processing (NLP) techniques are applied in deciphering human input, NLG informs the output part of the chatbot algorithms in facilitating real-time dialogues.

Early chatbot systems, including Cleverbot created by Rollo Carpenter in 1988 and published in 1997,[citation needed] reply to questions by identifying how a human has responded to the same question in a conversation database using information retrieval (IR) techniques.[citation needed] Modern chatbot systems predominantly rely on machine learning (ML) models, such as sequence-to-sequence learning and reinforcement learning, to generate natural language output. Hybrid models have also been explored. For example, the Alibaba shopping assistant first uses an IR approach to retrieve the best candidates from the knowledge base, then uses an ML-driven seq2seq model to re-rank the candidate responses and generate the answer.[26]
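
The retrieve-then-re-rank pattern can be sketched as follows, with TF-IDF retrieval standing in for the IR stage and a placeholder scoring function standing in for the trained seq2seq re-ranker. This assumes scikit-learn, and the knowledge base is invented for illustration:

```python
# Hybrid chatbot sketch: IR retrieves candidate responses from a
# conversation knowledge base, then a learned model re-ranks them.
# The scoring function is a placeholder for a trained seq2seq model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

knowledge_base = [
    ("when will my order arrive", "Your order usually arrives in 3 to 5 days."),
    ("how do i return an item", "You can request a return from your orders page."),
    ("is this jacket waterproof", "Yes, the jacket is rated as waterproof."),
]

questions = [q for q, _ in knowledge_base]
vectorizer = TfidfVectorizer()
question_vectors = vectorizer.fit_transform(questions)

def retrieve(query: str, k: int = 2):
    """IR stage: return the k most similar stored question/answer pairs."""
    scores = cosine_similarity(vectorizer.transform([query]), question_vectors)[0]
    top = scores.argsort()[::-1][:k]
    return [knowledge_base[i] for i in top]

def rerank_score(query: str, answer: str) -> float:
    """Placeholder for an ML re-ranker (e.g. a seq2seq model's score)."""
    overlap = set(query.split()) & set(answer.lower().split())
    return len(overlap)

query = "when does my order get here"
candidates = retrieve(query)
best = max(candidates, key=lambda qa: rerank_score(query, qa[1]))
print(best[1])
```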

Creative writing and computational humor

Creative language generation by NLG has been hypothesized since the field's origins. A recent pioneer in the area is Phillip Parker, who has developed an arsenal of algorithms capable of automatically generating textbooks, crossword puzzles, poems and books on topics ranging from bookbinding to cataracts.[27] The advent of large pretrained transformer-based language models such as GPT-3 has also enabled breakthroughs, with such models demonstrating recognizable ability on creative-writing tasks.[28]

A related area of NLG application is computational humor production. JAPE (Joke Analysis and Production Engine) is one of the earliest large automated humor production systems; it uses a hand-coded template-based approach to create punning riddles for children. HAHAcronym creates humorous reinterpretations of any given acronym, as well as proposing new fitting acronyms given some keywords.[29]

Despite progress, many challenges remain in producing automated creative and humorous content that rivals human output. In an experiment on generating satirical headlines, outputs of the best BERT-based model were perceived as funny 9.4% of the time (while real headlines from The Onion were 38.4%), and a GPT-2 model fine-tuned on satirical headlines achieved 6.9%.[30] It has been pointed out that two main issues with humor-generation systems are the lack of annotated data sets and the lack of formal evaluation methods,[29] which could be applicable to other creative content generation. Some have argued that, relative to other applications, there has been a lack of attention to creative aspects of language production within NLG. NLG researchers stand to benefit from insights into what constitutes creative language production, as well as structural features of narrative that have the potential to improve NLG output even in data-to-text systems.[22]

Evaluation

As in other scientific fields, NLG researchers need to test how well their systems, modules, and algorithms work. This is called evaluation. There are three basic techniques for evaluating NLG systems:

  • Task-based (extrinsic) evaluation: give the generated text to a person, and assess how well it helps them perform a task (or otherwise achieves its communicative goal). For example, a system which generates summaries of medical data can be evaluated by giving these summaries to doctors, and assessing whether the summaries help doctors make better decisions.[4]
  • Human ratings: give the generated text to a person, and ask them to rate the quality and usefulness of the text.
  • Metrics: compare generated texts to texts written by people from the same input data, using an automatic metric such as BLEU, METEOR, ROUGE, or LEPOR (a minimal example is sketched after this list).
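
As an illustration of the metric-based approach, the following sketch computes a smoothed sentence-level BLEU score with NLTK; the texts are invented, and real evaluations average scores over a test corpus:

```python
# Computing BLEU for one generated text against a human reference,
# using NLTK (assumes nltk is installed). Scores range from 0 to 1.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = "pollen levels will be high across most of scotland".split()
generated = "pollen levels will be high in most parts of scotland".split()

score = sentence_bleu([reference], generated,
                      smoothing_function=SmoothingFunction().method1)
print(f"BLEU: {score:.3f}")
```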

Ultimately, what matters is how useful NLG systems are at helping people, which is what the first of the above techniques measures. However, task-based evaluations are time-consuming and expensive, and can be difficult to carry out (especially if they require subjects with specialised expertise, such as doctors). Hence (as in other areas of NLP) task-based evaluations are the exception, not the norm.

Researchers have recently been assessing how well human ratings and metrics correlate with (that is, predict) task-based evaluations. This work is being conducted in the context of Generation Challenges[31] shared-task events. Initial results suggest that human ratings are much better than metrics in this regard. In other words, human ratings usually do predict task-effectiveness at least to some degree (although there are exceptions), while ratings produced by metrics often do not predict task-effectiveness well. These results are preliminary. In any case, human ratings are the most popular evaluation technique in NLG; this is in contrast to machine translation, where metrics are widely used.

An AI can be graded on faithfulness to its training data or, alternatively, on factuality. A response that reflects the training data but not reality is faithful but not factual. A confident but unfaithful response is a hallucination. In Natural Language Processing, a hallucination is often defined as "generated content that is nonsensical or unfaithful to the provided source content".[32]

References

  1. ^ Reiter, Ehud; Dale, Robert (March 1997). "Building applied natural language generation systems". Natural Language Engineering. 3 (1): 57–87. doi:10.1017/S1351324997001502. ISSN 1469-8110. S2CID 8460470.
  2. ^ Gatt A, Krahmer E (2018). "Survey of the state of the art in natural language generation: Core tasks, applications and evaluation". Journal of Artificial Intelligence Research. 61 (61): 65–170. arXiv:1703.09902. doi:10.1613/jair.5477. S2CID 16946362.
  3. ^ a b Goldberg E, Driedger N, Kittredge R (1994). "Using Natural-Language Processing to Produce Weather Forecasts". IEEE Expert. 9 (2): 45–53. doi:10.1109/64.294135. S2CID 9709337.
  4. ^ a b c Portet F, Reiter E, Gatt A, Hunter J, Sripada S, Freer Y, Sykes C (2009). "Automatic Generation of Textual Summaries from Neonatal Intensive Care Data" (PDF). Artificial Intelligence. 173 (7–8): 789–816. doi:10.1016/j.artint.2008.12.002.
  5. ^ Farhadi A, Hejrati M, Sadeghi MA, Young P, Rashtchian C, Hockenmaier J, Forsyth D (2010). Every picture tells a story: Generating sentences from images (PDF). European conference on computer vision. Berlin, Heidelberg: Springer. pp. 15–29. doi:10.1007/978-3-642-15561-1_2.
  6. ^ a b Dale, Robert; Reiter, Ehud (2000). Building natural language generation systems. Cambridge, U.K.: Cambridge University Press. ISBN 978-0-521-02451-8.
  7. ^ Ehud Reiter (2025-08-07). History of NLG. Archived from the original on 2025-08-07.
  8. ^ Perera R, Nand P (2017). "Recent Advances in Natural Language Generation: A Survey and Classification of the Empirical Literature". Computing and Informatics. 36 (1): 1–32. doi:10.4149/cai_2017_1_1. hdl:10292/10691.
  9. ^ R Turner, S Sripada, E Reiter, I Davy (2006). Generating Spatio-Temporal Descriptions in Pollen Forecasts. Proceedings of EACL06
  10. ^ "E2E NLG Challenge".
  11. ^ "DataLabCup: Image Caption".
  12. ^ Law A, Freer Y, Hunter J, Logie R, McIntosh N, Quinn J (2005). "A Comparison of Graphical and Textual Presentations of Time Series Data to Support Medical Decision Making in the Neonatal Intensive Care Unit". Journal of Clinical Monitoring and Computing. 19 (3): 183–94. doi:10.1007/s10877-005-0879-3. PMID 16244840. S2CID 5569544.
  13. ^ Gkatzia D, Lemon O, Reiser V (2017). "Data-to-Text Generation Improves Decision-Making Under Uncertainty" (PDF). IEEE Computational Intelligence Magazine. 12 (3): 10–17. doi:10.1109/MCI.2017.2708998. S2CID 9544295.
  14. ^ "Text or Graphics?". 2025-08-07.
  15. ^ Reiter E, Sripada S, Hunter J, Yu J, Davy I (2005). "Choosing Words in Computer-Generated Weather Forecasts". Artificial Intelligence. 167 (1–2): 137–69. doi:10.1016/j.artint.2005.06.006.
  16. ^ S Sripada, N Burnett, R Turner, J Mastin, D Evans (2014). Generating A Case Study: NLG meeting Weather Industry Demand for Quality and Quantity of Textual Weather Forecasts. Proceedings of INLG 2014
  17. ^ Schwencke, Ken (March 17, 2014). "Earthquake aftershock: 2.7 quake strikes near Westwood". Los Angeles Times.
  18. ^ Levenson, Eric (2025-08-07). "L.A. Times Journalist Explains How a Bot Wrote His Earthquake Story for Him". The Atlantic. Retrieved 2025-08-07.
  19. ^ "Neural Networks and Modern BI Platforms Will Evolve Data and Analytics".
  20. ^ Harris MD (2008). "Building a Large-Scale Commercial NLG System for an EMR" (PDF). Proceedings of the Fifth International Natural Language Generation Conference. pp. 157–60.
  21. ^ "Welcome to the iGraph-Lite page". www.inf.udec.cl. Archived from the original on 2025-08-07.
  22. ^ a b c d Gatt, Albert; Krahmer, Emiel (2017). "Survey of the State of the Art in Natural Language Generation: Core tasks, applications and evaluation". arXiv:1703.09902 [cs.CL].
  23. ^ Vinyals, Oriol; Toshev, Alexander; Bengio, Samy; Erhan, Dumitru (2015). "Show and Tell: A Neural Image Caption Generator". Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). pp. 3156–3164.
  24. ^ Karpathy, Andrej; Fei-Fei, Li (2015). "Deep Visual-Semantic Alignments for Generating Image Descriptions". Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). pp. 3128–3137.
  25. ^ Kodali, Venkat; Berleant, Daniel (2022). "Recent, Rapid Advancement in Visual Question Answering Architecture: a Review". Proceedings of the 22nd IEEE International Conference on EIT. pp. 133–146. arXiv:2203.01322.
  26. ^ Mnasri, Maali (2019). "Recent advances in conversational NLP: Towards the standardization of Chatbot building". arXiv:1903.09025 [cs.CL].
  27. ^ "How To Author Over 1 Million Books". HuffPost. 2025-08-07. Retrieved 2025-08-07.
  28. ^ "Exploring GPT-3: A New Breakthrough in Language Generation". KDnuggets. Retrieved 2025-08-07.
  29. ^ a b Winters, Thomas (2021). "Computers Learning Humor Is No Joke". Harvard Data Science Review. 3 (2). doi:10.1162/99608f92.f13a2337. S2CID 235589737.
  30. ^ Horvitz, Zachary; Do, Nam; Littman, Michael L. (July 2020). "Context-Driven Satirical News Generation". Proceedings of the Second Workshop on Figurative Language Processing. Online: Association for Computational Linguistics: 40–50. doi:10.18653/v1/2020.figlang-1.5. S2CID 220330989.
  31. ^ Generation Challenges
  32. ^ Ji, Ziwei; Lee, Nayeon; Frieske, Rita; Yu, Tiezheng; Su, Dan; Xu, Yan; Ishii, Etsuko; Bang, Yejin; Madotto, Andrea; Fung, Pascale (17 November 2022). "Survey of Hallucination in Natural Language Generation". ACM Computing Surveys. 55 (12): 3571730. arXiv:2202.03629. doi:10.1145/3571730. S2CID 246652372.
