
Table 6: Examples of DADU-generated and labeled summaries.

Label Summary: Accident happens in Santa Ynez, California, near where Crosby lives. The jogger suffered multiple fractures; his injuries are not believed to be life-threatening.
Generated Summary: A driving accident occurred in Santa Ynez, California. Crosby struck a jogger who suffered multiple fractures and was airlifted to a hospital in Santa Barbara.

Label Summary: Sigma Alpha Epsilon is being tossed out by the University of Oklahoma. It's also run afoul of officials at Yale, Stanford, and Johns Hopkins in recent months.
Generated Summary: Sigma Alpha Epsilon is under fire for a video showing fraternity members chanting a racist phrase. The party that the video was made at has been suspended and the fraternity is facing permanent ban. This controversy is the latest in a series of incidents involving SAE in recent months, with hard work being done by alumni and chapters to make changes.

Label Summary: Join Meerkat founder Ben Rubin for a live chat at 2 p.m. ET Wednesday. Follow @benrbn and @lauriesegallCNN on Meerkat. Use hashtag #CNNInstantStartups to join the conversation on Twitter.
Generated Summary: Meerkat launched during SXSW in Austin last week. Ben Rubin, the founder of Meerkat, will be available for questions and comments on Meerkat or Twitter. This is a great opportunity to ask him anything you want to know about the app.