Detailed Record

Author (Chinese): 林友祥
Author (English): Lin, You-Siang
Thesis Title (Chinese): CAST:自動化動態軟體驗證工具
Thesis Title (English): CAST: Automatic and Dynamic Software Verification Tool
Advisor (Chinese): 黃世昆
Advisor (English): Huang, Shih-Kun
Degree: Master's
University: National Chiao Tung University
Department: Institute of Computer Science and Engineering
Student ID: 9555545
Publication Year (ROC): 98 (2009)
Graduation Academic Year: 97 (2008-2009)
Language: English
Number of Pages: 53
Keywords (Chinese): 測試, 驗證, 動態符號測試
Keywords (English): testing, verification, concolic testing, exploit
Abstract:
Software testing is an essential part of software engineering for ensuring software quality, and automatically verifying properties of programs is a long-standing goal of software testing. In recent years, combining concrete and symbolic execution (concolic testing) has become a well-known approach to branch testing, and many studies indicate that it can be combined with universal checks to find bugs. In this thesis, we present the CAST specification language, built on concolic testing with universal checks, which can describe various kinds of specifications for checking the security properties of C programs; viewed from another angle, this can be taken as a hacking attack that obtains test cases close to an exploit. CAST, an automatic and dynamic software verification tool, is composed mainly of pattern matching, universal checks, and data-flow analysis, which makes its universal checks more flexible and expressive than those used in general concolic testing.
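As a concrete illustration of the problem domain (our own minimal sketch, not a program or API taken from the thesis), consider a stack buffer overflow of the kind the thesis classifies in Section 1.2.1. Under concolic testing, the program input is marked symbolic; a universal check asserted at every memory write requires the write to fall within the bounds of the destination object, and solving the negated check along a path through the unchecked copy yields a concrete, near-exploit input. All names below are hypothetical.

    #include <stdio.h>
    #include <string.h>

    /* Hypothetical vulnerable routine: strcpy() writes past the end of
     * buf whenever the input holds more than 15 characters plus the
     * terminating NUL.  A concolic tester treats `input` as symbolic and
     * instruments each write with a bounds check on the destination
     * object; solving the negated check produces an overflowing input. */
    void parse(const char *input) {
        char buf[16];
        strcpy(buf, input);        /* unchecked copy: the overflow site */
        printf("parsed: %s\n", buf);
    }

    int main(int argc, char *argv[]) {
        if (argc > 1)
            parse(argv[1]);        /* argv[1] plays the symbolic input */
        return 0;
    }

Random testing may or may not stumble on an input longer than 15 bytes; a constraint solver derives one directly from the path condition conjoined with the negated bounds check.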
Table of Contents:

Abstract (Chinese) .......... i
Abstract (English) .......... ii
Acknowledgements .......... iii
Contents .......... iv
List of Figures .......... vi
List of Tables .......... vii
1. Introduction .......... 1
  1.1. Background .......... 1
    1.1.1. Universal Checks .......... 2
    1.1.2. Dynamic Data Flow Analysis .......... 3
    1.1.3. Static Program Analysis .......... 4
    1.1.4. Dynamic Program Analysis .......... 5
    1.1.5. Fuzz Testing (Fuzzing or Random Testing) .......... 5
    1.1.6. Symbolic Execution .......... 5
    1.1.7. Concolic Testing .......... 6
    1.1.8. Static Data Flow Analysis .......... 6
  1.2. Common Vulnerabilities .......... 7
    1.2.1. Buffer Overflow .......... 7
      1.2.1.1. Stack Overflow .......... 8
      1.2.1.2. Heap Overflow .......... 9
    1.2.2. Command Injection .......... 9
  1.3. Motivation .......... 10
  1.4. Objective .......... 11
  1.5. A Motivating Example .......... 11
2. Testing Sequential Programs with CAST Specifications .......... 13
  2.1. CAST Architecture .......... 13
    2.1.1. CIL Simplification .......... 14
    2.1.2. CAST Parser .......... 16
    2.1.3. CIL Instrumenter .......... 16
    2.1.4. Self-Tested Program .......... 19
  2.2. Implementation .......... 19
    2.2.1. Symbolic Execution .......... 19
    2.2.2. Symbolic Name and Symbolic Byte Name .......... 20
    2.2.3. Object Map .......... 22
    2.2.4. Symbolic Pointer Read/Write with Symbolic Value .......... 23
    2.2.5. Using the Object Map to Construct Shellcode Constraints .......... 25
    2.2.6. Dynamic Tainted Data Flow for Function Calls .......... 26
    2.2.7. Standard Library Testing Using uclibc instead of glibc .......... 26
    2.2.8. Post-conditions for the Standard Library fgets and Reads from stdin .......... 27
    2.2.9. Environment .......... 27
  2.3. Syntax of the Specification .......... 27
    2.3.1. Pattern Matching .......... 29
    2.3.2. Universal Check (Specification Check Instrumentation) .......... 30
  2.4. Semantics of the Specification .......... 31
    2.4.1. Specification of Integer Overflow .......... 31
3. Experiments .......... 33
  3.1. Wargame 1 (Exploiting a Bug without Shellcode) .......... 33
  3.2. Wargame 2 (Exploiting a Bug with Command Injection) .......... 35
  3.3. Wargame 3 (Exploiting a Buffer Overflow Bug with Shellcode Injection) .......... 38
4. Related Work .......... 44
5. Conclusion .......... 46
6. Future Work and Discussion .......... 47
  6.1. Concrete Value Bound to Symbolic Value .......... 47
  6.2. The Influence of the gcc Compiler's Alignment .......... 47
  6.3. Source-Level Testing without the Details of the Low-Level Implementation .......... 49
  6.4. How to Debug a Debugging Tool .......... 50
References .......... 50
References:

[1] G. Tassey. The economic impacts of inadequate infrastructure for software testing. National Institute of Standards and Technology, RTI Project Number 7007.011, 2002.
[2] R. S. Boyer, B. Elspas and K. N. Levitt. SELECT—a formal system for testing and debugging
programs by symbolic execution. In Proceedings of the International Conference on Reliable
Software, Los Angeles, CA, 21–23 April 1975; 234–245.
[3] C. Cadar, V. Ganesh, P. M. Pawlowski, D. L. Dill and D. R. Engler. EXE: Automatically
generating inputs of death. In Proceedings of the 13th ACM Conference on Computer and
Communications Security, 2006.
[4] B. Chess and J. West. Secure Programming with Static Analysis. Boston, MA, USA: Addison-Wesley Professional, 2007.
[5] C. Kaner, J. L. Falk and H. Q. Nguyen, Testing Computer Software. John Wiley & Sons, Inc.
New York, USA, 1999.
[6] K. Sen, D. Marinov and G. Agha. CUTE: A concolic unit testing engine for C. In Proceedings of
the 10th European Software Engineering Conference Held Jointly with 13th ACM SIGSOFT
International Symposium on Foundations of Software Engineering, 2005.
[7] C. Cadar, P. Twohey, V. Ganesh and D. Engler. EXE: A system for automatically generating
inputs of death using symbolic execution. Technical Report CSTR 200601, Stanford, 2006.
[8] J. Larus, T. Ball, M. Das, R. DeLine, M. Fahndrich, J. Pincus, S. Rajamani and R. Venkatapathy,
Righting software. IEEE Software, vol. 21, pp. 92-100, 2004.
[9] N. Nagappan and T. Ball. Static analysis tools as early indicators of pre-release defect density. In
Proceedings of the 27th International Conference on Software Engineering, 2005.
[10] S. C. Johnson. Lint, a C Program Checker. Bell Telephone Laboratories, 1977.
[11] D. Evans, J. Guttag, J. Horning and Y. M. Tan. LCLint: A tool for using specifications to check
code. In Proceedings of the 2nd ACM SIGSOFT symposium on Foundations of software
engineering, p.87-96, December 06-09, 1994.
[12] C. Flanagan, K. R. M. Leino, M. Lillibridge, G. Nelson, J. B. Saxe and R. Stata. Extended static
checking for Java. In Proceedings of the ACM SIGPLAN 2002 Conference on Programming
language design and implementation, June 17-19, 2002.
[13] J. Hatcliff and M. Dwyer. Using the Bandera tool set to model-check properties of concurrent
Java software. In Proceedings of the 12th International Conference on Concurrency Theory,
p.39-58, August 20-25, 2001.
[14] T. Jim, G. Morrisett, D. Grossman, M. Hicks, J. Cheney and Y. Wang. Cyclone: A safe dialect of
C. In Proceedings of the General Track: 2002 USENIX Annual Technical Conference,
p.275-288, June 10-15, 2002.
[15] B. Beizer. Software Testing Techniques. Van Nostrand Reinhold Co. New York, USA, 1990.
[16] P. D. Coward. Symbolic execution systems-a review. Software Engineering Journal, 3(6), pp. 229-239, 1988.
[17] K. Sen. Concolic testing. In Proceedings of the Twenty-Second IEEE/ACM International
Conference on Automated Software Engineering, 2007, pp. 571-572.
[18] C. Cadar and D. Engler. Execution generated test cases: How to make systems code crash itself.
In Proceedings of SPIN Workshop, 2005.
[19] P. Godefroid, N. Klarlund and K. Sen. DART: Directed automated random testing. In
Proceedings of the 2005 ACM SIGPLAN conference on Programming language design and
implementation, June 12-15, 2005.
[20] R. Majumdar and K. Sen. Hybrid concolic testing. In Proceedings of the 29th international
conference on Software Engineering, p.416-426, May 20-26, 2007.
[21] R. Majumdar and K. Sen. LATEST: Lazy dynamic test input generation. Technical Report UCB/EECS-2007-36, EECS Department, University of California, Berkeley, 2007.
[22] U. Shankar, K. Talwar, J. S. Foster and D. Wagner. Detecting format string vulnerabilities with
type qualifiers. In Proceedings of the 10th USENIX Security Symposium, 2001.
[23] G. C. Necula, S. McPeak, S. P. Rahul and W. Weimer. CIL: Intermediate language and tools for
analysis and transformation of C programs. In Proceedings of the 11th International Conference
on Compiler Construction, p.213-228, April 08-12, 2002.
[24] W. Chang, B. Streiff and C. Lin. Efficient and extensible security enforcement using dynamic
data flow analysis. In Proceedings of the 15th ACM conference on Computer and
communications security, October 27-31, 2008.
[25] OWASP Top 10 2004. http://www.owasp.org/index.php/Top_10_2004
[26] CERT/CC Advisories. http://www.cert.org/advisories/
[27] C. F. Yang. Resolving constraints from COTS/Binary components for concolic random testing. Master's thesis, NCTU, 2007.
[28] Y. L. Yen. Target directed random testing for feasible state generation. Master's thesis, NCTU, 2007.
[29] L.-W. Hsu. Resolving unspecified software features by directed random testing. Master's thesis, NCTU, 2007.
[30] C. Barrett and C. Tinelli. CVC3. Lecture Notes in Computer Science, 2008.
[31] J. Burnim and K. Sen. Heuristics for scalable dynamic test generation. In Proceedings of the 23rd IEEE/ACM International Conference on Automated Software Engineering (ASE 2008).
[32] O. Ruwase and M. S. Lam. A practical dynamic buffer overflow detector. In Proceedings of the
11th Annual Network and Distributed System Security Symposium, 2004.
[33] Metasploit Shellcode, 2009. http://www.metasploit.com/shellcode/
[34] Y. Kannan and K. Sen. Universal symbolic execution and its application to likely data structure
invariant generation. In Proceedings of the 2008 International Symposium on Software Testing
and Analysis, 2008.
[35] R. G. Xu, P. Godefroid and R. Majumdar. Testing for buffer overflows with length abstraction.
In Proceedings of the 2008 International Symposium on Software Testing and Analysis, 2008.
[36] P. Godefroid, M. Y. Levin and D. Molnar. Automated whitebox fuzz testing. In Proceedings of
the Network and Distributed System Security Symposium, 2008.
[37] P. Godefroid, M. Y. Levin and D. A. Molnar. Active property checking. In Proceedings of the
7th ACM International Conference on Embedded Software, 2008.
[38] C. Cadar, V. Ganesh, P. M. Pawlowski, D. L. Dill and D. R. Engler. EXE: Automatically
generating inputs of death. ACM Transactions on Information and System Security, 2008.
[39] P. Boonstoppel, C. Cadar and D. Engler. RWset: Attacking path explosion in constraint-based
test generation. In Proceedings of Tools and Algorithms for the Construction and Analysis of
Systems, 2008.
[40] C. Cadar, D. Dunbar and D. Engler. KLEE: Unassisted and automatic generation of
high-coverage tests for complex systems programs. USENIX Symposium on Operating Systems
Design and Implementation, 2008.
[41] D. Beyer, A. J. Chlipala, T. A. Henzinger, R. Jhala and R. Majumdar. The BLAST query language
for software verification. In Proceedings of the 11th International Static Analysis Symposium,
2004.
[42] A. Zeller. Yesterday, my program worked. Today, it does not. Why? In Proceedings of the 7th European Software Engineering Conference held jointly with the 7th ACM SIGSOFT International Symposium on Foundations of Software Engineering, p.253-267, September 06-10, 1999.
[43] A. Zeller. Isolating cause-effect chains from computer programs. In Proceedings of the 10th ACM SIGSOFT Symposium on Foundations of Software Engineering, November 18-22, 2002.
 
 
 
 