Wednesday, September 26, 2012

Architectural Mismatch: Why Is Reuse So Hard?


Well, the paper was written in 1994, almost two decades ago, so comparing it with today's integration strategies, design principles, and architectural decisions may or may not give us a complete solution. If we tried to adopt the four suggestions in this paper today, we might not find them that useful for research in the "software integration" domain, because there are now architectures like SOA, which reduces the need for hard dependencies since all components talk through a web-based messaging technology like SOAP, or REST, which also in various ways eliminates the need for architectural dependencies.
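To make that loose-coupling point concrete, here is a minimal sketch (all names invented for illustration) of the SOA/REST idea: two components that share no code at all, only a self-describing JSON message contract, so each component's internal architectural assumptions stay hidden behind the message.

```python
import json

# Hypothetical sketch: a design tool and a database service that never
# link against each other's data structures. Their only coupling is the
# JSON message format, which is the essence of the SOA/REST argument.

def design_tool_request(obj_id):
    """Client component: emits a self-describing request message."""
    return json.dumps({"op": "save", "object": obj_id})

def database_service(message):
    """Server component: interprets the message and knows nothing else
    about the caller."""
    req = json.loads(message)
    if req["op"] == "save":
        return json.dumps({"status": "ok", "object": req["object"]})
    return json.dumps({"status": "error"})

reply = json.loads(database_service(design_tool_request("diagram-42")))
print(reply["status"])  # -> ok
```

Either side can be rewritten freely (or moved to another machine behind HTTP) without touching the other, as long as the message contract holds.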
Problems/challenges of integrating reusable components to build a system (e.g. Aesop -> an environment-generating system)
Components studied ->
OBST -> object-oriented DB
InterViews -> GUI toolkit (Stanford University)
SoftBench -> event-based tool integration (HP)
Mach RPC Interface Generator (MIG) -> RPC stub generator (CMU)
Performance of resultant system ->
delayed development
sluggish software

Problems in Integration


+ Excessive code. The binary code of our user interface alone was more than 3 Mbytes after stripping. The binary code of our database server was 2.3 Mbytes after stripping. Even small tools (of, say, 20 lines of code) interacting with our system were more than 600 Kbytes after stripping! In an operating system without shared libraries, running the central components plus the supporting tools (such as external structure editors, specification checkers, and compilers) overwhelmed the resources of a midsize workstation.
+ Poor performance. The system operated much more slowly than we wished. Some of the problems occurred because of overhead from tool-to-database communication. For example, saving the state of a simple architectural diagram (containing, say, 20 design objects) took several minutes when we first tried it out. Even with performance tuning, it still took many seconds to perform such an operation. The excessive code also contributed to the performance problem. Under the Andrew File System, which we were using, files are cached at the local workstation in total when they are opened. When tools are large, the start-up overhead is also large. For example, the start-up time of an Aesop environment with even a minimal tool configuration took approximately three minutes.
+ Need to modify external packages. Even though the reusable packages seemed to run "out of the box" in our initial tests, we discovered that once we combined them in a complete system they needed major modifications to work together at all. For example, we had to significantly modify the SoftBench client-event loop (a critical piece of the functionality) for it to work with the InterViews event mechanism. We also had to reverse-engineer the memory-allocation routines for OBST to communicate object handles to external tools.
+ Need to reinvent existing functions. In some cases, modifying the packages was not enough. We also had to augment the packages with different versions of the functions they already supplied. For example, we were forced to bypass InterViews' support for hierarchical data structures because it did not allow direct, external access to hierarchically nested subvisualizations. Similarly, we ended up building our own separate transaction mechanism that acted as a server on top of a version of the OBST database software, even though the original version supported transactions. We did this so that we could share transactions across multiple address spaces, a capability not in the original version.
+ Unnecessarily complicated tools. Many of the architectural tools we wanted to develop on top of the infrastructure were logically simple sequential programs. However, in many cases it was difficult to build them as such because the standard interface to their environment required them to handle multiple, independent threads of computation simultaneously.
+ Error-prone construction process. As we built the system, modifications became increasingly costly. The time to recompile a new version of the system became quite long, and seemingly simple modifications (such as the introduction of a new procedure call) would break the automated build routines. The recompilation time was due in part to the code size. But more significantly, it was also because of interlocking code dependencies that required minor changes to propagate (in the form of required recompilations) throughout most of the system.

Excessive code -> GUI (3 MB), DB server (2.3 MB), (editors + spec checkers + compilers) = overwhelmed workstation.
Poor performance -> Tool-to-DB communications overhead + start-up overhead.
Modify external packages -> SoftBench's client-event loop, OBST's memory-allocation routines
Reinvent existing functions -> OBST's transaction mechanism (multiple-address-space access)
Unnecessarily complicated tools -> handling a multithreaded environment where it's not required.
Error-prone construction process -> costly modifications, broken build routines, dependencies requiring changes to propagate throughout the system.
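The "modify external packages" problem above comes down to a control-model mismatch, which can be sketched as follows. This is a hypothetical illustration (all class names invented), not the actual SoftBench or InterViews code: each package is written assuming it owns the one-and-only blocking main loop, so neither can simply be called from the other.

```python
# Hypothetical sketch of the control-model mismatch behind the
# SoftBench/InterViews conflict: two packages, each built around its
# own blocking main loop.

class GuiToolkit:
    """Stands in for InterViews: expects run() to block forever."""
    def __init__(self):
        self.pending = []
    def run(self):
        while True:                 # would never return control
            self.dispatch()
    def dispatch(self):
        return self.pending.pop(0) if self.pending else None

class EventBus:
    """Stands in for SoftBench: also expects its own blocking loop."""
    def __init__(self):
        self.pending = []
    def run(self):
        while True:                 # a second, competing main loop
            self.dispatch()
    def dispatch(self):
        return self.pending.pop(0) if self.pending else None

# The fix the paper describes amounts to dismantling one loop and
# feeding its events through the other, one dispatch at a time:
def integrated_loop(gui, bus):
    handled = []
    while gui.pending or bus.pending:
        for source in (gui, bus):
            event = source.dispatch()
            if event is not None:
                handled.append(event)
    return handled

gui, bus = GuiToolkit(), EventBus()
gui.pending.extend(["redraw", "click"])
bus.pending.append("tool-started")
print(integrated_loop(gui, bus))  # -> ['redraw', 'tool-started', 'click']
```

Neither package's `run()` can be used as written; one control model has to be surrendered, which is exactly why the modification was "significant."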

Causes in a nutshell - Assumptions

Nature of Components
    –Infrastructure
    –Control Model
    –Data Model
Nature of connectors
    –Protocols
    –Data model
Global architectural structure -> OBST's transaction server rebuilt
Construction process -> package-wise code integration

Solutions

Make architectural assumptions explicit -> documentation, 3-D interfaces
Orthogonal components ->
    substitute sub-modules to play with architectural assumptions
    modularization
Bridging techniques ->
    wrappers, negotiated interfaces, etc. should be standardized
Sources of design guidance ->
    reuse expertise and guidance from design patterns and tools
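Of the bridging techniques, the wrapper is the easiest to picture. Here is a minimal sketch (all names invented for illustration): an adapter reconciles two incompatible data-model assumptions without modifying either package, which is exactly what the paper suggests standardizing.

```python
# Hypothetical sketch of the "wrapper" bridging technique: an adapter
# that translates between two packages' data-model assumptions so
# neither package has to be modified.

class LegacyStore:
    """A reused package that assumes positional (key, value) calls."""
    def __init__(self):
        self._data = {}
    def put(self, key, value):
        self._data[key] = value
    def get(self, key):
        return self._data[key]

class StoreWrapper:
    """Adapter: exposes the record-oriented interface our tools expect,
    translating it to the key/value calls LegacyStore assumes."""
    def __init__(self, store):
        self._store = store
    def save(self, record):
        # record is a dict carrying its own identity in an 'id' field
        self._store.put(record["id"], record)
    def load(self, record_id):
        return self._store.get(record_id)

wrapped = StoreWrapper(LegacyStore())
wrapped.save({"id": "d1", "name": "diagram"})
print(wrapped.load("d1")["name"])  # -> diagram
```

The mismatch is absorbed entirely inside the wrapper; if the legacy package's assumptions change, only the adapter needs rework.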
