|| Friday, February 12, 2010
Laputa started out as the team's joint blog when the team was small and feature news was few and far between. As the team has grown, every developer has gotten their own blog at http://community.sharpdevelop.net/blogs/.
Now, the rest of us are moving too:
|| Tuesday, January 05, 2010
You are building SharpDevelop using debugbuild.bat and getting
fatal error C1083: Cannot open include file: 'windows.h': No such file or directory
\corprofilercallbackimpl.h(19): fatal error C1083: Cannot open include file: 'cor.h': No such file or directory
One possible cause could be that you have more than one Windows SDK installed on your box. Simply start the Windows SDK Configuration Tool, and select 7.0A (that's the version number for Windows 7 SDK with .NET 3.5 SP1 SDK):
|| Sunday, September 06, 2009
Starting with SharpDevelop 3.1 RC2, new projects are created with a "Target CPU" setting of "x86". Previously, projects were created as "AnyCPU". This change affects only new projects; existing projects keep their old setting.
Now, what is the difference between these settings? On 32-bit Windows, there isn't any. But on 64-bit Windows:
- For programs, the "x86" setting means your program will run as a 32-bit process in the "Windows on Windows" emulation layer. AnyCPU programs would run as a native 64-bit process.
- For libraries, the new setting will prevent them from being loaded into 64-bit processes.
Now, restricting stuff to 32-bit doesn't sound like it's the way forward. Why did we do this change?
- If you never test on 64-bit Windows, the new setting ensures your program will run in compatibility mode. This is better than breaking on your users' 64-bit machines because you unknowingly had 32-bit-only code in your program.
- The SharpDevelop debugger does not yet support 64-bit processes.
- Microsoft made the same change: Visual Studio 2010 also creates x86 projects by default.
The main problem with the target processor is that you cannot mix libraries with different processor types. If your program is running as a 64-bit process, it cannot load 32-bit libraries. If your program is running as a 32-bit process, it cannot load 64-bit libraries.
If you have an existing AnyCPU solution and add new projects to it using SharpDevelop 3.1, you should change the target CPU of all new projects back to AnyCPU.
As soon as your program depends on an unmanaged library, you will be forced to pick the corresponding processor type (e.g. SharpDevelop includes 32-bit SQLite and Subversion, so it must run as a 32-bit process). Unless your program is completely managed, AnyCPU is a bad idea because you would have to load a different unmanaged library depending on the process type your program got loaded into.
For purely managed libraries, the situation is different. Here I must recommend using AnyCPU to allow your library to be loaded into any process type. In fact, in the case of SharpDevelop, only the executable (SharpDevelop.exe) is marked as 32-bit; all other libraries are AnyCPU.
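In the project file, this setting lives in the MSBuild PlatformTarget property. A minimal fragment for illustration (standard MSBuild schema; the comments reflect the SharpDevelop layout described above):

```xml
<!-- In the executable's .csproj (e.g. SharpDevelop.exe): force 32-bit -->
<PropertyGroup>
  <PlatformTarget>x86</PlatformTarget>
</PropertyGroup>

<!-- In a purely managed library's .csproj: loadable into any process type -->
<PropertyGroup>
  <PlatformTarget>AnyCPU</PlatformTarget>
</PropertyGroup>
```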
|| Sunday, August 30, 2009
Later today, this year's #d^3 will wind down. It is the first time that the event has run a full four days, but we kept the familiar location: Bad Ischl, the heart of the Salzkammergut (Austria, for those of you thinking in larger geographical terms).
This year's participants are pictured in the photo below (left to right): Tomasz (Gsoc: C++ Backend Binding), Daniel (Senior Developer, Architect), Martin (Gsoc: Debugger Visualizer), Siegfried (Gsoc: Xaml Binding), David (Debugger), Peter (SharpDevelop Reports), Chris (Project Management).
|| Monday, August 17, 2009
In SharpDevelop 220.127.116.1111, I have rewritten the ParserService class.
There are lots of changes.
First, from the point of view of an AddIn implementing IParser:
- SharpDevelop will not create a single instance of your class, but one instance per file.
- Keep in mind that there might be concurrent parser runs for the same file. IParser implementations must be thread-safe.
- The ITextBuffer interface now provides a 'Version' property which allows comparing two versions of the same document and efficiently retrieving the changes between them.
Together, these changes allow for the implementation of incremental parsers. The parser instance can simply store the ITextBufferVersion of the last run in a field and use it to detect the changes in the next version.
We do not plan to replace our existing parsers with incremental parsers right now - but we are working on an incremental XML parser. The XAML code completion support is already taking advantage of this; and once we re-implement XML code folding for SharpDevelop 4, it will likely use this incremental parser, too.
However, a Parser instance should never cache the ICompilationUnit or parts of it: it now is possible for a file to have multiple compilation units (one per project that contains it). See "Support for files shared between multiple projects" below.
Originally, I wanted to give a "no-concurrency guarantee" for IParser implementations, i.e. the ParserService would ensure that there is only a single parser run per file at a time. However, implementing this would have required holding a per-file lock while calling into the parser. The main thread could wait for existing parser runs to finish while the parser implementation waits for the main thread to run an invoked method -> deadlock.
In the end, I decided the IParser implementation should have the responsibility for this - if it needs to avoid concurrent execution, it should use a lock.
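To illustrate that recommendation, here is a minimal sketch. The ITextBuffer.Version and ITextBufferVersion names come from this post; the exact IParser member signatures and the DoParse helper are assumptions, not SharpDevelop's actual API:

```csharp
public class MyLanguageParser : IParser
{
    // One IParser instance exists per file, so this lock only
    // serializes parse runs for a single file.
    readonly object parseLock = new object();

    // Version seen by the previous run; enables incremental parsing.
    ITextBufferVersion lastVersion;

    public ICompilationUnit Parse(IProjectContent projectContent,
                                  string fileName, ITextBuffer fileContent)
    {
        lock (parseLock) {
            // An incremental parser would compare fileContent.Version
            // with lastVersion and reparse only the changed regions.
            // Note: never cache the resulting ICompilationUnit - a file
            // may have one compilation unit per project that contains it.
            lastVersion = fileContent.Version;
            return DoParse(projectContent, fileName, fileContent); // hypothetical helper
        }
    }
}
```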
For someone using the ParserService:
- Methods dealing with assembly references have been moved into the new class 'AssemblyParserService'.
- The remaining methods are now documented, in particular regarding their thread-safety.
- All events are now raised on the main thread. This guarantees that events arrive in the correct order and makes consuming them easier.
- EnqueueForParsing has been renamed to BeginParse and now provides a future (Task<ParseInformation>) to allow waiting for the result. However, to avoid deadlocks, this should not be done by any thread the parser might be waiting for (especially the main thread).
- The ParseFile method does not necessarily parse the snapshot of the file you specify - it might parse a newer version instead (but never an older version). Unlike BeginParse().Wait(), ParseFile() is safe to call from the main thread.
- If a file hasn't changed, calling ParseFile is a no-op.
- The ParseInformation class has been made immutable. Support for 'ValidCompilationUnit' and 'DirtyCompilationUnit' has been removed.
- The GetParser() method allows retrieving the IParser instance for a specific file. This is useful in some special cases for using details of specific IParser implementations.
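The class and method names in the following usage sketch come from the list above; the surrounding code is hypothetical and only illustrates the threading rules:

```csharp
// From a background thread (never from a thread the parser might be
// waiting for, such as the main thread), BeginParse returns a future:
Task<ParseInformation> task = ParserService.BeginParse(fileName, fileContent);
ParseInformation result = task.Result; // blocks until the parse run finishes

// On the main thread, ParseFile is the deadlock-safe choice. It may
// parse a newer snapshot than the one passed in, but never an older one:
ParseInformation info = ParserService.ParseFile(fileName, fileContent);
```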
The API changes here are more limited. Most important is the change to ParseInformation: the existing concept of keeping an old but valid compilation unit during parse errors was dropped because there was no useful upper bound on the age of the valid compilation unit; in some cases the 'valid compilation unit' might be several hours old and would represent an empty file.
All CompilationUnit-properties on ParseInformation now return the same value; the old properties will be marked [Obsolete].
If a parser wants to reuse information from old parse runs because its error recovery is not reliable enough, the parser itself now has to maintain this state and mix it into the new compilation units - doing this is much easier now thanks to 'one IParser instance per file'.
Support for files shared between multiple projects
Also, there has been a major internal change that isn't apparent in the API:
A single file can now have multiple ParseInformation instances - one per project that contains the file. Previously, files used by multiple projects would show up in code completion only for one of the projects. Now the file will be parsed once for each project that contains it.
Because a single IParser instance is used for all these parse runs, it is possible for incremental parsers to avoid redundantly parsing the file. However, a separate ICompilationUnit must be produced for each run because it contains a pointer to the parent project content.
|| Saturday, June 13, 2009
SharpDevelop uses the MSBuild libraries for compilation. But when you compile a project inside SharpDevelop, there's more going on than a simple call to MSBuild.
SharpDevelop does not pass the whole solution to MSBuild, but performs one build for each project. This is done to give SharpDevelop more control - e.g. we can pass properties to MSBuild per-project. For example, when you click "Run code analysis on project X", we'll first build all dependencies of X normally, and then project X with code analysis enabled. A normal MSBuild call (e.g. if you use it on the command line) would perform code analysis on all dependencies of X, too.
When calling MSBuild on a project on the command line, MSBuild will find all referenced projects and build them recursively. We have to prevent this kind of recursive build inside SharpDevelop, as we already took care of the referenced projects. We don't want MSBuild to check referenced projects for changes repeatedly (this would dramatically slow down builds), and of course we need to prevent MSBuild from running stuff like code analysis on the dependencies. Fortunately, the Visual Studio team had the same requirement, so SharpDevelop can simply set the property "BuildingInsideVisualStudio" to true to disable recursive builds.
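On the command line, the per-project build described above would look roughly like this. BuildingInsideVisualStudio appears in this post and RunCodeAnalysis is the standard code-analysis property in Microsoft.Common.targets, but treat the exact invocation as a sketch:

```
msbuild DependencyA.csproj /p:BuildingInsideVisualStudio=true
msbuild DependencyB.csproj /p:BuildingInsideVisualStudio=true
msbuild ProjectX.csproj /p:BuildingInsideVisualStudio=true /p:RunCodeAnalysis=true
```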
However, using that property, MSBuild will call the C# compiler on every build, even if no source files were changed. This is desired in Visual Studio - VS has its own "host compiler" that does change detection. But it's bad for SharpDevelop. As a workaround, we override the MSBuild target responsible for this and fix it. This is an in-memory modification to your project file; it never gets saved to disk.
Another point where we use this kind of in-memory modification is for importing additional .targets files in your project. Some AddIns in SharpDevelop do this to add new features to the build process - for example, the code analysis AddIn.
Now enter parallel builds. It would be nice to be able to call MSBuild on multiple threads and compile projects in parallel. Unfortunately, that's not possible. MSBuild uses the process's working directory as a global variable. In one process, only one build can run at a time. Even worse: if you have MSBuild in your process, all your other code must deal with concurrent changes to the working directory.
In .NET 3.5, Microsoft introduced the "/m" switch in MSBuild. This makes MSBuild create multiple worker processes (usually one per processor), enabling concurrent builds. Unfortunately, this feature is exposed in the MSBuild API only through a single method, and that only allows building several project files from the hard disk in parallel. It does not support in-memory projects; it cannot even compile projects in parallel if there are dependencies between them. Microsoft solves the latter problem by separating the projects in the solution into 'levels' which depend only on projects from the previous level. However, this doesn't mix well with the way building is integrated in SharpDevelop - we don't use levels but do a kind of topological sort on the dependency graph, and not all SharpDevelop projects have to use MSBuild; AddIn authors can choose their own project format and build engine. In the end, I had to create my own build worker executable.
In .NET 4.0, Microsoft created a completely new MSBuild API. The 'level' problem is solved: the new API allows adding new jobs to a running build. It also seems like it is possible to build in-memory projects in parallel now. But as it turned out, in-memory changes only work in the primary build worker (in-process); all other workers load the file from the hard disk and ignore our changes.
Instead of going back to our custom build worker, however, I decided to find a different solution for handling our modifications. Microsoft.Common.targets contains several extension points for adding custom .targets files by setting a property. A good solution for adding a custom target might be setting "CustomAfterMicrosoftCommonTargets" to the name of the .targets file. However, this might conflict with projects that already use this feature, so instead I chose "CodeAnalysisTargets". SharpDevelop comes with its own code analysis targets, so it doesn't hurt if we disable the Microsoft targets.
So in the end, the solution is trivial: create a temporary file containing only our modifications and set a property to tell MSBuild to pick up that file.
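Sketched in MSBuild terms (the file name and contents are illustrative, not SharpDevelop's actual output):

```xml
<!-- Temporary file, e.g. sharpdevelop-modifications.targets -->
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <!-- SharpDevelop's overridden targets and extra imports go here -->
</Project>
```

The build is then started with the CodeAnalysisTargets property pointing at this file, so even out-of-process workers, which load the project from disk, pick up the modifications.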
Why couldn't I simply write the project file, including the modifications, into a temporary file? The MSBuild reserved properties would then point to the temporary file, and custom build events using those properties would likely fail.
|| Friday, May 22, 2009
As you've probably already heard, Microsoft released .NET 4.0 Beta 1 on May 20th.
SharpDevelop 4.0 will be the SharpDevelop version built on top of .NET 4.0. I've just got it running:
Yes, that message really reads: "Compiling is not yet implemented". There were huge changes in MSBuild 4.0 and the parts of SharpDevelop's project system that are talking to MSBuild will have to be rewritten.
The .NET 4.0 work on SharpDevelop is going on in the dotnet4 branch, for which we do not provide builds. The dotnet4 branch will be merged back into trunk as soon as it's good enough that other SharpDevelop contributors can be expected to use it.
But myself, I'm currently stuck using a dysfunctional version of SharpDevelop for development. I cannot use previous versions since they cannot compile for .NET 4.0 and don't have code completion for 4.0 libraries. Using the dotnet4 version of SharpDevelop at least gives me code completion for 4.0 libraries, but it looks like I'll have to compile from the command line for some time.
I cannot even use Visual Studio 2010 as it doesn't want to open ICSharpCode.SharpDevelop.csproj - it says it's an unsupported project format. Even if I recreate that project in Visual Studio, VS doesn't want to open it - looks like a VS bug to me.
|| Thursday, May 14, 2009
It is finally available.
The really newsworthy feature is the profiler - it has been in the works for over a year, and it is the first full-fledged open source profiler for .NET. Siegfried would love to hear your feedback in the forums, especially which features should be next in the pipeline.
|| Saturday, April 04, 2009
The time to send in proposals for Google Summer Of Code is over now.
Now we're busy reading your proposals and trying to decide on a ranking. This is a lot more work than I initially expected - we got lots of proposals during the last three days. Unfortunately, most of the late proposals were of a rather low quality.
In total, we got 44 proposals from 34 students - much more than I expected.
Here's the list of topics proposals were written on. As you can see, most of them come straight from the ideas page.
- 10 proposals on Database tools
- 5 class diagram / UML related
- 4 Edit and Continue / C# background compilation
- 4 ASP.NET
- 4 Refactoring
- 3 C++ support
- 3 Debugger visualizer
- 3 Customizable Shortcuts
- 1 VB 9 code completion
- 1 Pretty printer
- 1 XAML code completion
- 1 Integrated bug tracking
- 1 actually creative idea
- 1 idea completely unrelated to SharpDevelop
- 1 idea I couldn't understand - due to completely broken English and an empty 'Details' section
- 1 proposal that didn't have any idea
But we're looking for students who would like to join the SharpDevelop team; we don't simply want to get some work done. So it's possible that we'll pick multiple students from the same 'category'; and having the only proposal on a much-needed feature doesn't mean you're automatically accepted.
There also were some Java proposals but I'm not sure where they disappeared to. In any case, SharpDevelop is a .NET IDE, not a Java one. There are already good open source Java IDEs available; no need to add Java support to SharpDevelop.
This is our first GSOC and I'm not too sure how we should judge the proposals. A surprisingly large part of them is obviously disqualified because the proposal is missing necessary details / the template isn't filled out completely. And what to do with a student who makes a promising impression but chose a project that isn't really interesting to us; or looks like it's not enough work for GSOC? What about projects that look like they cannot be done in the GSOC time frame; but it might be possible for a good coder and the Bio looks like the student knows what he's doing?
We don't know yet how many slots Google will give to us, so we are as excited as you are :)
|| Friday, April 03, 2009
In SharpDevelop 18.104.22.16848, I changed our Subversion integration to use SharpSVN instead of SvnDotNet.
SharpSVN exposes more Subversion APIs to managed code, which could result in some nice features in the (far) future - for example, "SVN Diff" right inside the text editor.
But the main reason for the upgrade was that SharpSVN supports Subversion 1.6. If you are using TortoiseSVN 1.6, you need to update to SharpDevelop 3.1. The old SvnDotNet does not work with new working copies.
However, the same is true in the other direction: if you use SharpDevelop 3.1, you must update to TortoiseSVN 1.6. No matter which .NET wrapper or client version is accessing a repository, the underlying Subversion library has the unpleasant habit of automatically upgrading working copies. As soon as the Subversion 1.6 library inside SharpDevelop touches your working copy, Subversion 1.5 clients will no longer be able to access it.
You need to update all Subversion clients on your machine at the same time. SharpDevelop contains a Subversion client:
- SharpDevelop 3.0 comes with Subversion 1.5 and requires TortoiseSVN 1.5.
- SharpDevelop 3.1 (starting with revision 3948) comes with Subversion 1.6 and requires TortoiseSVN 1.6.
This entry in the Subversion FAQ describes the problem and offers a working copy downgrade script, in case you decide to go back to a previous SVN client version.
© Copyright 2015 SharpDevelop Core Team