@Raimund - Not sure about the templates, but I'm using this functionality with .NET 7 now in most apps.
Make sure that you have your order right, and that you're not letting the Web Server serve static files (or the files you are trying to handle here).
Hi Rick, 1: great windsurfing. 2: app.UseStaticFiles(..FileProvider = new PhysicalFileProvider(..)..) does not work on .NET 6 or .NET 7 with the new Angular or React templates. On .NET 5 everything works as you described. Crosscheck: create a new ASP.NET app with Angular on .NET 6 and add UseStaticFiles pointing to an external folder -> it will fail. Regards, Raimund
@Jon - that's fair for C++ or other low level languages. Not so much for a runtime based language like .NET though that handles memory allocation for you and is unlikely to make assumptions on buffer size. For those I think the default should be enabled with an option to turn it off in the manifest.
Almost 7 years later, this article continues to crush it! Excellent article, thanks for writing this.
Re: Why isn't it on by default
It can break things. Some applications would always allocate 260 byte buffers for filenames and didn't bother with bounds checks since at the time the Windows API would never return more than 260 characters. Windows cares about compatibility a lot.
Hey, thanks a lot for this. As you say, there are not many other posts on the matter. I'm also experiencing this binding loop problem with .NET 4.8. It occurs for me when adding elements to a DataGrid with custom columns when the window is smaller than certain values. I can work around it, but it is really scary to run into bugs like these.
@Jakob - No that's not possible - you can't pass back an object by reference to JavaScript. You can only pass back JSON that gets deserialized on the client.
And that's a good thing. Potentially big security issue. You can proxy operations to .NET and execute in .NET then return the results as long as they are serializable.
I think it would be really cool to have an HTA-like runtime that uses WebView2. I was able to put a WinForms app together that used the old WebBrowser class, and found out I could make C# functions available to script via the window.external object with [ComVisible(true)].
I could even recreate the new ActiveXObject() method that I know from HTAs:
public object NewActiveXObject(string progId)
{
    return Activator.CreateInstance(Type.GetTypeFromProgID(progId));
}
I tried to add that function to your script, and then from JavaScript do:
var fso = window.chrome.webview.hostObjects.mm.NewActiveXObject("Scripting.FileSystemObject");
But that did not work. Do you know if it's somehow possible? WebView2 has the window.external object as well, but I'm not sure if it can be used in the same manner as with the WebBrowser class.
Damn useful, helpful, and works. Thanks for the explanation, fix and the wider context.
@Mitch - didn't know. Will take a look and see. Using ~/ syntax would be helpful.
@Rick Thanks for collecting the code...
Is there a reason why you did not use razorViewEngine.GetView() instead of razorViewEngine.FindView()? It appears that .GetView() handles '~/' paths.
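For context, a minimal sketch of what the .GetView() call looks like - the view path here is a made-up example:

// Sketch: IRazorViewEngine.GetView() resolves app-relative '~/' paths directly.
// "~/Views/Shared/_MyPartial.cshtml" is a hypothetical path.
var viewResult = razorViewEngine.GetView(
    executingFilePath: null,
    viewPath: "~/Views/Shared/_MyPartial.cshtml",
    isMainPage: false);

if (!viewResult.Success)
    throw new InvalidOperationException("View not found");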
@David - your function to call via WebView has to be global in the window scope. You are using var updateData which makes it private. Remove the var or use window.updateData for the name and then you can call it.
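To illustrate the difference (the function body is just a placeholder):

// Script-private - ExecuteScriptAsync can't see this:
// var updateData = function (data) { ... };

// Global on window - callable from ExecuteScriptAsync:
window.updateData = function (data) {
    console.log(data);
};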
I have a simple page:
<!DOCTYPE html>
<html lang="en" dir="ltr">
<head>
    <meta charset="utf-8">
</head>
<body>
    <div id="tvchart"></div>
</body>
<script src="https://unpkg.com/lightweight-charts/dist/lightweight-charts.standalone.production.js"></script>
<script type="text/javascript" src="IndexWithMarkers6.js"></script>
<script>
    var updateData = function (data) {
        console.log(data);
    };
</script>
</html>
I call the 'updateData' using
string js = "updateData('yyy')";
await webView.CoreWebView2.ExecuteScriptAsync(js);
But it did not work. If I delete the two js file references, it works. So I realized that the key is how to define a global function when there are js file references. Can anyone tell me how to make it work? Thanks.
After a bit of digging...
// Sven
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using Microsoft.AspNetCore.Mvc.Razor;
using Microsoft.AspNetCore.Mvc.Rendering;

public class PartialViewLocationExpander : IViewLocationExpander
{
    public IEnumerable<string> ExpandViewLocations(ViewLocationExpanderContext context,
                                                   IEnumerable<string> viewLocations)
    {
        if (!context.Values.TryGetValue("FromView", out var fromView))
        {
            return viewLocations;
        }

        var folder = Path.GetDirectoryName(fromView) ?? "/";
        var name = context.ViewName;
        if (!name.EndsWith(".cshtml", StringComparison.OrdinalIgnoreCase))
        {
            name += ".cshtml";
        }
        var path = Path.Combine(folder, name).Replace('\\', '/');
        return viewLocations.Concat(new[] { path });
    }

    public void PopulateValues(ViewLocationExpanderContext context)
    {
        var ctx = context.ActionContext as ViewContext;
        if (ctx == null)
        {
            return;
        }

        var path = ctx.ExecutingFilePath;
        if (!string.IsNullOrEmpty(path))
        {
            context.Values["FromView"] = path;
            context.Values["ViewName"] = context.ViewName;
        }
    }
}
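In case it helps anyone else, an expander like this gets registered via RazorViewEngineOptions - a sketch of the startup wiring:

// Register the expander so Razor consults it when resolving view locations
services.Configure<RazorViewEngineOptions>(options =>
{
    options.ViewLocationExpanders.Add(new PartialViewLocationExpander());
});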
I totally agree. I was surprised that .NET Core ViewComponents do not resolve partial views in the "current location". Adding a hardcoded path for every component (...new string[] { "/Views/ShoppingCart/...) does not work for me - we have so many components that it seems unworkable. However, I cannot figure out a more programmatic way of fixing this.
I'm so disappointed nobody bothered to post a hysterical "right click deploy is evil" rant.
The nuget.config works for non-SDK projects too -- I've used that for years. The direct project property requires the SDK project.
One downside of this sort of thing (which is why publishing a feed is preferred) is that it requires committing binary packages to your source control, which is generally frowned on -- especially if you're releasing updates frequently. It also means you can end up with multiple copies of these dependency packages strewn around multiple repositories, and it becomes less obvious when something is outdated since it no longer has a central feed that's up to date (so every copy will think it's the latest).
For in-house dev I've used a shared folder as a NuGet feed combined with nuget.config -- but of course that only works for people who have access to that shared folder. There are also quite a few private feed servers around for wider usage.
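For reference, a minimal nuget.config pointing at a folder feed might look like this - the paths are placeholders:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <!-- hypothetical local-folder and shared-folder feeds -->
    <add key="LocalPackages" value="./LocalPackages" />
    <add key="TeamShare" value="\\server\share\nuget" />
  </packageSources>
</configuration>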
I'm looking at implementing a similar dual token/cookie authorization, but for OIDC; i.e. rather than have my own authorize endpoint, the token or cookie originates in an external OIDC provider. It seems pretty straightforward, at least in theory, to dynamically choose token or cookie for each request as in your example, and then to also tie into the OIDC middleware. Any wisdom or gotchas on this approach that you can pass on would be greatly appreciated. Thanks!
One downside of TransactionScope is that you cannot scope it to a specific data context. It tries to scope all the contexts with all the different connection strings, which escalates things to DTC. DTC is not available in Azure App Services, and that was a huge problem for us. We had to resort to the old Transaction way of doing things.
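For what it's worth, this is roughly the classic pattern we fell back to - a transaction tied to a single connection instead of an ambient scope (the connection string and SQL are placeholders):

using var connection = new SqlConnection(connectionString);
connection.Open();

// The transaction is bound to this one connection - no ambient scope, no DTC escalation
using var transaction = connection.BeginTransaction();
try
{
    using var cmd = new SqlCommand("UPDATE Customers SET Active = 1", connection, transaction);
    cmd.ExecuteNonQuery();
    transaction.Commit();
}
catch
{
    transaction.Rollback();
    throw;
}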
@Vince - good question. I haven't checked with 7.0 but with 6.0 - raw throughput processing with the IIS module under heavy load was roughly 50% faster than using Kestrel for me in local testing.
I think anytime you have to make an extra http call/hop to pass request data on, you're going to lose some performance in the transport layer, so I'm not surprised at all to see this. You're basically doubling the overhead of connection management, as you are doing it in two places effectively (i.e. the IIS proxy and then again with Kestrel).
All that said - in terms of real per-request performance for typical application hits, I think the overhead of all of this is negligible for all but the most high-performance types of services, where request overhead is a significant part of every request - not likely in most typical business applications. So you can use either and likely see very little - if any - impact on performance. You do end up with extra processes and some slight system behavior differences due to the different hosting processes.
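For anyone looking for the actual switch, the hosting model is controlled via a project file property (it can also be set in web.config); a minimal example:

<PropertyGroup>
  <!-- InProcess hosts inside the IIS worker process; OutOfProcess proxies to Kestrel -->
  <AspNetCoreHostingModel>InProcess</AspNetCoreHostingModel>
</PropertyGroup>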
It's 2022 now and .NET 7 just got released. I'm hearing about all the improvements in Kestrel. Is InProcess (IIS + IISHttpServer) still more performant than OutOfProcess (IIS + Kestrel)? Or is the IIS reverse proxy just too much of a bottleneck to make OutOfProcess worth it?
Hi, sorry I didn't - it was a bit TL;DR.
My point was: try the app_offline approach first, and only when that fails proceed with the shadow copy.
In my case, I use local IIS for debugging, and it was driving me crazy that IIS would lock the files and make it impossible to build the website. Having the pre-build script was enough, though.
@Martin - did you actually read the post?
WebDeploy has built-in support for App_Offline.htm, but it will not always work to unload the application. It is not reliable for unloading a running application, as there can be operations still processing longer than the timeout allows.
Nor is this any simpler even if it did work, because you now have to copy the file using some other mechanism outside of WebDeploy. The whole point of WebDeploy is to automate publishing so you don't have to go through many steps to update the server binaries.
Hi Rick.
I ran into this issue of locked files when developing a WebAPI in .NET6.
I was advised to look into Shadow Copy - but managed to find a much simpler solution:
Simply by leveraging the App_Offline.htm functionality of IIS.
In the root of my project I have the file ready; then in the pre-build script I simply copy it to the destination like this:
copy "\((ProjectDir)App_Offline.htm" "\)(TargetDir)"
This has proven to be enough to fix the locked-file issue for me. The App_Offline.htm shuts down the App Pool, and the file is automatically removed when the files are overwritten in the publish phase.
Hi,
Two years late, but anyway:
As I needed it to create a template, here is the PowerShell to get a list of "enabled" features:
Get-WindowsOptionalFeature -Online | Where-Object { $_.State -ne "Disabled" } | Select FeatureName | Format-Table
Have you looked at Task.Yield()? https://learn.microsoft.com/en-us/dotnet/api/system.threading.tasks.task.yield?view=net-6.0 TLDR: it forces the continuation to run asynchronously when awaited, so it might be a little "cleaner" than await Task.Delay(1)... though that's kind of splitting hairs over a millisecond delay.
Also, I don't see any (Task).ConfigureAwait(false). https://learn.microsoft.com/en-us/dotnet/api/system.threading.tasks.task.configureawait?view=net-6.0 TLDR: like many/most UI frameworks, WPF has its own SynchronizationContext... don't quote me on this, but I'm 98.732197% sure that .NET (the artist formerly known as .NET Core) dropped the need to call .ConfigureAwait(false) to prevent Task from auto-syncing back to the UI thread; however, .NET Framework (the behemoth we've loved for the last 20 years) will do you that favor in sneaky/particular contexts (WPF/WinForms/various testing AppDomains/etc.). IMHO, it's the most annoying "feature" in the history of .NET - you basically need to always add .ConfigureAwait(false) if you don't want the runtime doing that favor for you.
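A quick sketch of both ideas side by side - SomeWorkAsync is a made-up method:

// Queues the rest of the method as a continuation instead of
// continuing synchronously - no timer involved:
await Task.Yield();

// Yields too, but goes through a timer with roughly millisecond (or coarser) resolution:
await Task.Delay(1);

// Opt out of resuming on the captured (UI) SynchronizationContext:
await SomeWorkAsync().ConfigureAwait(false);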
Also wanted to say "Thanks" for your posts through the years! They have helped me out tremendously for a long time.
Thanks a lot, Rick! My case is two WebView2 controls in a TabControl. The first (selected by default) is initialized; the second never is. Your post confirmed to me that Visibility is the problem. Switching the Visibility is a solution, but I think MS should add a callback for initialization, or some kind of default options. The problem is that the default userDataFolder is not accessible when the app is installed per-computer and the user is a restricted one.
Best regards, Mikhail.
@Esty - yes it works with .NET Framework, but only if you use an SDK style project. You can target net472, net48 etc. and with those projects it works.
There might be some way to make this work with old projects too, but it probably requires some custom MSBuild tasks.
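For example, a minimal SDK-style project file targeting full framework looks something like this:

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <!-- SDK-style projects can target .NET Framework monikers directly -->
    <TargetFramework>net48</TargetFramework>
  </PropertyGroup>
</Project>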
Thank you! Exactly what I was looking for - a local nuget.config in the local solution root folder! Just one question, please: will it work with .NET Framework too?
I have also had some success using secure tunneling software like ngrok during development to access my local site over https. Here's an example blog post: https://adamtheautomator.com/ngrok/
Just got hit by this one while migrating a project from System.Data.SqlClient to Microsoft.Data.SqlClient. I'm fine with the change to the default, but that error message could be a lot better. Instead of throwing a certificate exception, SqlClient should recognize that since the server returned no certificate and the connection string expects encryption, the mismatch is the issue, not the (non-existent) remote certificate.
I faced the same problem for days, trying to figure out why a new Web API 2.0 with GET, POST, and PUT methods would not accept my PUT calls and returned 404s for them. Wish I had found this post sooner. I added the ExtensionlessUrlHandler-Integrated-4.0 handler to web.config and it worked immediately. Great job and thanks for posting such valuable info. I really enjoy your site - always great info on it. Thanks!
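In case it saves someone a search, the web.config registration I'm referring to is the standard ExtensionlessUrlHandler entry, roughly:

<system.webServer>
  <handlers>
    <remove name="ExtensionlessUrlHandler-Integrated-4.0" />
    <add name="ExtensionlessUrlHandler-Integrated-4.0"
         path="*." verb="*"
         type="System.Web.Handlers.TransferRequestHandler"
         preCondition="integratedMode,runtimeVersionv4.0" />
  </handlers>
</system.webServer>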
This was way easier for me:
http://www.hanselman.com/blog/breaking-all-the-rules-with-wcf
Another option would be to set the Build Action to None on the items you don't wish to publish. Unfortunately, you can't set it on a directory, but you can multi-select the individual items in the directory and set the Build Action on them all at once.
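If you're on an SDK-style project, a rough equivalent in the project file might be something like this (the folder path is a placeholder):

<ItemGroup>
  <!-- keep the files in the project but leave them out of the publish output -->
  <Content Update="wwwroot\uploads\**" CopyToPublishDirectory="Never" />
</ItemGroup>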
@Wolfgang - I guess it depends on what you need to do and what assemblies you need to have loaded. In all of my scenarios I'm using it for scripting purposes so any assemblies needed are already loaded and I can assign them from the loaded assembly list plus whatever is explicitly specified. The big issue in .NET Core is to know what assemblies are needed due to the extremely atomized nature of the 'runtime' assemblies compared to full framework.
If you want to be sure, the reference assemblies ("refs") are probably safer, but it should be easy to test whether manually added references work - it either works or it doesn't.
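A sketch of what assigning references from the loaded assembly list can look like, using Roslyn's MetadataReference (illustrative only):

using System;
using System.Linq;
using Microsoft.CodeAnalysis;

var references = AppDomain.CurrentDomain.GetAssemblies()
    // dynamic assemblies have no file on disk to reference
    .Where(a => !a.IsDynamic && !string.IsNullOrEmpty(a.Location))
    .Select(a => MetadataReference.CreateFromFile(a.Location))
    .ToList();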
@Rick about "They are not available on a non-dev machine": When using the project property PreserveCompilationContext or PreserveCompilationReferences, this works when using dotnet publish: the "refs" subdir is then added to the publish dir and must be copied to the target machines. "Only" 10 MB and 200 files.
In our real app, I ran into strange compilation errors as reported in the linked Github issue, so I finally found out about the reference assemblies.
It's almost 2023 and this article could have been written yesterday.
I have a Maui app that uses HttpClient, and it works great on Windows and iOS, but the Android client keeps failing with a 401 error. I have tried using the AndroidHandler and everything else I can find with no success. I heard that .NET 6 has this issue and .NET 7 was supposed to fix it; however, even with the preview version it still fails. I have verified that I have all of the Android permissions for this task as well. I am using NTLM authentication. Any advice will be greatly appreciated.
@Wolfgang, yes that's by design if you don't specify the reference assemblies and use the default imports. The reference assemblies add a large disk footprint because they have to be present at runtime for dynamic compilation to work, and some of us don't want to add that to our applications. They are not available on a non-dev machine.
There are different ways to specify the assembly list, and using the reference assemblies is one of them. There's an entire section in the post above on this topic.
I tend to avoid localhost if possible, and prefer a local.domain.com approach (in line with test.domain.com, staging.domain.com, www.domain.com, etc.).
To that end, I created my own Certificate Authority (CA) using OpenSSL and have a PowerShell script for easily generating and installing certificates on my development box.
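The OpenSSL side of that boils down to something like the following - the names, validity periods, and the san.cnf file (containing the subjectAltName entries) are placeholders from my setup:

# Create the CA key and a self-signed CA certificate
openssl genrsa -out ca.key 4096
openssl req -x509 -new -key ca.key -sha256 -days 3650 -out ca.crt -subj "/CN=My Local Dev CA"

# Create a key and CSR for the dev site, then sign it with the CA
openssl genrsa -out local.key 2048
openssl req -new -key local.key -out local.csr -subj "/CN=local.domain.com"
openssl x509 -req -in local.csr -CA ca.crt -CAkey ca.key -CAcreateserial `
    -sha256 -days 825 -out local.crt -extfile san.cnf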
Really great article - helped me a lot!
But I think this line of code might cause trouble:
var rtPath = Path.GetDirectoryName(typeof(object).Assembly.Location)...
This picks a runtime assembly from "C:\Program Files\dotnet\shared\Microsoft.NETCore.App\6.0.9". For compilation, it seems to be better to use the "Reference Assemblies" found e.g. at "C:\Program Files\dotnet\packs\Microsoft.NETCore.App.Ref\6.0.9\ref\net6.0".
See e.g. here for a discussion about this: https://github.com/dotnet/core/issues/2082
For the same reasons, you should also not add a reference to "System.Private.CoreLib.dll".
By adding this line to the project file, Visual Studio copies the reference assemblies to the bin\Debug\net6.0-windows\refs subdirectory, and you can load them from there:
<PreserveCompilationContext>True</PreserveCompilationContext>
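Loading from that refs folder is then straightforward - a sketch, assuming the Roslyn MetadataReference API and the default output layout:

// "refs" lands next to the binaries when PreserveCompilationContext is set
var refsPath = Path.Combine(AppContext.BaseDirectory, "refs");
var references = Directory.GetFiles(refsPath, "*.dll")
    .Select(path => MetadataReference.CreateFromFile(path))
    .ToList();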
Seeing this post again after 14 years, as I came across my 2009 comment in this article. It looks like the world has moved far on from here, but the concepts are still the same with new terminology. We are still working with .NET, and our recent free online tool site Doozy Tools still uses LINQ and DataContext. We are more reliant on JavaScript, which is now able to process PDF files without uploading them.
@Thomas - That seems unnecessary, since the app can respond on the first hit with an HSTS header and redirect. I get that the immediate redirect is one extra step for a potential attack vector, but there will always be at least one first request against the site anyway, so that risk is there regardless of the caching.
Agreed though - I think the only place where this is really an issue these days is a local dev site. So it sure would make sense for HSTS to be more lenient with local machine IP addresses (or even just localhost).
Rick, thank you so much! I crashed my website after publishing because the App_Data folder was overwritten by Web Publish. I had only used this tag: '<ExcludeApp_Data>true</ExcludeApp_Data>'. I have no idea why this bug is not being fixed by the team - using App_Data is not really anything special. Very crazy bug!
HSTS headers should not be cached and certainly not beyond the scope of the active session
Actually, this is kind of the point of HSTS... It ensures the website is always accessed via HTTPS after the first visit (or even on the first visit, thanks to the HSTS preload list). But maybe there should be an exception for localhost, since it's typically used for development purposes...
Just implemented this change to create the context inside my for loop... shaved hours off the processing time. I'm iterating ~24k times. My best estimate is I went from 6+ hours to 8 minutes. Now I'm wondering if I should iterate some number of times in a single context before recreating it. In other words, instead of 1 context per 1 iteration, do 1 context per 50 iterations, for example. Is there even more speed to squeeze out of this? Anyone tried tuning like this?
Thanks for the post!
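In case anyone wants to experiment with that idea, a sketch of the batched version - MyDbContext, Things, and the batch size are all made up:

const int batchSize = 50;
var context = new MyDbContext();
for (int i = 0; i < items.Count; i++)
{
    context.Things.Add(items[i]);
    if ((i + 1) % batchSize == 0)
    {
        context.SaveChanges();
        context.Dispose();
        // a fresh context keeps the change tracker small
        context = new MyDbContext();
    }
}
context.SaveChanges();
context.Dispose();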
@Thom - great tip - I have to try this with WinAcme... that would be a great and relatively easy way to create a local development certificate in general.
Another workaround using LetsEncrypt that doesn't involve tunneling to your local site, as long as you have access to the site's DNS: let's say you're developing for a public site named www.whatever.com. Create a local development site with the URL development.whatever.com, then use the LetsEncrypt verification method of looking up a specific TXT record for the subdomain. When LetsEncrypt prompts you to add the specific txt string to the site's DNS TXT record, do so. It'll then add the SSL cert to your local site without ever having to allow external access to the site. I use https://www.win-acme.com/ to do this, but I suspect other methods also support it.
@bitbonk - it'll be checked in and pull down with the repo. This could be a private repo, so not broad exposure. But even for a public repo, in some cases I want to distribute a binary as part of my project, but not necessarily have it be a general-use library that others would use.
I realize people could still do that in the repo scenario, but it's much less likely than if it were published or available in source code.
If you reference a library that is not public in a public project, how is anyone who pulls down the public project repo going to be able to build it? They won't have access to that private library, and the project therefore won't build.